This AI Prediction was made by Stuart Russell in 2015.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (artificial general intelligence, AI that can perform a wide range of tasks at a human level), HLMI (high-level machine intelligence, AI that matches or surpasses human performance across most tasks), and transformative AI (AI that could significantly impact society and the world).
Some have argued that there is no conceivable risk to humanity for centuries to come, perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.
Stuart Russell’s opinion about the Intelligence Explosion:
Flycer’s explanation for better understanding:
Some have argued that there is no risk to humanity for centuries to come. However, history shows that such confident assertions can be overturned very quickly. For example, the interval between Rutherford’s claim that atomic energy could never feasibly be extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than 24 hours.
The future of humanity with AGI / HLMI / transformative AI:
I don’t think this is an easy problem in practice. Humans are inconsistent, irrational, and weak-willed, and human values exhibit, shall we say, regional variations. Moreover, we don’t yet understand the extent to which improving the decision-making capabilities of the machine may increase the downside risk of small errors in value alignment. Nevertheless, there are reasons for optimism.
Flycer’s Secondary Explanation:
The problem of aligning machine values with human values is difficult due to human inconsistencies, irrationality, and regional variations in values. Improving machine decision-making capabilities may also increase the risk of small errors in value alignment. Despite these challenges, there are still reasons for optimism.
Stuart Russell is a renowned computer scientist and professor of Electrical Engineering and Computer Science at the University of California, Berkeley. He is also a faculty member of the Berkeley Artificial Intelligence Research (BAIR) Lab and the Center for Human-Compatible Artificial Intelligence (CHAI).

Born in Portsmouth, England, in 1962, Russell received his Bachelor’s degree in Physics from Oxford University in 1982 and his Ph.D. in Computer Science from Stanford University in 1986. He then joined the faculty at Berkeley, where he has been teaching and conducting research for over three decades.

Russell’s research interests include artificial intelligence, machine learning, robotics, and computational biology. He is best known for his contributions to the field of AI, particularly his work on probabilistic reasoning and decision-making under uncertainty. He is also a co-author of the leading textbook on AI, “Artificial Intelligence: A Modern Approach,” which has been translated into 13 languages and is widely used in universities around the world.

In addition to his academic work, Russell is a Fellow of the Association for Computing Machinery (ACM) and the American Association for Artificial Intelligence (AAAI). He has received numerous awards and honors for his contributions to the field of AI, including the ACM/AAAI Allen Newell Award, the IJCAI Computers and Thought Award, and the IEEE Intelligent Systems AI’s 10 to Watch Award.

Russell is a sought-after speaker who has given talks at conferences and events around the world. He is also a frequent commentator on the societal implications of AI and has written several articles and op-eds on the subject. His work has been featured in numerous media outlets, including The New York Times, The Wall Street Journal, and The Economist.

Overall, Stuart Russell is a leading figure in the field of AI and has made significant contributions to the development of intelligent systems that are safe, reliable, and beneficial to humanity.
Keywords: risk, humanity, machine