This AI Prediction was made by Michael Littman in 2011.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (artificial general intelligence: AI able to perform a wide range of tasks at a human level), HLMI (human-level machine intelligence: AI that matches human performance across most cognitive tasks), and transformative AI (AI that would profoundly change society and the world).
Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?

Michael Littman:
10%: 2050 (I also think P=NP in that year.)
50%: 2062
90%: 2112
Opinion about the Intelligence Explosion from Michael Littman:
What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours, days, or fewer than five years?

Michael Littman: epsilon (essentially zero). I'm not sure exactly what constitutes intelligence, but I don't think it's something that can be turbocharged by introspection, even superhuman introspection.
Flycer’s explanation for better understanding:
Michael Littman assigns a 10% chance to the development of roughly human-level machine intelligence by 2050, a 50% chance by 2062, and a 90% chance by 2112. He assigns a negligible probability to a human-level AGI self-modifying its way up to massive superhuman intelligence within hours, days, or even a few years, because he believes intelligence is not something that can be turbocharged by introspection, even superhuman introspection.
The future of humanity with AGI / HLMI / transformative AI:
What probability do you assign to the possibility of human extinction as a result of badly done AI?

Michael Littman: epsilon, assuming you mean P(human extinction caused by badly done AI | badly done AI). I think complete human extinction is unlikely, but, if society as we know it collapses, it’ll be because people are being stupid (not because machines are being smart).
Flycer’s Secondary Explanation:
Michael Littman assigns a very low probability to human extinction caused by badly done AI. In his view, if society as we know it were to collapse, it would be because of human stupidity rather than machine intelligence, and complete human extinction remains unlikely.
About:
Michael Littman is a professor of Computer Science at Brown University. He obtained his Ph.D. in Computer Science from Brown University in 1996. Littman’s research interests lie at the intersection of artificial intelligence, machine learning, and decision making under uncertainty. His work has contributed significantly to the development of reinforcement learning, a subfield of machine learning in which an agent learns how to act in an environment by receiving feedback in the form of rewards.

Littman is also known for his research on computational sustainability, where he applies AI and machine learning techniques to address environmental problems. He has received numerous awards for his research, including the International Joint Conference on Artificial Intelligence (IJCAI) Computers and Thought Award in 2007.

In terms of AI predictions, Littman has expressed concerns about the societal impact of AI, especially in the context of autonomous weapons. In an interview with Brown University, Littman stated that “the ethical implications of autonomous weapons are profound, and we need to think carefully about how to limit their use.” He has also emphasized the need for transparency and accountability in AI systems to prevent their misuse.
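To make the reinforcement-learning setting described above concrete, here is a minimal tabular Q-learning sketch in Python. It is purely illustrative and not drawn from Littman's own work; the six-cell corridor environment, the reward of 1 at the rightmost cell, and the hyperparameters (ALPHA, GAMMA, EPSILON) are all invented for this example.

# Minimal tabular Q-learning sketch: an agent on a 1-D corridor of 6 cells
# learns to walk right toward a goal. The environment, reward scheme, and
# hyperparameters below are invented purely for illustration.
import random

N_STATES = 6          # cells 0..5; cell 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA = 0.1           # learning rate
GAMMA = 0.9           # discount factor
EPSILON = 0.1         # exploration rate

# Q-table: estimated return for each (state, action index) pair, initialized to 0
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True   # reward only for reaching the goal cell
    return nxt, 0.0, False

def greedy(state):
    """Index of the highest-valued action, breaking ties at random."""
    best = max(Q[state])
    return random.choice([i for i, q in enumerate(Q[state]) if q == best])

for episode in range(500):
    state, done, steps = 0, False, 0
    while not done and steps < 1000:
        # epsilon-greedy action selection
        a_idx = random.randrange(len(ACTIONS)) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, ACTIONS[a_idx])
        # Q-learning update: nudge the estimate toward reward + discounted best next value
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][a_idx] += ALPHA * (target - Q[state][a_idx])
        state, steps = nxt, steps + 1

# After training, the greedy policy should choose +1 (move right) in every non-goal cell.
print([ACTIONS[greedy(s)] for s in range(N_STATES - 1)])

The agent improves its value estimates from reward feedback alone, which is the core idea of reinforcement learning as summarized in the biography above.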
https://www.youtube.com/watch?v=c9AbECvRt20
Source: https://www.lesswrong.com/posts/j5ComXKhingWjqSgA/q-and-a-with-michael-littman-on-risks-from-ai
Keywords: Machine Intelligence, Human Level AGI, Human Extinction