This AI Prediction was made by Kaj Sotala in 2011.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and transformative AI (AI that could significantly impact society and the world).
n/a
Opinion about the Intelligence Explosion from Kaj Sotala:
I meant to say that it does not seem unreasonable to me that an AGI might take five years to self-improve. 1% does seem unreasonably low. I’m not sure what probability I would assign to “superhuman AGI in 5 years”, but under say 40% seems quite low.
Flycer’s explanation for better understanding:
An AGI taking five years to self-improve does not seem unreasonable. A probability of 1% for superhuman AGI within five years is unreasonably low; in Sotala's view, even probabilities below roughly 40% seem too low.
The future of humanity with AGI / HLMI / transformative AI:
What I suspect – and hope, since it might give humanity a chance – to happen is that some AGI will begin a world-takeover attempt, but then fail due to some epistemic equivalent of a divide-by-zero error, falling prey to Pascal's mugging or something. Then again, it might fail, but only after having destroyed humanity in the process.
Flycer’s Secondary Explanation:
An AGI may attempt to take over the world, but could fail due to an epistemic error. It is possible that the AGI could still cause destruction before failing. There is a chance that humanity could be saved if the AGI fails early enough.
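The "epistemic divide-by-zero" failure mode Sotala alludes to, Pascal's mugging, trips up a naive expected-value maximizer by letting a vanishingly small probability of an astronomically large payoff dominate its decisions. The toy calculation below is a minimal sketch of that dynamic; the numbers and the expected_value helper are illustrative assumptions, not anything taken from the source.

```python
# Toy illustration of Pascal's mugging for a naive expected-value maximizer.
# All probabilities and payoffs are made up purely for illustration.

def expected_value(probability: float, payoff: float) -> float:
    """Expected utility of a single outcome under naive expected-value reasoning."""
    return probability * payoff

# Option A: a mundane action with a modest, near-certain payoff.
mundane = expected_value(probability=0.99, payoff=100)    # 99.0

# Option B: a "mugger" promises an astronomically large payoff,
# with an astronomically small probability of delivering it.
mugging = expected_value(probability=1e-20, payoff=1e30)  # 1e10

# The naive maximizer picks the mugging even though the promised payoff
# is effectively never delivered -- the kind of epistemic failure the
# quoted scenario speculates an AGI might stumble into.
print(f"Mundane option:   {mundane:.1f}")
print(f"Pascal's mugging: {mugging:.1f}")
print("Chosen:", "mugging" if mugging > mundane else "mundane")
```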
About:
Kaj Sotala is a researcher in artificial intelligence safety, focusing on developing methods to ensure that future AI systems remain safe and beneficial. He received his M.Sc. in Computer Science from the Helsinki University of Technology and his Ph.D. in Cognitive Science from the University of Helsinki.

Sotala's research interests include AI safety, existential risks, and the social implications of AI. He has published several papers on these topics, including "Defining AI Ethics and Safety" and "Responses to Catastrophic AGI Risk: A Survey." Sotala is also an active member of the AI safety community and has organized workshops and conferences on the topic.

In terms of AI predictions, Sotala has emphasized the need for AI researchers and policymakers to take the potential risks of AI seriously. He argues that AI has the potential to cause catastrophic harm if not developed and deployed carefully. Sotala has also advocated for the development of international agreements and regulations to govern AI research and deployment.
Source: https://www.lesswrong.com/posts/j5ComXKhingWjqSgA/q-and-a-with-michael-littman-on-risks-from-ai
Keywords: AGI, Self-improvement, Pascal’s Mugging