This AI Prediction was made by Vladimir Nesov in 2009.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (artificial general intelligence, AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence, AI that matches or exceeds human performance across most tasks), and Transformative AI (AI whose deployment would profoundly change society and the world)
Vladimir Nesov's opinion about the Intelligence Explosion:
Flycer’s explanation for better understanding:
The future of humanity with AGI / HLMI / transformative AI:
I have one of the lowest estimates, 30% for not killing off 90% of the population by 2100. Most of it comes from Unfriendly AI, with estimate of 50% of AGI foom by 2070, or 70% by 2100 […] On second thought, I should lower my confidence from these explicit models, they seem too much like planning. Make that 50%.
Flycer’s Secondary Explanation:
Nesov estimates only a 30% chance that humanity avoids losing 90% of its population by 2100. Most of that risk comes from Unfriendly AI: he assigns a 50% chance of an AGI "foom" (rapid recursive self-improvement) by 2070, and 70% by 2100. On reflection, he judged these explicit models to be too much like planning, and revised his survival estimate upward to 50%.
Vladimir Nesov is a mathematician and AI researcher who has worked on a variety of topics related to artificial intelligence and game theory. He is a long-time contributor to the Alignment Forum, a community dedicated to developing and discussing approaches to ensuring that advanced AI systems align with human values. Nesov has also contributed to the LessWrong community and blog, which focus on topics such as rationality, AI safety, and decision theory. He has written extensively on these subjects and is known for his rigorous, analytical approach to AI research.
Keywords: Unfriendly AI, AGI, Planning