This AI Prediction was made by Kristinn R. Thorisson in 2012.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (artificial general intelligence: AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence: AI that matches or exceeds human performance), and transformative AI (AI that could profoundly impact society and the world).
Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?
Kristinn R. Thorisson: 10%: 2025; 50%: 2045; 90%: 2080
Opinion about the Intelligence Explosion from Kristinn R. Thorisson:
Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?
Kristinn R. Thorisson: I suspect that the task of making the next leap in building an AI becomes exponentially more difficult as intelligence grows, so if it took 100 years to develop a human-level (measured roughly) AI system from the time when software was automatically running on a computer (around the middle of the 20th century), then the next milestone of roughly equal significance will be reached roughly 100 years later.
Flycer’s explanation for better understanding:
Kristinn R. Thorisson predicts that there is a 10% chance of developing artificial intelligence that is as good as humans at science, mathematics, engineering, and programming by 2025, a 50% chance by 2045, and a 90% chance by 2080. He also believes that building an AI that is substantially better than humans at these activities becomes exponentially more difficult as intelligence grows, and the next milestone of equal significance will likely take roughly 100 years to reach.
The future of humanity with AGI / HLMI / transformative AI:
What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created)
Kristinn R. Thorisson: Very low, approaching zero.
Flycer’s Secondary Explanation:
Kristinn R. Thorisson believes that the probability of human extinction within 100 years due to AI capable of self-modification is very low, approaching zero. Even if such an AI cannot be proven non-dangerous, he still considers the likelihood of extinction extremely low.
About:
Kristinn R. Thorisson is an AI researcher and entrepreneur who has been involved in several projects related to the development of intelligent systems and AI technologies. He is known for his work on the ALICE project, a large-scale research initiative focused on developing intelligent agents and systems. He has also been involved in several other projects related to AI and cognitive science, including the development of natural language processing systems and the study of human-machine interaction.
Source: http://kruel.co/2012/08/15/qa-with-experts-on-risks-from-ai-5/
Keywords: artificial intelligence, human-level, self-modification