This AI Prediction was made by Larry Wasserman in 2012.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and Transformative AI (AI that could significantly impact society and the world).
Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Larry Wasserman:
10%: 2025
50%: 2040
90%: 2070
Opinion about the Intelligence Explosion from Larry Wasserman:
What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Larry Wasserman:
hours: 10%
days: 50%
< 5 years: 99%
Flycer’s explanation for better understanding:
Larry Wasserman predicts that there is a 10% chance of artificial intelligence (AI) being as good as humans in science, math, engineering, and programming by 2025, a 50% chance by 2040, and a 90% chance by 2070. He also predicts that there is a 99% chance of AI self-modifying to vastly superhuman capabilities within five years, a 50% chance within days, and a 10% chance within hours. These predictions assume beneficial political and economic development and no global catastrophe.
The future of humanity with AGI / HLMI / transformative AI:
What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? […]

Larry Wasserman:
I would say low, perhaps 1%. However, since I think the line between humans and AI will be blurry, the question may not be well-defined.
Flycer’s Secondary Explanation:
Larry Wasserman believes that the probability of human extinction within 100 years due to self-modifying AI is low, around 1%. However, he notes that the line between humans and AI may become blurry, making the question less well-defined.
About:
Larry Wasserman is a statistician and machine learning researcher who has been involved in several projects related to the development of statistical models and machine learning algorithms. Wasserman is known for his work on non-parametric methods for statistical inference and the study of causal inference in machine learning. He has also been involved in several projects related to the use of machine learning in areas such as speech recognition and natural language processing.
Source: http://kruel.co/2012/11/04/qa-with-larry-wasserman-on-risks-from-ai/
Keywords: artificial intelligence, self-modification, human extinction