This AI Prediction was made by Michael G. Dyer, John Tromp, Kevin Korb, Peter Gacs, Eray Ozkural, Laurent Orseau, Richard Loosemore, Monica Anderson in 2012.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and Transformative AI (AI that could significantly impact society and the world).
Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans at science, mathematics, engineering and programming?
Kevin Korb: 2050/2200/2500 […]
John Tromp: […] I will not even attempt projections beyond my lifetime (let’s say beyond 40 years).
Michael G. Dyer: See Ray Kurzweil’s book: The Singularity Is Near. As I recall, he thinks it will occur before mid-century. I think he is off by at least an additional 50 years (but I think we’ll have as many personal robots as cars by 2100.) […]
Peter Gacs: I cannot calibrate my answer as exactly as the percentages require, so I will just concentrate on the 90%. […] I am very cautious with numbers, and will say that at least 80 more years are needed before jokes about the stupidity of machines will become outdated.
Eray Ozkural: 2025/2030/2045. […]
Laurent Orseau: 10%: 2017, 50%: 2032, 90%: 2100
Richard Loosemore: 2015 – 2020 – 2025
Monica Anderson: 10%: 2020, 50%: 2026, 90%: 2034
Opinion about the Intelligence Explosion from Michael G. Dyer, John Tromp, Kevin Korb, Peter Gacs, Eray Ozkural, Laurent Orseau, Richard Loosemore, Monica Anderson:
What probability do you assign to the possibility of an AI with initially (professional) human-level competence at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?
Kevin Korb: If through nanorecording: approx 0%. Otherwise, the speed/acceleration at which AGIs improve themselves is hard to guess at.
John Tromp: I expect such modification will require plenty of real-life interaction. Hours: 10^-9; days: 10^-6; < 5 years: 10^-1.
Peter Gacs: This question presupposes a particular sci-fi scenario that I do not believe in.
Eray Ozkural: In 5 years, without doing anything, it would already be faster than a human simply by running on a faster computer. If Moore's law continued by then, it would be 20-30 times faster than a human. But if you mean by "vastly" a difference of a thousand times faster, I give it a probability of only 10%, because there might be other kinds of bottlenecks involved (mostly physical). […]
Laurent Orseau: […] My guess is that we will make relatively slow progress. This progress can get faster with time, but I don't expect any sudden explosion. […]
Richard Loosemore: Depending on the circumstances (which means, this will not be possible if the AI is built using dumb techniques) the answer is: near certainty.
Monica Anderson: 0.00%
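To put Ozkural's hardware figure in perspective: under a Moore's-law-style assumption that available compute doubles at a fixed interval, a five-year horizon yields roughly a 10x speedup for an 18-month doubling time and about 32x for a 12-month doubling time, which brackets his "20-30 times faster" estimate. The sketch below is only an illustration of that arithmetic; the doubling times are assumed values, not figures from the survey.

```python
# Back-of-the-envelope hardware speedup after a fixed horizon, assuming a
# Moore's-law-style doubling of compute every `doubling_time` years.
# Illustrative only: the doubling times below are assumptions, not survey data.

def speedup(years: float, doubling_time: float) -> float:
    """Speedup factor if available compute doubles every `doubling_time` years."""
    return 2.0 ** (years / doubling_time)

if __name__ == "__main__":
    horizon = 5.0  # the "< 5 years" horizon from the question above
    for doubling_time in (1.0, 1.5, 2.0):
        print(f"doubling every {doubling_time:.1f} y -> "
              f"{speedup(horizon, doubling_time):4.1f}x after {horizon:.0f} years")
    # Prints roughly 32.0x, 10.1x and 5.7x -- Ozkural's 20-30x figure sits near
    # the optimistic (12-month doubling) end of this range.
```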
Flycer’s explanation for better understanding:
Experts were asked by what year they would assign a 10%/50%/90% chance to the development of artificial intelligence that is roughly as good as humans at science, mathematics, engineering and programming. Year estimates ranged from 2015 at the 10% level to 2500 at the 90% level, with 50% estimates spanning the 2020s and 2030s for the most optimistic respondents and 2100 or well beyond for the most cautious. When asked about the probability of such an AI self-modifying its way to vastly superhuman capabilities within hours, days, or five years, answers ranged from effectively 0% to near certainty, with most respondents skeptical of a rapid takeoff.
The future of humanity with AGI / HLMI / transformative AI:
What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? […]
Kevin Korb: […] My generic answer is that we have every prospect of building an AI that behaves reasonably vis-a-vis humans, should we be able to build one at all. We should, of course, take up those prospects and make sure we do a good job rather than a bad one.
John Tromp: The ability of humans to speed up their own extinction will, I expect, not be matched any time soon by machine, again not in my lifetime.
Michael G. Dyer: Loss of human dominance is a foregone conclusion (100% for loss of dominance). […] As to extinction, we will only not go extinct if our robot masters decide to keep some of us around. […]
Peter Gacs: I give it a probability near 1%. Humans may become irrelevant in the sense of losing their role of being at the forefront of the progress of “self-knowledge of the universe” (whatever this means). […] Of course, species do die out daily even without our intent to extinguish them, but I assume that at least some humans would find ways to survive for some more centuries to come.
Eray Ozkural: […] Therefore, it’s a gamble at best, and even if we made a life-loving, information-loving, selfless, autonomous AI as I suggested, it might still do a lot of things that many people would disagree with. And although such an AI might not extinguish our species, it might decide, for instance, that it would be best to scan and archive our species for using later. […]
Laurent Orseau: It depends if we consider that we will simply leave safety issues aside before creating an AGI, thinking that all will go well, or if we take into account that we will actually do some research on that. […] So I think the risks of human extinction will be pretty low, as long as we take them into account seriously.
Richard Loosemore: The question is loaded, and I reject the premises. It assumes that someone can build an AI that is both generally intelligent (enough to be able to improve itself) whilst also having a design whose motivation is impossible to prove. That is a false assumption. People who try to build AI systems with the kind of design whose motivation is unstable will actually not succeed in building anything that has enough general intelligence to become a danger.
Monica Anderson: 0.00%. All intelligences must be fallible in order to deal with a complex and illogical world (with only incomplete information available) on a best effort basis.
Flycer’s Secondary Explanation:
Experts were asked to assign a probability to human extinction within 100 years as a result of AI capable of self-modification. Responses ranged from 0.00% through roughly 1% up to treating loss of human dominance as a foregone conclusion, with survival then depending on whether "our robot masters decide to keep some of us around." Some experts argued that we have every prospect of building an AI that behaves reasonably vis-a-vis humans, and the risk of extinction was generally deemed fairly low as long as safety issues are taken into account seriously.
About:
Michael G. Dyer:
Michael G. Dyer is a computer scientist and professor at the University of California, Los Angeles. He is known for his work on AI planning, which is the development of systems that can reason about actions and change. Dyer has developed several AI planning systems, including the HSP planner and the JSHOP2 planner. He has also worked on autonomous robots and has been involved in several robotics projects, including the development of the Soarbot robot.

John Tromp:
John Tromp is a computer scientist and game developer who is known for his work on game theory and artificial intelligence. He is the creator of the game of Clobber and has developed several AI agents for playing the game. Tromp has also been involved in several AI-related projects, including algorithms for solving combinatorial games and techniques for compressing data.

Kevin Korb:
Kevin Korb is a computer scientist and professor at Monash University in Australia. He is known for his work on Bayesian networks, which are a type of probabilistic graphical model used in machine learning and AI. Korb has developed several Bayesian network tools and has been involved in several AI projects, including Bayesian network approaches to modeling argumentation and to modeling causal relationships.

Peter Gacs:
Peter Gacs is a computer scientist and professor at Boston University. He is known for his work on algorithmic information theory, which is the study of the mathematical properties of algorithms. Gacs has made significant contributions to algorithmic randomness theory and has also worked on quantum computing algorithms. He has been recognized for his contributions to the field with several awards and honors, including the Knuth Prize and the IEEE Information Theory Society Shannon Award.

Eray Ozkural:
Eray Ozkural is a computer scientist and entrepreneur who has been involved in several AI-related projects, including the development of natural language processing tools and autonomous agents for gaming. Ozkural is particularly interested in the development of AGI systems and has proposed several approaches to achieving this, including recursive self-improvement algorithms. He has also been involved in several startups focused on AI and has co-founded the AGI Society, a community of researchers and enthusiasts interested in AGI.

Laurent Orseau:
Laurent Orseau is a computer scientist and research scientist at DeepMind, where he is known for his work on safe and beneficial AI. He has been involved in several AI-related projects, including the development of AlphaGo and of reinforcement learning algorithms. Orseau is particularly interested in AI systems that can reason about their own goals and behavior, and has proposed several approaches to ensuring that such systems act in safe and beneficial ways. He has also contributed several papers and talks on AI safety and ethics.

Richard Loosemore:
Richard Loosemore is a computer scientist and researcher who has been involved in several AI-related projects, including the development of autonomous agents and of safe and beneficial AI. Loosemore is particularly interested in AI systems that can reason about their own goals and behavior, and has proposed several approaches to ensuring that such systems act in safe and ethical ways. He has also contributed several papers and talks on AI safety and ethics.

Monica Anderson:
Monica Anderson is a computer scientist and professor at the University of Alabama at Birmingham. She is known for her work on AI planning, which is the development of systems that can reason about actions and change. Anderson has developed several AI planning systems, including the SHOP2 planner and the GAT system. She has also been involved in several projects related to automated reasoning and knowledge representation, including the development of the OntoClean methodology and the representation of legal knowledge in AI systems.
Source: https://www.lesswrong.com/posts/7nPtpmBwoiQWDKvKz/q-and-a-with-experts-on-risks-from-ai-4
Keywords: artificial intelligence, self-modification, human extinction