Nils John Nilsson, Peter J. Bentley, David Alan Plaisted, and Hector Levesque in 2012

This AI prediction was made by Nils John Nilsson, Peter J. Bentley, David Alan Plaisted, and Hector Levesque in 2012.

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (artificial general intelligence, AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence, AI that matches or exceeds human performance across most tasks), and transformative AI (AI that could significantly impact society and the world).

Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence? […]

Nils Nilsson: […] I’ll rephrase your question to be: When will AI be able to perform around 80% of these jobs as well or better than humans perform?
10% chance: 2030
50% chance: 2050
90% chance: 2100

David Plaisted: It seems that the development of human level intelligence is always later than people think it will be. I don’t have an idea how long this might take.

Hector Levesque: No idea.

Opinion about the Intelligence Explosion from Nils John Nilsson, Peter J. Bentley, David Alan Plaisted, and Hector Levesque:

What probability do you assign to the possibility of a human level AGI to self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?

Nils Nilsson: I'll assume that you mean sometime during this century, and that my "employment test" is the measure of superhuman intelligence.
hours: 5%
days: 50%
< 5 years: 90%

David Plaisted: This would require a lot in terms of robots being able to build hardware devices or modify their own hardware. I suppose they could also modify their software to do this, but right now it seems like a far out possibility.

Peter J. Bentley: It won't happen. Has nothing to do with internet connections or speeds. The question is rather silly.

Hector Levesque: Good. Once an automated human level intelligence is achieved, it ought to be able to learn what humans know more quickly.

Flycer’s explanation for better understanding:

Experts were asked to assign probabilities to the development of human-level machine intelligence. Nils Nilsson estimated a 10% chance by 2030, a 50% chance by 2050, and a 90% chance by 2100. The experts were also asked about the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence; Nilsson estimated a 5% chance of this happening within hours, a 50% chance within days, and a 90% chance within five years.

The future of humanity with AGI / HLMI / transformative AI:

What probability do you assign to the possibility of human extinction as a result of badly done AI? […]

Nils Nilsson: 0.01% probability during the current century. Beyond that, who knows?

David Plaisted: I think people will be so concerned about the misuse of intelligent computers that they will take safeguards to prevent such problems. To me it seems more likely that disaster will come on the human race from nuclear or biological weapons, or possibly some natural disaster.

Peter J. Bentley: If this were ever to happen, it is most likely to be because the AI was too stupid and we relied on it too much. It is *extremely* unlikely for any AI to become “self aware” and take over the world as they like to show in the movies. It’s more likely that your pot plant will take over the world.

Hector Levesque: Low. The probability of human extinction by other means (e.g. climate problems, microbiology, etc.) is sufficiently higher that if we were to survive all of them, surviving the result of AI work would be comparatively easy.

Flycer’s Secondary Explanation:

Experts in the field of AI assign a low probability to human extinction as a result of AI in the current century, with estimates ranging from Nilsson’s 0.01% to Bentley’s “extremely unlikely.” They see nuclear or biological weapons, natural disasters, and climate problems as greater threats to humanity, and they suggest that the risk of AI causing harm can be mitigated through safeguards and by not relying too heavily on AI.

About:

Nils John Nilsson: Nils John Nilsson is a computer scientist and professor emeritus at Stanford University, which he joined in 1985 after more than two decades of AI research at SRI International. He is known for his work on AI planning and decision-making, and co-developed several influential techniques and systems, including the A* search algorithm and the STRIPS planning system. Nilsson is also known for his work on robotics: at SRI he helped lead the Shakey project, which produced a mobile robot able to plan and carry out tasks in a real-world environment. He has received numerous awards and honors for his contributions to the field of AI, including the IJCAI Award for Research Excellence.

Peter J. Bentley: Peter J. Bentley is a computer scientist at University College London, where he directs research on digital biology. He is known for his work on evolutionary algorithms, computational techniques for solving complex optimization problems. Bentley is particularly interested in applying evolutionary algorithms to real-world problems, such as designing new drugs and optimizing energy consumption. He has also developed several creative AI systems, including a program that can compose music and a system that can generate new designs for clothing.

David Alan Plaisted: David Alan Plaisted is a computer scientist and professor at the University of North Carolina at Chapel Hill. He is known for his work on automated theorem proving, the use of computers to prove mathematical theorems. Plaisted is particularly interested in efficient methods for automated theorem proving; the Plaisted–Greenbaum clause-form translation is named in part for him. He has also made significant contributions to term rewriting and to proof procedures for first-order logic, a fundamental area of mathematical logic.

Hector Levesque: Hector Levesque is a computer scientist and professor at the University of Toronto. He is known for his work on knowledge representation and reasoning, and has developed several influential formalisms for representing and reasoning about knowledge. Levesque is particularly interested in formal systems for reasoning about action and change, an important area of AI planning; he has made significant contributions to the situation calculus and co-developed the GOLOG agent programming language built on it. Levesque has received numerous awards and honors for his research, including the IJCAI Computers and Thought Award, and he is a Fellow of the Royal Society of Canada and of the American Association for Artificial Intelligence (AAAI).

Source: https://www.lesswrong.com/posts/xoxZdRtpyRnXmhher/q-and-a-with-experts-on-risks-from-ai-2

Keywords: machine intelligence, human-level, probability