Leo Pape, Donald Loveland in 2012

This AI Prediction was made by Leo Pape and Donald Loveland in 2012.

 

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and transformative AI (AI that could significantly impact society and the world).

Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence? […]

Leo Pape: For me, roughly human-level machine intelligence is an embodied machine. Given the current difficulties of making such machines I expect it will last at least several hundred years before human-level intelligence can be reached. […]

Donald Loveland: […] The Turing test will be passed in its simplest form perhaps in 20 years. Full functional replacements for humans will likely take over 100 years (50% likelihood). 200 years (90% likelihood).

 

 

Opinion about the Intelligence Explosion from Leo Pape, Donald Loveland:

Not provided

 

Flycer’s explanation for better understanding:

Experts were asked by what year they would assign a 10%, 50%, and 90% chance of the development of roughly human-level machine intelligence, assuming no global catastrophe halts progress. Leo Pape, who takes human-level machine intelligence to mean an embodied machine, expects it will take at least several hundred years because building such machines is currently so difficult. Donald Loveland predicts that the Turing test will be passed in its simplest form in about 20 years, but that full functional replacements for humans will likely take over 100 years (50% likelihood) and 200 years (90% likelihood).
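To make the 10%/50%/90% framing of the survey question concrete, here is a minimal sketch in Python (not part of the source) that treats Loveland's stated figures as rough percentile anchors of a cumulative forecast and linearly interpolates between them. The (20 years, 10%) anchor and the year offsets are assumptions for illustration only: Loveland's 20-year figure refers to a simplest-form Turing test, and Pape's answer gives only a lower bound of several hundred years, so it is not modeled here.

```python
# Illustrative sketch only: treat Donald Loveland's answer as rough percentile
# anchors of a cumulative forecast -- ~50% by 100 years and ~90% by 200 years
# after 2012 -- and linearly interpolate between them.
# The (20 years, 10%) anchor is a hypothetical placeholder, not a figure from
# the source.

def cumulative_probability(years_after_2012: float,
                           anchors=((20, 0.10), (100, 0.50), (200, 0.90))) -> float:
    """Piecewise-linear interpolation of cumulative probability between anchors."""
    if years_after_2012 <= anchors[0][0]:
        return anchors[0][1]
    for (x0, p0), (x1, p1) in zip(anchors, anchors[1:]):
        if years_after_2012 <= x1:
            return p0 + (p1 - p0) * (years_after_2012 - x0) / (x1 - x0)
    return anchors[-1][1]  # hold flat beyond the last anchor


if __name__ == "__main__":
    for horizon in (20, 50, 100, 150, 200, 300):
        print(f"Year {2012 + horizon}: ~{cumulative_probability(horizon):.0%}")
```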

 

 

The future of humanity with AGI / HLMI / transformative AI:

What probability do you assign to the possibility of human extinction as a result of badly done AI? […]

Leo Pape: Human beings are already using all sorts of artificial intelligence in their (war)machines, so there it is not impossible that our machines will be helpful in human extinction.

Donald Loveland: Ultimately 95% (and not just by bad AI, but just by generalized evolution). In other words, in this sense all AI is badly done AI for I think it is a natural sequence that AI leads to superior artificial minds that leads to eventual evolution, or replacement (depending on the speed of the transformation), of humans to artificial life.

 

Flycer’s Secondary Explanation:

The experts were asked what probability they assign to human extinction as a result of badly done AI. Leo Pape does not give a number but notes that AI is already used in war machines, so machines could plausibly contribute to human extinction. Donald Loveland ultimately assigns a probability of 95%, arguing that AI naturally leads to superior artificial minds and, eventually, to the evolution or replacement of humans by artificial life.

 

 

About:

Leo Pape is a computer scientist and AI researcher who has worked on a variety of topics in the field of AI, including automated reasoning, natural language processing, and knowledge representation. He is known for his work on the Cyc project, a large-scale knowledge base and reasoning system, and has also worked on intelligent tutoring systems, including a system for teaching medical diagnosis.

Donald Loveland is a computer scientist who has worked on a variety of topics in the field of AI, including automated reasoning, formal methods, and logic. Loveland is best known for his contributions to automated theorem proving, including the Davis–Putnam–Logemann–Loveland (DPLL) procedure for propositional satisfiability and the model elimination method, and he has worked extensively on theorem provers and automated reasoning systems. In addition to his work in AI, Loveland has also made contributions to mathematical logic and computability theory.

 

 


Source: https://www.lesswrong.com/posts/7nPtpmBwoiQWDKvKz/q-and-a-with-experts-on-risks-from-ai-4

 

 

Keywords: machine intelligence, human-level intelligence, human extinction