Abram Demski on 2012

This AI Prediction was made by Abram Demski in 2012.

 

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and transformative AI (AI that could significantly impact society and the world).

Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence? […]

Abram Demski: 10%: 5 years (2017). 50%: 15 years (2027). 90%: 50 years (2062).

 

 

Opinion about the Intelligence Explosion from Abram Demski:

What probability do you assign to the possibility of a human level AGI to self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years? […]

Abram Demski: Very near zero, very near zero, and very near zero. My feeling is that intelligence is a combination of processing power and knowledge.

 

Flycer’s explanation for better understanding:

Abram Demski predicts a 10% chance of human-level machine intelligence by 2017, a 50% chance by 2027, and a 90% chance by 2062. However, he assigns a probability very near zero to a human-level AGI self-modifying its way to massive superhuman intelligence within hours, days, or even five years, because he sees intelligence as a combination of processing power and knowledge.
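
To make the three point estimates easier to work with, here is a minimal sketch (not part of the source) that interpolates them with a lognormal distribution over "years until human-level machine intelligence" counted from 2012; the choice of distribution and the simple two-point fit are illustrative assumptions only.

```python
# Illustrative sketch: fit a lognormal distribution to Demski's three quantiles
# (10% by 2017, 50% by 2027, 90% by 2062, i.e. 5, 15, and 50 years from 2012).
# The lognormal form and the averaged-spread fit for sigma are assumptions.
from math import log
from statistics import NormalDist

BASE_YEAR = 2012
q10, q50, q90 = 5, 15, 50  # years until AGI at the 10%/50%/90% quantiles

mu = log(q50)                              # the median of a lognormal is exp(mu)
z90 = NormalDist().inv_cdf(0.90)           # ~1.2816, standard-normal 90th percentile
sigma = (log(q90) - log(q10)) / (2 * z90)  # average spread in log-years

def p_agi_by(year: int) -> float:
    """Implied cumulative probability of human-level machine intelligence by `year`."""
    years_out = year - BASE_YEAR
    if years_out <= 0:
        return 0.0
    return NormalDist(mu, sigma).cdf(log(years_out))

for y in (2017, 2027, 2040, 2062):
    print(y, round(p_agi_by(y), 2))
# Roughly 0.11, 0.50, 0.76, 0.91 -- the fit hits the median exactly and only
# approximates the stated 10%/90% points, since the answers are not exactly lognormal.
```

Any such curve is just one way of joining three point estimates; the numbers in between are interpolation, not anything Demski stated.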

 

 

The future of humanity with AGI / HLMI / transformative AI:

What probability do you assign to the possibility of human extinction as a result of badly done AI?

Abram Demski: So, keeping in mind that it’s very rough, let’s say .001. I note that this is significantly lower than estimates I’ve made before, despite trying harder at that time to refute the hypothesis.

 

Flycer’s Secondary Explanation:

Abram Demski assigns a rough probability of .001 to human extinction resulting from badly done AI. He notes that this estimate is significantly lower than estimates he made earlier, even though at that earlier time he was trying harder to refute the hypothesis.

 

 

About:

Abram Demski is an AI researcher and philosopher who has been involved in several projects on decision theory and the development of advanced AI technologies. Demski is known for his work on reflective decision-making systems and other advanced AI architectures. He has also studied the ethical and social implications of advanced AI technologies and has written extensively on the subject.

 

 

Source: https://www.lesswrong.com/posts/kToFGkGj5u5eYLJPF/q-and-a-with-abram-demski-on-risks-from-ai

 

 

Keywords: machine intelligence, self-modify, human extinction