Mark Changizi in 2012

This AI Prediction was made by Mark Changizi in 2012.

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and transformative AI (AI that could significantly impact society and the world).

Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Mark Changizi: 100, 1000, 5000

Opinion about the Intelligence Explosion from Mark Changizi:

What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Mark Changizi: Zero, if it means self-modification to become better at the wide range of reasoning.

Flycer’s explanation for better understanding:

Mark Changizi predicts a 10% chance that artificial intelligence (AI) roughly as good as humans at science, mathematics, engineering, and programming will be developed within 100 years, a 50% chance within 1,000 years, and a 90% chance within 5,000 years. He assigns zero probability to such an AI self-modifying its way to vastly superhuman capabilities across the wide range of reasoning within hours, days, or less than five years. These predictions assume beneficial political and economic development and no global catastrophe halting progress.

The future of humanity with AGI / HLMI / transformative AI:

What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? […]

Mark Changizi: 1:10^6

Flycer’s Secondary Explanation:

Mark Changizi assigns a probability of 1 in 1 million (1:10^6) to the possibility of human extinction within 100 years due to self-modifying AI. The question concerns AI that is not provably non-dangerous, a proof that may not even be possible.

About:

Mark Changizi is a cognitive scientist and AI researcher who has worked on several projects related to the study of human perception and the development of intelligent systems. Changizi is known for his work on cognitive architectures and other AI technologies inspired by human cognition. He has also studied the evolution of intelligence and the relationship between intelligence and creativity.

Source: http://kruel.co/2012/12/19/qa-with-mark-changizi-on-risks-from-ai/

Keywords: artificial intelligence, self-modification, human extinction