John E. Laird (2012)

This AI Prediction was made by John E. Laird in 2012.

 

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (artificial general intelligence, AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence, AI that matches or exceeds human performance across most tasks), and transformative AI (AI that could profoundly change society and the world).

Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence? […]

John E. Laird: […]

10%: 20 years
50%: 50 years
90%: 80 years

 

Opinion about the Intelligence Explosion from John E. Laird:

What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours / days / < 5 years?

John E. Laird: […] 0% – There is no reason to believe that an AGI could do this. First, why would an AGI be able to learn faster than humans?

 

Flycer’s explanation for better understanding:

John E. Laird estimates a 10% chance of roughly human-level machine intelligence within 20 years, a 50% chance within 50 years, and a 90% chance within 80 years of 2012, assuming no global catastrophe halts progress. He assigns essentially no probability to a human-level AGI rapidly self-modifying its way to massive superhuman intelligence.

 

The future of humanity with AGI / HLMI / transformative AI:

What probability do you assign to the possibility of human extinction as a result of badly done AI? […]

John E. Laird: 0% – I don’t see the development of AGI leading to this. There are other dangers of AI, where people (or governments) use the power that can be gained from machine intelligence to their own ends (financially, politically, …) that could end very badly (destruction of communication networks – bring down governments and economies), but the doomsday scenarios of Terminator and the Matrix just don’t make sense for many reasons.

 

Flycer’s Secondary Explanation:

John E. Laird believes that the probability of human extinction due to badly done AI is 0%. He does not see the development of AGI leading to this outcome. However, he acknowledges that there are other dangers of AI, such as people or governments using the power gained from machine intelligence for their own ends, which could have negative consequences.

 

About:

John E. Laird is a computer scientist and professor at the University of Michigan, where he has been involved in several AI-related projects and initiatives. Laird is best known for his work on Soar, a cognitive architecture used to build intelligent systems that can reason, plan, and learn. He has also worked on several projects related to robotics, including the development of a robot that can play soccer.

 

Source: http://kruel.co/2012/08/15/qa-with-experts-on-risks-from-ai-5/

 

Keywords: machine intelligence, self-modify, human extinction