Brandon Rohrer, Tim Finin, Pat Hayes in 2012

This AI Prediction was made by Brandon Rohrer, Tim Finin, Pat Hayes in 2012.

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (human-level machine intelligence, AI that matches or exceeds human performance across tasks), and transformative AI (AI that could significantly impact society and the world).

Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence? […]

Brandon Rohrer: 2032/2052/2072

Tim Finin: 20/100/200 years

Pat Hayes: If by ‘human-level’ you mean that the AI will be an accurate simulacrum of a human being, or perhaps a human personality (as is often envisioned in science fiction, e.g. HAL from “2001”), my answer would be: never.

Opinion about the Intelligence Explosion from Brandon Rohrer, Tim Finin, Pat Hayes:

What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years? […]

Brandon Rohrer: < 1%

Tim Finin: 0.0001/0.0001/0.01

Pat Hayes: Again, zero.

Flycer’s explanation for better understanding:

Experts were asked to predict the years by which there would be a 10%, 50%, and 90% chance of developing human-level machine intelligence, assuming no global catastrophe. Brandon Rohrer predicted 2032, 2052, and 2072 respectively. Tim Finin predicted 20, 100, and 200 years respectively, while Pat Hayes answered "never," at least if human-level means an accurate simulacrum of a human being.

The future of humanity with AGI / HLMI / transformative AI:

What probability do you assign to the possibility of human extinction as a result of badly done AI? […]

Brandon Rohrer: < 1%

Tim Finin: 0.001

Pat Hayes: Zero. The whole idea is ludicrous.

Flycer’s Secondary Explanation:

Experts were asked about the probability of human extinction due to poorly designed AI. Brandon Rohrer assigned less than 1%, Tim Finin assigned a probability of 0.001, and Pat Hayes assigned zero, calling the whole idea ludicrous.

About:

Brandon Rohrer: Brandon Rohrer is a data scientist and machine learning expert known for his work on the practical applications of AI in industry. He has worked as a data scientist at Microsoft, where he led a data science team for the Azure cloud platform. He is also the founder of iPythia, a consultancy that provides data science and machine learning services to companies. Rohrer is particularly interested in using AI to solve real-world problems, such as improving healthcare outcomes and reducing energy consumption. He is a frequent speaker at conferences and has published several articles on the practical applications of AI.

Tim Finin: Tim Finin is a computer scientist and professor at the University of Maryland, Baltimore County, where he is the director of the Maryland Cybersecurity Center. He is known for his work on artificial intelligence and natural language processing and has developed several AI systems that analyze and interpret natural language. Finin is also known for his work on cybersecurity and has developed tools and techniques for analyzing and detecting cyber threats. He has received numerous awards and honors for his research, including the ACM SIGART Autonomous Agents Research Award.

Pat Hayes: Pat Hayes is a philosopher and computer scientist known for his work on knowledge representation and reasoning. He has held faculty positions at several universities, including the University of Edinburgh, Stanford University, and Carnegie Mellon University. Hayes is particularly interested in the development of formal systems for representing and reasoning about knowledge and has contributed to several influential knowledge representation formalisms. He has also made significant contributions to the development of description logic, a family of formal languages for representing knowledge.

Source: https://www.lesswrong.com/posts/okmpRuKjhG9dvDh3Z/q-and-a-with-experts-on-risks-from-ai-1

Keywords: machine intelligence, human-level, self-modify