Pei Wang, J. Storrs Hall, Paul Cohen in 2012

This AI Prediction was made by Pei Wang, J. Storrs Hall, Paul Cohen in 2012.

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (human-level machine intelligence: AI that matches or exceeds human performance across most tasks), and transformative AI (AI that could significantly impact society and the world).

Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans at science, mathematics, engineering and programming?

Pei Wang: My estimations are, very roughly, 2020/2030/2050, respectively. […]

J. Storrs Hall: 2020 / 2030 / 2040 […]

Paul Cohen: […] If you are asking when machines will function as complete, autonomous scientists (or anything else) I’d say there’s little reason to think that that’s what we want.

Opinion about the Intelligence Explosion from Pei Wang, J. Storrs Hall, Paul Cohen:

What probability do you assign to the possibility of an AI with initially (professional) human-level competence at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Pei Wang: […] It is possible for AI systems to become more and more capable, but I don't think they will become completely uncontrollable or incomprehensible.

J. Storrs Hall: This depends entirely on when it starts, i.e. what is the current marginal cost of computation along the Moore's Law curve. […]

Paul Cohen: The first step is the hardest: "human level competence at general reasoning" is our greatest challenge. I am quite sure that anything that could, say, read and understand what it reads would in a matter of days, weeks or months become vastly more generative than humans.

Flycer’s explanation for better understanding:

Experts in artificial intelligence were asked to estimate the years by which there would be a 10%, 50%, and 90% chance of AI being as good as humans at science, mathematics, engineering, and programming. Pei Wang estimated 2020, 2030, and 2050 respectively, while J. Storrs Hall estimated 2020, 2030, and 2040. Paul Cohen declined to give dates, questioning whether machines that function as complete, autonomous scientists are even what we want.

The future of humanity with AGI / HLMI / transformative AI:

What probability do you assign to the possibility of human extinction as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created)

Pei Wang: […] AI systems, like all major scientific and technical results, can lead to human extinction, but it is not the reason to stop or pause this research. […]

J. Storrs Hall: This is unlikely but not inconceivable. If it happens, however, it will be because the AI was part of a doomsday device probably built by some military for “mutual assured destruction”, and some other military tried to call their bluff. […] To sum up, AIs can and should be vetted with standard and well-understood quality assurance and testing techniques, but defining “friendliness to the human race”, much less proving it, is a pipe dream.

Paul Cohen: From where I sit today, near zero. Besides, the danger is likely to be mostly on the human side: Irrespective of what machines can or cannot do, we will continue to be lazy, self-righteous, jingoistic, squanderers of our tiny little planet.

Flycer’s Secondary Explanation:

Experts have varying opinions on the probability of human extinction as a result of AI capable of self-modification. Pei Wang believes that while it is possible, it is not a reason to stop AI research. J. Storrs Hall thinks it is unlikely but not inconceivable, and if it were to happen, it would likely be because the AI was part of a military doomsday device. Paul Cohen puts the probability near zero and believes the danger is more likely to come from human behavior than from machines.

About:

Pei Wang: Pei Wang is a computer scientist and professor at Temple University, known for his work on artificial general intelligence (AGI), the development of machines that can perform a wide range of cognitive tasks. Wang is particularly interested in cognitive architectures, computational models of the mind, as a route to AGI. He is the developer of the Non-Axiomatic Reasoning System (NARS), a reasoning system designed to learn and reason with uncertain and incomplete information.

J. Storrs Hall: J. Storrs Hall, also known as Josh Hall, is a computer scientist, author, and inventor known for his work on nanotechnology, molecular manufacturing, and artificial intelligence. Hall has been a long-time advocate of safe and beneficial AI and has written extensively on the subject, arguing that it is possible to develop AI systems that are aligned with human values and goals. He is the author of several books on nanotechnology and machine intelligence, including “Nanofuture: What’s Next For Nanotechnology” and “Beyond AI: Creating the Conscience of the Machine”.

Paul Cohen: Paul Cohen is a computer scientist and professor at the University of Arizona, known for his work on machine learning, natural language processing, and knowledge representation. Cohen has been a vocal critic of hype surrounding AI, arguing that the field still has a long way to go before machines can match human intelligence, and he has advocated for AI systems that are transparent and accountable. He is the author of “Empirical Methods for Artificial Intelligence” and numerous papers on machine learning and the empirical evaluation of AI systems.

Source: https://www.lesswrong.com/posts/Jv9kyH5WvqiXifsWJ/q-and-a-with-experts-on-risks-from-ai-3

Keywords: artificial intelligence, human-level competence, self-modification