This AI Prediction was made by Alan Bundy in 2012.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (artificial general intelligence, AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence, AI that matches or exceeds human performance across most tasks), and transformative AI (AI that could profoundly change society and the world).
n/a
Opinion about the Intelligence Explosion from Alan Bundy:
Not provided
Flycer’s explanation for better understanding:
The future of humanity with AGI / HLMI / transformative AI:
Quite stupid machines entrusted to run a war with weapons of mass destruction could cause quite enough havoc without waiting for the mythical “human-level machine intelligence”. It will be human owners that endow their machines with goals and aims. The less intelligent the machines the more likely this is to end in tears. Given the indeterminacy of their owner’s intentions, it’s quite impossible to put probabilities on the questions you ask.
Flycer’s Secondary Explanation:
Even machines far short of human-level intelligence can cause significant damage if entrusted with weapons of mass destruction. A machine’s goals and aims are set by its human owners, and the less intelligent the machine, the more likely this is to go wrong. Because the owners’ intentions are indeterminate, Bundy considers it impossible to assign probabilities to such outcomes.
About:
Alan Bundy is a computer scientist and professor at the University of Edinburgh. He is known for his work on automated reasoning and automated theorem proving, the use of computers to prove mathematical theorems. Bundy leads the Mathematical Reasoning Group at Edinburgh, where he pioneered proof planning and techniques such as rippling, embodied in proof planners including Clam. He has also played a prominent role in building up informatics and AI research at the University of Edinburgh.
Source: https://www.lesswrong.com/posts/Jv9kyH5WvqiXifsWJ/q-and-a-with-experts-on-risks-from-ai-3
Keywords: machines, human-level machine intelligence, goals and aims.