This AI Prediction was made by William Uther in 2012.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and Transformative AI (AI that could significantly impact society and the world).
n/a
Opinion about the Intelligence Explosion from William Uther:
In general, if the system has ‘human level’ AGI, then surely it will behave the same way as a human. In which case none of your scenarios are likely – I’ve had an internet connection for years and I’m not super-human yet.
Flycer’s explanation for better understanding:
The author argues that a system with human-level AGI would behave like a human, so none of the proposed intelligence-explosion scenarios are likely. As evidence, he points out that he has had an internet connection for years and has not become super-human.
The future of humanity with AGI / HLMI / transformative AI:
I do not believe that mankind will build AI systems that will systematically seek out and deliberately destroy all humans (e.g. ‘Skynet’), and I further believe that if someone started a system like this it would be destroyed by everyone else quite quickly. It isn’t hard to build in an ‘off’ switch.
Flycer’s Secondary Explanation:
The author does not think that humans will create AI systems with the intention of destroying all humans. He also believes that if someone did create such a system, it would be quickly destroyed by others. Additionally, he notes that it is easy to include an “off” switch in AI systems.
About:
William Uther is a data scientist and software engineer who has worked on a variety of projects in the field of artificial intelligence, including machine learning, natural language processing, and computer vision. Uther is particularly interested in the use of machine learning techniques to analyze and interpret data from complex systems, such as the human brain. He has also been involved in the development of several AI startups, including the healthcare AI company Ambiata.
Source: https://www.lesswrong.com/posts/Jv9kyH5WvqiXifsWJ/q-and-a-with-experts-on-risks-from-ai-3
Keywords: AGI, human, AI systems