This AI Prediction was made by Daniel Burfoot in 2011.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (artificial general intelligence, AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence, AI that matches or exceeds human performance across tasks), and transformative AI (AI that could significantly impact society and the world).
10% – 2050
50% – 2150
80% – 2300
Opinion about the Intelligence Explosion from Daniel Burfoot:
P(superhuman intelligence within hours | human-level AI on a supercomputer with an Internet connection) = 0.01%
P(superhuman intelligence within days | same conditions) = 0.1%
P(superhuman intelligence within years | same conditions) = 3%
Flycer’s explanation for better understanding:
Burfoot assigns a 10% probability to human-level AI arriving by 2050, 50% by 2150, and 80% by 2300. Separately, conditional on a human-level AI running on a supercomputer with an Internet connection, he gives a 0.01% chance that it reaches superhuman intelligence within hours, a 0.1% chance within days, and a 3% chance within years.
The future of humanity with AGI / HLMI / transformative AI:
I don’t understand the other question well enough to answer it meaningfully. I think it is highly unlikely that an uFAI will be actively malicious.
About:
Jürgen Schmidhuber is a computer scientist and entrepreneur known for his contributions to the fields of artificial intelligence and machine learning. He received his Ph.D. in Computer Science from the Technical University of Munich in 1991.

Schmidhuber has made significant contributions to the development of deep learning, a subfield of machine learning that involves training artificial neural networks with multiple layers. He has also co-developed several influential methods, including the Long Short-Term Memory (LSTM) architecture, which is widely used in natural language processing and speech recognition.

In terms of AI predictions, Schmidhuber has been an advocate for the development of artificial general intelligence (AGI), which refers to AI systems that can perform any intellectual task that a human can. He has argued that the development of AGI will revolutionize many fields, including science, engineering, and medicine. However, Schmidhuber has also emphasized the need for caution in AI development, arguing that researchers need to carefully consider the potential societal impacts of their work.
Source: https://www.lesswrong.com/posts/BEtQALqgXmL9d9SfE/q-and-a-with-juergen-schmidhuber-on-risks-from-ai