Jürgen Schmidhuber in 2011

This AI Prediction was made by Jürgen Schmidhuber in 2011.


Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (high-level machine intelligence that matches or exceeds human performance across most tasks), and transformative AI (AI that could profoundly change society and the world).




Opinion about the Intelligence Explosion from Jürgen Schmidhuber:

Question: What probability do you assign to the possibility of a human level AI, respectively sub-human level AI, to self-modify its way up to massive superhuman intelligence within a matter of hours or days?

Jürgen Schmidhuber: High for the next few decades.


Flycer’s explanation for better understanding:

Jürgen Schmidhuber assigns a high probability, within the next few decades, to a human-level or sub-human-level AI self-modifying its way up to massive superhuman intelligence within a matter of hours or days. He believes this could be achieved through advanced algorithms and deep learning, and that it would mark a major breakthrough in the field of artificial intelligence.



The future of humanity with AGI / HLMI / transformative AI:

Question: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

Jürgen Schmidhuber: From a paper of mine:

"All attempts at making sure there will be only provably friendly AIs seem doomed. Once somebody posts the recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. Which values are 'good'? The survivors will define this in hindsight, since only survivors promote their values."


Flycer’s Secondary Explanation:

Schmidhuber doubts that AI can be made provably friendly before artificial general intelligence is solved: in his view, all attempts to ensure there will be only provably friendly AIs seem doomed. Once self-improving AIs with pluggable utility functions become widely available, the laws of physics and the availability of physical resources will determine which utility functions, and thus which AIs, survive and dominate. Which values are "good" will be defined in hindsight by the survivors, since only survivors promote their values.




Jürgen Schmidhuber is a computer scientist and entrepreneur known for his contributions to artificial intelligence and machine learning. He received his Ph.D. in Computer Science from the Technical University of Munich in 1991.

Schmidhuber has made significant contributions to the development of deep learning, a subfield of machine learning that involves training artificial neural networks with multiple layers. He also co-developed several influential techniques, including the Long Short-Term Memory (LSTM) architecture, which is widely used in natural language processing and speech recognition.

In terms of AI predictions, Schmidhuber has been an advocate for the development of artificial general intelligence (AGI), meaning AI systems that can perform any intellectual task that a human can. He has argued that the development of AGI will revolutionize many fields, including science, engineering, and medicine. At the same time, he has emphasized the need for caution in AI development, arguing that researchers must carefully consider the potential societal impacts of their work.






Source: https://www.lesswrong.com/posts/BEtQALqgXmL9d9SfE/q-and-a-with-juergen-schmidhuber-on-risks-from-ai



Keywords: AI, Self-Modification, Utility Functions.