This AI Prediction was made by Alexander Kruel in 2011.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (artificial general intelligence, an AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence, an AI that matches human performance across essentially all cognitive tasks), and transformative AI (an AI whose impact on society and the world would be profound).
Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? […]
2030/2060/2100
Opinion about the Intelligence Explosion from Alexander Kruel:
What probability do you assign to the possibility of a human level AGI to self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
0.01%/0.1%/20%
Flycer’s explanation for better understanding:
Assuming no global catastrophe halts progress, Kruel estimates a 10% chance of the development of human-level machine intelligence by 2030, a 50% chance by 2060, and a 90% chance by 2100. He assigns a 0.01% probability to a human-level AGI self-modifying its way up to massive superhuman intelligence within hours, 0.1% within days, and 20% within five years.
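These three forecasts can be read as points on a cumulative distribution over arrival years. The sketch below is not from the source: the linear interpolation between the anchor years is an assumption made purely for illustration, as is the helper name p_hlmi_by.

```python
# A minimal sketch (assumption, not from the source): treat Kruel's
# 10%/50%/90% forecasts as three points on a cumulative distribution
# F(year) = P(HLMI arrives by that year), and interpolate linearly
# between the anchor years to read off intermediate years.
import numpy as np

years = np.array([2030, 2060, 2100])      # survey anchor years
cum_prob = np.array([0.10, 0.50, 0.90])   # P(HLMI by that year)

def p_hlmi_by(year: float) -> float:
    """Interpolated probability that HLMI arrives by `year` (hypothetical helper)."""
    # np.interp clamps to 0.10 before 2030 and 0.90 after 2100.
    return float(np.interp(year, years, cum_prob))

if __name__ == "__main__":
    for y in (2040, 2050, 2080):
        print(f"P(HLMI by {y}) ≈ {p_hlmi_by(y):.2f}")
```

Under this (assumed) linear scheme, the sketch reads off roughly a 23% chance by 2040 and a 37% chance by 2050; the survey itself only pins down the three anchor years.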
The future of humanity with AGI / HLMI / transformative AI:
What probability do you assign to the possibility of a negative/extremely negative Singularity as a result of badly done AI?
10%/0.5%
[…] I assign a lower probability to an extremely negative outcome because I believe it to be more likely that we will just die rather than survive and suffer. And in the case that someone only gets their AI partly right, I don’t think it will be extremely negative. All in all, an extremely negative outcome seems rather unlikely. But negative (we’re all dead) is already pretty negative.
Flycer’s Secondary Explanation:
Kruel assigns a low probability (0.5%) to an extremely negative outcome because he considers it more likely that humanity would simply die than survive and suffer, and if someone gets their AI only partly right, he does not expect the result to be extremely negative. A merely negative outcome, by which he means human extinction, he rates at 10%.
About:
Alexander Kruel, known on LessWrong as XiXiDu, is a blogger and writer on artificial intelligence, existential risk, and rationality. In 2011 he conducted a series of interviews and surveys on LessWrong asking researchers about the timing and risks of human-level machine intelligence; the prediction above is drawn from that series. As his answers here suggest, he has been skeptical of confident claims about a fast intelligence explosion, assigning very low probability to a human-level AGI self-modifying its way to superintelligence within hours or days.
Source: https://www.lesswrong.com/posts/Qp3she2rck4mdcqPy/survey-risks-from-ai
Keywords: Human-level Machine Intelligence, AGI, Singularity.