This AI Prediction was made by Steve Omohundro in 2011.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and transformative AI (AI that could significantly impact society and the world).
n/a
Opinion about the Intelligence Explosion from Steve Omohundro:
I believe either scenario is technologically possible. But I think slower development would be preferable.
Flycer’s explanation for better understanding:
Not provided
The future of humanity with AGI / HLMI / transformative AI:
I don’t think that rapidly bringing about AGI is the best initial goal. I would feel much better about it if we had a clear roadmap for how these systems will be safely integrated into society for the benefit of humanity. […] I believe the best approach will be to develop provably limited systems and to use those in designing more powerful ones that will have a beneficial impact.
Flycer’s Secondary Explanation:
Rapidly bringing about AGI is not the best initial goal. A clear roadmap for safely integrating these systems into society is needed. The best approach is to develop provably limited systems and use them to design more powerful ones with a beneficial impact.
About:
Steve Omohundro is a computer scientist and mathematician who has made significant contributions to the fields of artificial intelligence and robotics. He is the founder of the Artificial General Intelligence Research Institute (AGIRI) and currently serves as its president.
Omohundro's work on AI has focused on developing advanced algorithms for machine learning and intelligent decision-making. He has also explored the ethical and societal implications of advanced AI systems, particularly with respect to issues of safety and control.
Omohundro has published numerous articles and papers on topics such as machine learning, robotics, and the ethics of AI. He has also served on several advisory boards and working groups related to AI, including the Future of Life Institute's AI Safety Group and the Machine Intelligence Research Institute.
Source: https://selfawaresystems.com/2011/07/29/next-big-future-interview-steve-omohundro-and-the-future-of-superintelligence/
Keywords: scenario, AGI, beneficial impact