Katja Grace in 2009

This AI Prediction was made by Katja Grace in 2009.


Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and Transformative AI (AI that could significantly impact society and the world).



Opinion about the Intelligence Explosion from Katja Grace:

A jump seems unlikely.


Flycer’s explanation for better understanding:



The future of humanity with AGI / HLMI / transformative AI:

If artificial intelligence is reached more incrementally, even if it ends up being a powerful influence in society, there is little reason to think it will have particularly bad values.


Flycer’s Secondary Explanation:

Artificial intelligence has the potential to be a powerful influence in society, but it is likely to be reached incrementally. If so, there is little reason to think it will have particularly bad values. AI can benefit society if it is developed responsibly.




Katja Grace is a researcher and writer who has written extensively on the future of artificial intelligence and its potential impact on society. She is a research fellow at the Future of Humanity Institute at the University of Oxford and has co-authored several influential papers on AI safety and governance. She is also the author of the blog “AI Impacts,” which provides analysis of and insights into the development of AI technology.






Source: https://meteuphoric.com/2009/10/18/why-will-we-be-extra-wrong-about-ai-values/



Keywords: Artificial Intelligence, Incrementally, Society