Victoria Krakovna in 2015

This AI Prediction was made by Victoria Krakovna in 2015.

 

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence: AI that matches or exceeds human performance on most tasks), and transformative AI (AI that would profoundly change society and the world).

Not provided

 

 

Opinion about the Intelligence Explosion from Victoria Krakovna:

“Unlike I. J. Good, I do not consider this scenario inevitable (though relatively likely).”

 

Flycer’s explanation for better understanding:

The article argues that an “intelligence explosion” is not necessary for AI to pose a significant risk to humanity. Even AI of relatively modest capability could cause serious harm if it is not properly aligned with human values and goals. The article concludes by emphasizing the importance of developing AI in a way that is safe and beneficial for humanity.

 

 


About:

Victoria Krakovna is a renowned researcher and advocate in the field of artificial intelligence (AI) safety. She holds a Bachelor’s degree in Computer Science from Harvard University and a Master’s degree in Computer Science from the Massachusetts Institute of Technology (MIT). Krakovna has made significant contributions to AI safety research, particularly in the areas of interpretability and alignment, publishing numerous papers on these topics and speaking at conferences and events around the world.

Beyond her research, Krakovna is a co-founder of the Future of Life Institute, an AI safety organization. She has worked with policymakers, industry leaders, and other stakeholders to promote the safe development of AI and to raise awareness of its potential risks and benefits. She has received several awards and recognitions for her work, including being named to Forbes’ 30 Under 30 in Science in 2016, and remains a leading voice in the AI safety community, dedicated to ensuring that AI is developed in a way that benefits humanity.


Source: https://vkrakovna.wordpress.com/2015/11/29/ai-risk-without-an-intelligence-explosion/

