Nick Bostrom, Katja Grace, Toby Ord, Nick Beckstead, Anders Sandberg, Stuart Armstrong, Jaan Tallinn, and Owen Cotton-Barratt in 2014

This AI Prediction was made by Nick Bostrom, Katja Grace, Toby Ord, Nick Beckstead, Anders Sandberg, Stuart Armstrong, Jaan Tallinn, and Owen Cotton-Barratt in 2014.

 

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and Transformative AI (AI that could significantly impact society and the world).

Your credence that AGI is developed by 2050 (on Earth):
Bostrom: ~25%
Katja: ~35%
Toby: ~17%
Anders: ~25%
Stuart: 20%
Jaan: ~25%
Owen: ~25%
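For readers who want the group's overall view, here is a minimal Python sketch that aggregates the credences listed above. The figures are transcribed from this page, and treating the "~" values as exact point estimates is an assumption:

```python
import statistics

# Credences that AGI is developed by 2050, transcribed from the list above.
# "~" values are treated as exact point estimates (an assumption).
credences = {
    "Bostrom": 0.25, "Katja": 0.35, "Toby": 0.17, "Anders": 0.25,
    "Stuart": 0.20, "Jaan": 0.25, "Owen": 0.25,
}

print(f"median: {statistics.median(credences.values()):.0%}")  # 25%
print(f"mean:   {statistics.mean(credences.values()):.1%}")    # 24.6%
```

The group's central estimate clusters around 25%, with Katja (~35%) and Toby (~17%) marking the spread.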

 

 

Opinion about the Intelligence Explosion from Nick Bostrom, Katja Grace, Toby Ord, Nick Beckstead, Anders Sandberg, Stuart Armstrong, Jaan Tallinn, and Owen Cotton-Barratt:

Not provided

 

Flycer’s explanation for better understanding:

Not provided

The future of humanity with AGI / HLMI / transformative AI:

Humanity goes extinct in the next 100 years (replacing us with something better, e.g. WBE, doesn’t count):
Bostrom: 18%
Katja: ~5%
Toby: 10%
Beckstead: ~7%
Anders: ~10%
Stuart: ~22% [for AI]
Jaan: ~30%

 

Flycer’s Secondary Explanation:

Several of the participants estimated the probability of human extinction within the next 100 years. The estimates range from roughly 5% to 30%, with Jaan giving the highest figure at ~30%; Stuart's ~22% was given specifically for AI-caused extinction. Scenarios in which humanity is replaced by something better, such as whole brain emulation (WBE), did not count as extinction for these estimates.
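The 5%-to-30% range quoted above can be checked directly against the listed figures. A minimal Python sketch, with values transcribed from the list and the "~" figures again treated as point estimates:

```python
import statistics

# Extinction-within-100-years estimates, transcribed from the list above.
# Stuart's figure was given specifically for AI-caused extinction.
estimates = {
    "Bostrom": 0.18, "Katja": 0.05, "Toby": 0.10, "Beckstead": 0.07,
    "Anders": 0.10, "Stuart": 0.22, "Jaan": 0.30,
}

print(f"range:  {min(estimates.values()):.0%} to {max(estimates.values()):.0%}")  # 5% to 30%
print(f"median: {statistics.median(estimates.values()):.0%}")                     # 10%
```

The median estimate of 10% sits well below the extremes, so the 5%-to-30% range overstates how far most of the group was from a roughly one-in-ten figure.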

 

 

About:

Nick Bostrom is a Swedish philosopher and futurist, known for his work on existential risks, superintelligence, and the simulation hypothesis. He is the founding director of the Future of Humanity Institute at the University of Oxford, where he also holds a professorship in the Faculty of Philosophy. Bostrom has published numerous articles and books on the future of humanity, including “Superintelligence: Paths, Dangers, Strategies” and “Global Catastrophic Risks”.

Katja Grace is a researcher who focuses on forecasting and evaluating the long-term impacts of emerging technologies. She is a co-founder of the AI Impacts project, which aims to track the progress of artificial intelligence and its potential impact on society, and has published several papers on topics such as AI timelines, AI safety, and the ethics of AI.

Toby Ord is a philosopher and researcher at the Future of Humanity Institute, where he works on existential risks, global priorities research, and effective altruism. He is a co-founder of the effective altruism movement and the founder of the charity Giving What We Can, and has published several papers and books on existential risk, moral uncertainty, and the ethics of global priorities.

Nick Beckstead is a researcher at the Open Philanthropy Project, where he focuses on global catastrophic risks and effective altruism. He previously worked at the Future of Humanity Institute, where he co-authored several papers on existential risk, moral uncertainty, and the long-term future of humanity.

Anders Sandberg is a Swedish transhumanist and researcher at the Future of Humanity Institute, where he works on cognitive enhancement, existential risks, and the ethics of emerging technologies. He is also a co-founder of the Oxford Transhumanism and Emerging Technologies group and a contributor to “The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future”.

Stuart Armstrong is a researcher at the Future of Humanity Institute, where he works on decision theory, AI safety, and the long-term future of humanity. He is the author of several papers on the control problem, the value alignment problem, and the ethics of AI.

Jaan Tallinn is an Estonian computer programmer and entrepreneur, known for his work on Skype and Kazaa and for co-founding the Centre for the Study of Existential Risk. He is also a co-founder of the Future of Life Institute, which developed the Asilomar AI Principles, a set of guidelines for the safe development of artificial intelligence.

Owen Cotton-Barratt is a researcher at the Future of Humanity Institute, where he works on global priorities research, existential risks, and the ethics of emerging technologies. He is also a co-founder of the Global Priorities Project and the author of several papers on moral uncertainty, the long-term future of humanity, and the ethics of AI.

Source: https://www.flickr.com/photos/arenamontanus/14427926005/in/photolist-axC1R7-5LFwwU-7hQJqk-9bt9At-pY8ypH-nYWTJV-hS8HiV-kDqhCb-cxqxN1-cxqrZJ-pBqeDj-odfdF2-4DqCXj-f3rfff-mPsvky-6qqYwu-cSuQJu-c4jqZ5-6Jaj5R-9VYgo7-jzAnZC-gtTN7P-uZ3z1K-vHKd3U-qqsUqQ-7cUu4M/

Keywords: AGI, developed, 2050