This AI Prediction was made by Anthony Aguirre in 2015.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and Transformative AI (AI that could significantly impact society and the world).
I would assign a probability of ~ 1% for AGI arising in the next ten years, and ~ 10% over the next thirty years. (This essentially reflects a probability that my analysis is wrong, times a probability more representative of AI experts who—albeit with lots of variation—tend to assign somewhat higher numbers.)
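To make the arithmetic behind this decomposition concrete, here is an illustrative sketch only; the 0.2 and 0.5 below are assumed numbers for the sake of the example, not figures Aguirre gives:

\[
P(\text{AGI within 30 yr}) \;\approx\; \underbrace{P(\text{own analysis wrong})}_{\approx\,0.2} \;\times\; \underbrace{P_{\text{experts}}(\text{AGI within 30 yr})}_{\approx\,0.5} \;=\; 0.10
\]

On this reading, a headline estimate of roughly 10% is compatible with a personally skeptical analysis, so long as the higher estimates typical of AI experts are given some weight.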
Opinion about the Intelligence Explosion from Anthony Aguirre:
While I largely agree, I’d add the caveat that it’s quite possible that progress will ‘stall’ for a while at the near-human level until something cognitively stable can be developed, or that the AGI, even if somewhat unstable, must still be high-functioning enough to self-improve its intelligence.
Flycer’s explanation for better understanding:
Aguirre assigns roughly a 1% probability to AGI arising in the next ten years, and roughly 10% over the next thirty years. These figures combine the chance that his own skeptical analysis is wrong with the somewhat higher estimates that AI experts tend to give. He also cautions that progress may stall for a while at the near-human level until something cognitively stable can be developed, or that an AGI, even if somewhat unstable, would still need to be high-functioning enough to improve its own intelligence.
The future of humanity with AGI / HLMI / transformative AI:
So even if AGI is a long way away, I’m deeply pessimistic about what will happen ‘by default’ if we get it. […] But when you’re talking about something that could radically determine the future (or future existence of) humanity, 75% confidence is not enough. 90% is not enough. 99% is not enough! We would never have built the LHC if there was a 1% (let alone 10%) chance of it actually spawning black holes.
Flycer’s Secondary Explanation:
Aguirre is deeply pessimistic about what would happen by default if AGI were achieved. He argues that even 99% confidence is not enough when the stakes are humanity’s future, citing the LHC as an example: it would never have been built if there had been even a 1% chance of it spawning black holes.
About:
Anthony Aguirre is a physicist and cosmologist who has made significant contributions to theoretical physics. Born in 1970 in California, USA, he grew up in a family of scientists. He received his undergraduate degree in physics from the University of California, Berkeley, and earned his Ph.D. in astrophysics from Harvard University.

Aguirre’s research focuses on the early universe, the nature of dark matter and dark energy, and the possibility of multiple universes. He has published numerous papers in scientific journals and has received several awards for his contributions to physics.

In addition to his research, Aguirre is a dedicated educator and mentor. He has taught at several universities, including the University of California, Santa Cruz, and the University of California, San Diego, and has mentored many students and young scientists, inspiring them to pursue careers in physics and cosmology.

Aguirre is a fellow of the American Physical Society and a member of the American Astronomical Society. He is also a co-founder of the Future of Life Institute, a non-profit organization that promotes the responsible use of technology and works to reduce the risks of emerging technologies. His research, teaching, and mentorship continue to shape the field of physics.
Source: https://www.edge.org/responses/q2015