This AI Prediction was made by Nick Bostrom in 2014.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (artificial general intelligence, AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence, AI that matches or exceeds human performance at most tasks), and transformative AI (AI that could profoundly change society and the world).
It seems somewhat likely that it will happen sometime in this century, but we don’t know for sure.
Nick Bostrom's opinion about the intelligence explosion:
Since there is an especially strong prospect of explosive growth just after the crossover point, when the strong positive feedback loop of optimization power kicks in, a scenario of this kind is a serious possibility.
Flycer’s explanation for better understanding:
Explosive growth is a serious possibility, though not a certainty. Just after the crossover point, the moment when a system's own improvements begin to drive most of its further progress, a strong positive feedback loop of optimization power kicks in, making rapid, self-reinforcing growth especially likely.
The future of humanity with AGI / HLMI / transformative AI:
If we are threatened with existential catastrophe as the default outcome of an intelligence explosion, our thinking must immediately turn to the search for countermeasures.
Flycer’s Secondary Explanation:
If an intelligence explosion would lead to existential catastrophe by default, the search for countermeasures cannot wait. Our thinking must turn immediately to finding ways to prevent such an outcome.
About:
Nick Bostrom is a Swedish philosopher and futurist, widely recognized for his contributions to the study of existential risk and artificial intelligence. Born in Helsingborg, Sweden in 1973, Bostrom received his undergraduate degree in philosophy, mathematics, and physics from the University of Gothenburg. He then earned a Ph.D. in philosophy from the London School of Economics.

Bostrom is the founding director of the Future of Humanity Institute at the University of Oxford, where he also serves as a professor of philosophy. He is also the director of the Strategic Artificial Intelligence Research Centre and a fellow of the Oxford Martin School. In addition to his academic work, Bostrom is a prolific author, having written numerous articles and books on topics ranging from the ethics of artificial intelligence to the potential risks posed by emerging technologies.

Bostrom's work has earned him numerous awards and honors. In 2009, he was named one of the Top 100 Global Thinkers by Foreign Policy magazine, and in 2013 he was awarded the Eugene R. Gannon Jr. Award for the Continued Pursuit of Human Advancement. He has also been a TED speaker and has given talks at conferences and events around the world.

Bostrom's research focuses on the potential risks posed by emerging technologies, particularly artificial intelligence, and on ways to mitigate those risks. He has argued that the development of superintelligent AI could pose an existential threat to humanity and has called for increased research and regulation in this area. He has also written extensively on the ethics of AI, including the question of whether machines can be conscious and the implications of AI for human values and decision-making.

Overall, Bostrom's work has had a significant impact on philosophy and the study of emerging technologies. His insights have helped shape the way we think about the future of humanity and the role technology will play in it.
Source: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742
Keywords: intelligence explosion, optimization power, existential catastrophe