This AI Prediction was made by Scott Alexander in 2015.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (artificial general intelligence: AI that can perform a wide range of tasks at a human level), HLMI (high-level machine intelligence: AI that can carry out most human tasks at least as well as a typical human), and transformative AI (AI whose impact would significantly transform society and the world)
By 2050 to 2100. Given these timelines, Alexander argues that early concern about AI risk is warranted rather than premature.
Opinion about the Intelligence Explosion from Scott Alexander:
First, a fast intelligence explosion – the amount of time between the first AI humanlike enough that it’s worth tinkering with its goals…
Flycer’s explanation for better understanding:
The article expresses concern about the potential risks of advanced AI. It highlights the possibility of a fast intelligence explosion, in which AI capability grows faster than our ability to adjust AI goals, and argues that these risks should be taken seriously and addressed proactively.
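One way to see why the window for "tinkering with its goals" could be short is to model recursive self-improvement as compounding growth. The sketch below is a minimal toy model; the threshold and growth-rate values are illustrative assumptions, not figures from Alexander's post.

# Toy model of a "fast takeoff": once an AI can improve its own design,
# capability compounds on itself each improvement cycle.
# All parameter values are hypothetical, chosen only for illustration.

capability = 1.0           # 1.0 = roughly human-level (assumed scale)
superintelligent = 1000.0  # arbitrary threshold for "vastly superhuman"
improvement_rate = 0.5     # assumed fractional gain per improvement cycle

cycles = 0
while capability < superintelligent:
    capability *= 1 + improvement_rate  # each cycle builds on the last
    cycles += 1

print(f"Cycles from human-level to the superhuman threshold: {cycles}")

Under these made-up parameters the gap closes in about 17 cycles; the point is only that exponential compounding can make the human-level-to-superhuman window brief, which is the "fast" in a fast intelligence explosion.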
The future of humanity with AGI / HLMI / transformative AI:
So based on the conjunction of all of these things, that suggests there’s at least a 20% probability that things we do now can help shift us from a negative singularity to a positive singularity in the future.
Flycer’s Secondary Explanation:
Alexander estimates at least a 20% probability that actions taken now can help shift the future from a negative singularity to a positive one, a figure he reaches by combining ("conjoining") his probability estimates for several underlying claims.
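As a rough illustration of how a conjunction of estimates can land near 20%, the sketch below multiplies several component probabilities together. The claims and numbers are hypothetical stand-ins for illustration only; they are not the values from Alexander's post.

# Illustrative sketch: multiplying component probability estimates
# to get a joint ("conjunction") probability of roughly 20%.
# The claims and values below are assumptions, NOT from the source.

steps = {
    "advanced AI arrives on a relevant timescale": 0.8,
    "its goals can go wrong by default": 0.7,
    "work done now can influence the outcome": 0.5,
    "that influence shifts a negative singularity to a positive one": 0.75,
}

p_total = 1.0
for claim, p in steps.items():
    p_total *= p  # conjunction: all claims must hold together
    print(f"P({claim}) = {p}")

print(f"Joint probability of the conjunction: {p_total:.2f}")  # 0.21 here

With these stand-in numbers the product comes to 0.21, showing how individually plausible claims can conjoin into a figure of "at least 20%".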
About:
Scott Alexander is the pen name of an American psychiatrist and writer, known for insightful and thought-provoking long-form essays.

In 2013 he launched the blog Slate Star Codex, which quickly gained a large following for its engaging style and careful analysis. The blog covers a wide range of topics, including medicine and psychiatry, rationality, politics, culture, philosophy, and artificial intelligence, and essays such as “Meditations on Moloch” have been widely acclaimed for their intellectual rigor and clarity of thought.

Alexander is a prominent voice in the rationalist community, and his writing on AI risk in particular has helped bring the topic to a broader audience. Since 2021 he has continued blogging at Astral Codex Ten, where his work continues to inspire and challenge readers.
Source: https://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk
Keywords: AI risk, intelligence explosion, singularity