This AI Prediction was made by Luke Muehlhauser in 2014.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (AI that can perform a wide range of tasks at a human level), HLMI (high-level machine intelligence: AI that matches or exceeds human performance across most tasks), and transformative AI (AI that could significantly impact society and the world).
Not provided
Opinion about the Intelligence Explosion from Luke Muehlhauser:
I immediately realized that Good’s conclusion followed directly from things I already believed, for example that intelligence is a product of cognitive algorithms, not magic.
Flycer’s explanation for better understanding:
The author immediately agreed with Good’s conclusion because it followed directly from beliefs he already held: in particular, that intelligence is a product of cognitive algorithms, not magic.
The future of humanity with AGI / HLMI / transformative AI:
As for myself, I’m pretty pessimistic. The superintelligence control problem looks much harder to solve than, say, the global risks from global warming or synthetic biology, and I don’t think our civilization’s competence and rationality are improving quickly enough for us to be able to solve the problem before the first machine superintelligence is built.
Flycer’s Secondary Explanation:
The author is pessimistic about the superintelligence control problem. He believes it is much harder to solve than other global risks, such as those from global warming or synthetic biology, and that our civilization’s competence and rationality are not improving quickly enough to solve it before the first machine superintelligence is built.
About:
Luke Muehlhauser is a prominent figure in the fields of artificial intelligence and effective altruism. He is a researcher, writer, and speaker who has dedicated his career to exploring the intersection of technology and ethics.

Muehlhauser began his academic journey at the University of Washington, where he earned a Bachelor of Science in mathematics and philosophy. He went on to pursue a Master of Arts in philosophy at the University of Wisconsin-Madison, where he focused on epistemology and philosophy of science.

After completing his graduate studies, Muehlhauser worked as a research fellow at the Singularity Institute for Artificial Intelligence (now known as the Machine Intelligence Research Institute), where he co-authored several influential papers on the risks and benefits of advanced artificial intelligence. In 2011 he became the institute’s executive director, a role he held until 2015. He was also a prolific contributor to the blog LessWrong, a hub for discussions on rationality, cognitive science, and effective altruism.

Muehlhauser has written extensively on topics such as existential risk, artificial intelligence, and effective altruism. His work has been featured in publications such as The New York Times, The Guardian, and The Atlantic.

Today, Muehlhauser continues to be a leading voice on artificial intelligence and ethics. He is a program officer at Open Philanthropy (formerly the Open Philanthropy Project), a research and grantmaking organization that aims to identify the most effective ways to improve the world.
Source: https://io9.gizmodo.com/can-we-build-an-artificial-superintelligence-that-wont-1501869007
Keywords: superintelligence, control problem, cognitive algorithms