Thomas Hills in 2007

This AI Prediction was made by Thomas Hills in 2007.


Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and transformative AI (AI that could significantly impact society and the world).

Or when will AI get an agenda and begin to pose a problem for what we can and cannot control in our creations? When will we stop being symbiotic with AI? I’m guessing all of these are really far away, if they happen at all.



Opinion about the Intelligence Explosion from Thomas Hills:


Flycer’s explanation for better understanding:

Hills suggests that AI could eventually pose a problem for what we can and cannot control in our creations, and that it is unclear when, if ever, AI will stop being symbiotic with humans. He regards these scenarios as far off, if they arise at all.



The future of humanity with AGI / HLMI / transformative AI:


Flycer’s Secondary Explanation:




Thomas Hills is a professor of psychology and co-director of the Behavioral and Brain Sciences Unit at the University of Warwick. His research focuses on the intersection of cognitive psychology, neuroscience, and computational modeling, with an emphasis on how humans learn and make decisions. Hills has authored numerous articles and books on these topics. He is also a Fellow of the Association for Psychological Science and a member of the Experimental Psychology Society.

Keywords: AI, Agenda, Control