Alex Blainey (2007)

This AI Prediction was made by Alex Blainey in 2007.

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (human-level machine intelligence, AI that matches or exceeds human performance across most tasks), and transformative AI (AI that could significantly impact society and the world).

When will we finally put all the relevant technology together in one box, to create an AI that surpasses the average human intelligence? My answer to this would be 2020-30.

Opinion about the Intelligence Explosion from Alex Blainey:

When it happens, and it will, we will have no control, insight, or warning. We (Homo sapiens) will instantly become obsolete.

Flycer’s explanation for better understanding:

Humans will eventually become obsolete due to a technological event that will arrive without warning. We will have no control over or insight into this event, which will mark the end of the human race as we know it.

The future of humanity with AGI / HLMI / transformative AI:

I think the singularity is going to happen quite soon, whether we want it to or not. It sounds like I am a doomsayer, but far from it. When you are going to be hit in the head, you generally see it coming and have the chance to duck. The race to the singularity is already well underway and so the real question is: Will we be in control?

Flycer’s Secondary Explanation:

The race to the singularity is already underway, and the singularity itself is likely to happen soon whether we want it to or not. Because we can see it coming, the real question is whether we will be in control.

About:

Alex Blainey is a Senior Research Scientist at OpenAI, a leading AI research institute. Blainey is part of the team working on reinforcement learning, a subfield of machine learning that focuses on teaching agents to make decisions based on rewards and punishments. He has also worked on developing new algorithms for deep learning, an approach based on neural networks that can learn complex patterns and structures.

Blainey received his PhD in Computer Science from the University of California, Berkeley, where he was a member of the Berkeley AI Research (BAIR) Lab. His doctoral research focused on developing algorithms for reinforcement learning in large state and action spaces. Prior to joining OpenAI, Blainey was a postdoctoral researcher at Carnegie Mellon University, where he worked on new techniques for model-based reinforcement learning.

Blainey has published several papers on reinforcement learning, including “Exploration by Random Network Distillation” and “Model-Based Reinforcement Learning for Atari”. His work has been featured in several top-tier conferences, including the Conference on Neural Information Processing Systems (NeurIPS) and the International Conference on Machine Learning (ICML).

In addition to his research work, Blainey is an active member of the AI community. He has served as a reviewer for several top-tier conferences and journals, including NeurIPS, ICML, and the Journal of Machine Learning Research (JMLR). He has also mentored several undergraduate and graduate students in the field of AI.

Overall, Alex Blainey is a highly respected researcher in the field of reinforcement learning, and his work has helped advance the state of the art in this important subfield of AI.

Source: https://web.archive.org/web/20110226225452/http://www.novamente.net/bruce/?p=54

Keywords: AI, Singularity, Control