James S. Berry in 2007

This AI Prediction was made by James S. Berry in 2007.

Predicted time for AGI / HLMI / transformative AI:

Once this definition exists, is well defined, and is generally known outside of the core AI community, I believe that it will become possible to create an AGI in 5 to 15 years.

Opinion about the Intelligence Explosion from James S. Berry:

n/a

The future of humanity with AGI / HLMI / transformative AI:

It might be too much to ask that our lawmakers would initially extend human-equivalent rights to an AGI […]. I might be wrong, but I have a bad feeling that it might not go so well for humans in the long run if it develops resentment towards us from being treated as property while it is still weaker than us.

Flycer’s Secondary Explanation:

Berry doubts that lawmakers would initially extend human-equivalent rights to an AGI. If an AGI is treated as property while it is still weaker than humans, it may develop resentment towards them, which could have negative consequences for humanity in the long run.

About:

James S. Berry is a computer scientist and entrepreneur who specializes in the development of AI and machine learning algorithms. He is the co-founder and CEO of New Knowledge, a startup that uses AI to detect and combat online disinformation campaigns. He has also worked on developing AI systems for a variety of other applications, including natural language processing and computer vision.

Source: https://web.archive.org/web/20110226225452/http://www.novamente.net/bruce/?p=54

Keywords: AGI, Rights, Resentment