Tom Dietterich in 2014

This AI Prediction was made by Tom Dietterich in 2014.

 

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (artificial general intelligence, AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence, AI that matches or exceeds human performance across most tasks), and transformative AI (AI that could significantly impact society and the world).

Not provided

 

 

Opinion about the Intelligence Explosion from Tom Dietterich:

Although he does not rule out the possibility entirely, Professor Dietterich does not see evidence that a Singularity-like chain reaction of exponential recursive improvement in machine intelligence is likely.

 

Flycer’s explanation for better understanding:

Professor Dietterich does not completely dismiss the possibility of a Singularity-like chain reaction of exponential recursive improvement in machine intelligence. However, he does not see any evidence that it is likely to happen.

 

 

The future of humanity with AGI / HLMI / transformative AI:

In the long run, it would be beneficial to interest the National Science Foundation (NSF) in funding AI safety research.

 

Flycer’s Secondary Explanation:

In the long term, it would be advantageous to encourage the National Science Foundation (NSF) to fund AI safety research.

 

 

About:

Tom Dietterich is a renowned computer scientist and professor who has made significant contributions to the field of artificial intelligence (AI). He received his Ph.D. in computer science from Stanford University in 1984 and has since held various academic positions at Oregon State University, including serving as the founding director of the Intelligent Systems Research Laboratory.

Throughout his career, Dietterich has focused on developing machine learning algorithms and applying them to real-world problems. He has published over 200 research papers and has been recognized with numerous awards, including the AAAI Classic Paper Award and the ACM SIGKDD Innovation Award.

In addition to his academic work, Dietterich has also been involved in the development of several successful startups, including Cognex Corporation and BigML. He is a fellow of the Association for Computing Machinery (ACM) and the Association for the Advancement of Artificial Intelligence (AAAI).

Dietterich is also a passionate advocate for responsible AI development and has served on several advisory boards, including the National Science Foundation’s Computer and Information Science and Engineering (CISE) Directorate and the Partnership on AI. He continues to inspire and mentor the next generation of AI researchers and practitioners through his teaching and research at Oregon State University.


Source: https://files.givewell.org/files/conversations/TomDietterich4-28-14(public).pdf


Keywords: Singularity, machine intelligence, AI safety research