Thomas G. Dietterich and Eric J. Horvitz in 2015

This AI Prediction was made by Thomas G. Dietterich and Eric J. Horvitz in 2015.

 

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and Transformative AI (AI that could significantly impact society and the world).

Not provided

 

 

Opinion about the Intelligence Explosion from Thomas G. Dietterich and Eric J. Horvitz:

While formal work has not been undertaken to deeply explore [the intelligence explosion] possibility, such a process runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning. However, processes of self-design and optimization might still lead to significant jumps in competencies.

 

Flycer’s explanation for better understanding:

The possibility of an intelligence explosion has not been deeply explored, but it runs counter to our current understanding of the limits that computational complexity places on learning and reasoning algorithms. However, self-design and optimization processes could still result in significant competency improvements.

 

 

The future of humanity with AGI / HLMI / transformative AI:

We believe computer scientists must continue to investigate and address concerns about the possibilities of the loss of control of machine intelligence via any pathway, even if we judge the risks to be very small and far in the future. More importantly, we urge the computer science research community to focus intensively on a second class of near-term challenges for AI. These risks are becoming salient as our society comes to rely on autonomous or semiautonomous computer systems to make high-stakes decisions.

 

Flycer’s Secondary Explanation:

Computer scientists should investigate and address concerns about the loss of control of machine intelligence, even if the risks are small and far in the future. The research community should focus on near-term challenges for AI, particularly as society relies more on autonomous or semiautonomous computer systems to make high-stakes decisions.

 

 

About:

Thomas G. Dietterich and Eric J. Horvitz are two renowned figures in the field of computer science and artificial intelligence. Thomas G. Dietterich is a professor emeritus at Oregon State University, where he founded the Machine Learning Group. He has made significant contributions to the development of machine learning algorithms and has been recognized with numerous awards, including the ACM SIGKDD Innovation Award and the AAAI Classic Paper Award. He has also served as president of the Association for Computational Learning and as founding president of the International Machine Learning Society.

Eric J. Horvitz is a technical fellow and director at Microsoft Research, where he leads the Adaptive Systems and Interaction Group. He is a pioneer in the field of probabilistic reasoning and decision-making under uncertainty, and his work has been instrumental in the development of intelligent systems that can reason about complex situations. He has received numerous awards for his contributions to the field, including the ACM-AAAI Allen Newell Award and the IEEE Intelligent Systems’ AI’s 10 to Watch Award.

Together, Dietterich and Horvitz have made significant contributions to the field of artificial intelligence, and their work has helped shape the way we think about intelligent systems and their potential applications. Their research has advanced machine learning and probabilistic reasoning, and their insights continue to inform the development of intelligent systems that can reason about complex situations and make decisions under uncertainty.

 

Source: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/CACM_Oct_2015-VP.pdf
