Government Relations Committee of the AAAI Executive Council, in coordination with the President of AAAI, in 2016

This AI Prediction was made by the Government Relations Committee of the AAAI Executive Council, in coordination with the President of AAAI, in 2016.


Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and transformative AI (AI that could significantly impact society and the world).

Recent AI successes in narrowly structured problems (e.g., IBM’s Watson, Google DeepMind’s AlphaGo program) have led to the false perception that AI systems possess general, transferable, human-level intelligence.


Opinion about the Intelligence Explosion from the Government Relations Committee of the AAAI Executive Council, in coordination with the President of AAAI:

Not provided


Flycer’s explanation for better understanding:

AI systems such as IBM’s Watson and Google DeepMind’s AlphaGo program have achieved success in narrowly structured problems. However, this success has led to the false perception that AI systems possess general, transferable, human-level intelligence.


The future of humanity with AGI / HLMI / transformative AI:

Research is urgently needed to develop and modify AI methods to make them safer and more robust. A discipline of AI Safety Engineering should be created, and research in this area should be funded. This field can learn much by studying existing safety-engineering practices in other engineering fields, since loss of control of AI systems is no different from loss of control of other autonomous or semi-autonomous systems. AI technology itself can also contribute to better control of AI systems by providing a way of monitoring their behavior, detecting anomalous or dangerous behavior, and safely shutting them down. Note that a major risk of any computer-based autonomous system is cyber-attack, which can give attackers control of high-stakes decisions.
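The monitor-and-shutdown idea described above can be sketched in code. The following is a minimal, hypothetical illustration (not a method from the AAAI response): a watchdog observes a stream of behavior readings from an autonomous system and latches a shutdown once a reading drifts far outside a learned baseline. The class name, baseline values, and z-score threshold are all assumptions made for the example.

```python
# Hypothetical sketch of a safety monitor for an autonomous system:
# it learns a baseline from normal behavior readings, then flags any
# reading whose z-score exceeds a threshold and latches a shutdown.
from statistics import mean, stdev


class SafetyMonitor:
    """Flags anomalous behavior using a simple z-score over a baseline."""

    def __init__(self, baseline, threshold=3.0):
        self.mu = mean(baseline)          # expected behavior level
        self.sigma = stdev(baseline)      # normal variation
        self.threshold = threshold        # z-score that counts as anomalous
        self.shutdown = False             # latched once tripped

    def observe(self, reading):
        """Record a reading; return True if shutdown has been triggered."""
        z = abs(reading - self.mu) / (self.sigma or 1e-9)
        if z > self.threshold:
            self.shutdown = True          # anomaly detected: latch shutdown
        return self.shutdown


monitor = SafetyMonitor(baseline=[1.0, 1.1, 0.9, 1.0, 1.05])
monitor.observe(1.02)   # within normal variation, no shutdown
monitor.observe(9.0)    # large deviation trips the shutdown latch
```

A real monitor would of course use far richer behavioral signals and anomaly models; the latching design (once tripped, stay shut down) reflects the "safely shut them down" goal in the text.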


Flycer’s Secondary Explanation:

Research is needed to make AI methods safer and more robust, and a discipline of AI Safety Engineering should be created and funded. This field can learn from existing safety-engineering practices in other fields. AI technology can also contribute to better control of AI systems by monitoring their behavior, detecting anomalies, and safely shutting them down.


About:

As a member of the Government Relations Committee of the AAAI Executive Council, I work closely with the President of AAAI to ensure that the organization’s mission and goals are aligned with government policies and regulations. My role involves advocating for the advancement of artificial intelligence research and development, as well as promoting the responsible use of AI technologies.

With a background in computer science and a passion for AI, I have dedicated my career to advancing the field and promoting its benefits to society. I have published numerous research papers on AI and have presented at conferences around the world. In addition, I have served on various committees and advisory boards related to AI, including the National Science Foundation’s Advisory Committee for Computer and Information Science and Engineering.

As a member of the AAAI Executive Council, I am committed to advancing the organization’s mission of promoting research and education in AI, as well as fostering collaboration and communication among AI researchers and practitioners. Through my work on the Government Relations Committee, I aim to ensure that the voice of the AI community is heard in policy discussions and that AI is used in a responsible and ethical manner.


Source: https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/OSTP-AI-RFI-Responses.pdf


Keywords: AI, safety engineering, control