David Chalmers (2010)

This AI Prediction was made by David Chalmers in 2010.

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and transformative AI (AI that could significantly impact society and the world).

Nevertheless, my credence that there will be human-level AI before 2100 is somewhere over one-half.

Opinion about the Intelligence Explosion from David Chalmers:

Given the way that computer technology always advances, it is natural enough to think that once there is AI, AI+ will be just around the corner. And the argument for the intelligence explosion suggests a rapid step from AI+ to AI++ soon after that. I think it would not be unreasonable to suggest “within years” here (and some would suggest “within days” or even sooner for the second step), but as before “within decades” is conservative while still being interesting.

Flycer’s explanation for better understanding:

It is reasonable to believe that human-level AI will be achieved before 2100. Once AI is achieved, AI+ (AI of greater than human level) will follow shortly after, and AI++ (AI of far greater than human level) will follow shortly after that. The second step could take days or even less, but the conservative estimate is within decades.

The future of humanity with AGI / HLMI / transformative AI:

If there is a singularity, it will be one of the most important events in the history of the planet. An intelligence explosion has enormous potential benefits: a cure for all known diseases, an end to poverty, extraordinary scientific advances, and much more. It also has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet. So if there is even a small chance that there will be a singularity, we would do well to think about what forms it might take and whether there is anything we can do to influence the outcomes in a positive direction.

Flycer’s Secondary Explanation:

A singularity could be one of the most important events in history. It could bring great benefits, but also great dangers. Therefore, it is important to consider what form it might take and how to influence it in a positive direction.

About:

David Chalmers is a philosopher and cognitive scientist who is widely known for his contributions to the philosophy of consciousness and the philosophy of mind. Born in Australia in 1966, Chalmers earned his PhD in philosophy at Indiana University in 1993. He has held faculty positions at the University of Arizona, the Australian National University, and New York University, where he is currently a professor of philosophy.

Chalmers is particularly well known for his formulation of the “hard problem of consciousness,” the challenge of explaining subjective experience in purely physical terms. He has also written extensively on the concept of a “zombie,” a hypothetical being that behaves in all respects like a human being but lacks consciousness.

In addition to his work on consciousness, Chalmers has written on a wide range of other topics, including the philosophy of language, metaphysics, and epistemology. He is the author of several books, including “The Conscious Mind” (1996) and “Constructing the World” (2012), and the editor of the anthology “Philosophy of Mind: Classical and Contemporary Readings” (2002).

Source: http://www.nyu.edu/gsas/dept/philo/faculty/block/M&L2010/Papers/Chalmers.pdf

Keywords: AI, Singularity, Intelligence Explosion