Richard Carrier in 2011

This AI Prediction was made by Richard Carrier in 2011.

 

Predicted time for AGI / HLMI / transformative AI:

Types of advanced artificial intelligence: AGI (artificial general intelligence, AI that can perform a wide range of tasks at a human level), HLMI (human-level machine intelligence, machine performance that matches or exceeds human ability across most tasks), and transformative AI (AI that would profoundly change society and the world).

Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?

Richard Carrier: 2020/2040/2080

 

 

Richard Carrier's opinion about the Intelligence Explosion:

Q3: What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?

Richard Carrier: Depends on when it starts. For example, if we started a human-level AGI tomorrow, its ability to revise itself would be hugely limited by our slow and expensive infrastructure (e.g. manufacturing the new circuits, building the mainframe extensions, supplying them with power, debugging the system). In that context, "hours" and "days" have P → 0, but 5 years has P = 33% if someone is funding the project, and likewise 10 years has P = 67%, and 25 years, P = 90%. However, suppose human-level AGI is first realized in fifty years, when all these things can be done in a single room with relatively inexpensive automation and the power demands of any new system are no greater than are normally supplied to that room. Then P(days) = 90%. And with massively more advanced tech, say such as we might have in 2500, then P(hours) = 90%.
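Read as cumulative probabilities for the time T from the start of a human-level AGI to massive superhuman intelligence, Carrier's three scenarios can be restated compactly (the notation below is an editorial restatement, not Carrier's own):

\[
\begin{aligned}
\text{AGI started circa 2011, funded:}\quad & P(T \le 5\,\text{yr}) = 0.33, \;\; P(T \le 10\,\text{yr}) = 0.67, \;\; P(T \le 25\,\text{yr}) = 0.90,\\
\text{AGI first realized circa 2060:}\quad & P(T \le \text{days}) = 0.90,\\
\text{AGI built with circa-2500 technology:}\quad & P(T \le \text{hours}) = 0.90.
\end{aligned}
\]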

 

Flycer’s explanation for better understanding:

Richard Carrier's answer depends on the infrastructure available when human-level AGI first appears. If one were started today, slow and expensive hardware (building circuits, expanding mainframes, supplying power, debugging) would make self-improvement within hours or days essentially impossible, but he gives a 33% chance of massive superhuman intelligence within 5 years, 67% within 10, and 90% within 25, assuming continued funding. If human-level AGI instead arrives around fifty years from now, with cheap in-room automation, he puts the probability of a takeoff within days at 90%; with far-future (circa 2500) technology, 90% within hours.

 

 

The future of humanity with AGI / HLMI / transformative AI:

Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?

Richard Carrier: Here the relative probability is much higher that human extinction will result from benevolent AI, i.e. eventually Homo sapiens will be self-evidently obsolete and we will voluntarily transition to Homo cyberneticus. In other words, we will extinguish the Homo sapiens species ourselves, voluntarily. If you asked for a 10%/50%/90% deadline for this I would say 2500/3000/4000. […] So setting aside AI that merely kills some people, and only focusing on total extinction of Homo sapiens, we have:

P(voluntary human extinction by replacement | any AGI at all) = 90%

P(involuntary human extinction without replacement | badly done AGI type (a)) < 10^-20

[And that's taking into account an infinite deadline, because the probability steeply declines with every year after first opportunity, e.g. AI that doesn't do it the first chance it gets is rapidly less likely to as time goes on, so the total probability has a limit even at infinite time, and I would put that limit somewhere as here assigned.]

P(involuntary human extinction without replacement | badly done AGI type (b)) = 0.33 to 0.67

However, P(badly done AGI type (b)) < 10^-20
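Carrier's bracketed aside, that the total probability "has a limit even at infinite time" because the yearly risk declines steeply after the first opportunity, is a convergence claim. A minimal worked sketch, assuming (purely for illustration) a geometric decay that Carrier does not specify:

\[
P(\text{extinction ever}) \;=\; \sum_{t=1}^{\infty} p_1\, r^{\,t-1} \;=\; \frac{p_1}{1-r}, \qquad 0 < r < 1,
\]

where \(p_1\) is the probability that the AGI causes extinction in the first year it has the opportunity and \(r\) is the year-on-year decay factor. For example, \(p_1 = 5 \times 10^{-21}\) with \(r = 0.5\) gives a lifetime bound of \(10^{-20}\): the total stays finite no matter how long the horizon, which is the structure behind Carrier's assigned limit.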

 

Flycer’s Secondary Explanation:

Conditional on any AGI being built at all, Richard Carrier assigns a 90% probability to voluntary human extinction by replacement, i.e. humanity choosing to transition to a successor species. Conditional on a badly done AGI of type (a), he puts the probability of involuntary extinction without replacement below 10^-20, even over an infinite time horizon. Conditional on a badly done AGI of type (b), that probability rises to 0.33 to 0.67, but he considers type (b) itself to have probability below 10^-20.

 

 

About:

Richard Carrier is a historian and philosopher who has written extensively on the intersection of science and religion. He received his Ph.D. in Ancient History from Columbia University in 2008. Carrier has also commented on the prospect of superintelligent AI: he accepts that AI can eventually surpass human intelligence, and he has discussed safeguards such as aligning AI with human values and building fail-safe mechanisms to prevent harm, while (as the Q&A above shows) regarding involuntary human extinction from badly done AI as extremely unlikely. In addition to his work on AI, Carrier has written several books on the history of science and religion, including “Sense and Goodness without God: A Defense of Metaphysical Naturalism”.


Source: https://www.lesswrong.com/posts/dCTvFYNoLo6cQXvyK/q-and-a-with-richard-carrier-on-risks-from-ai

 

 

Keywords: AGI, Human Extinction, Voluntary