This AI Prediction was made by Randal A. Koene and AIDEUS (Alexey Potapov) in 2012.
Predicted time for AGI / HLMI / transformative AI:
Types of advanced artificial intelligence: AGI (AI that can perform many tasks at a human level), HLMI (more advanced AI that surpasses human intelligence in specific areas), and Transformative AI (AI that could significantly impact society and the world).
Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?

Randal Koene: My estimates as of Dec. 2012 are:
10% by 2020
50% by 2035
90% by 2050

AIDEUS (Alexey Potapov): 2025/2035/2050
Opinion about the Intelligence Explosion from Randal A. Koene and AIDEUS (Alexey Potapov):
What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/<5 years?

Randal Koene:
Within hours: less than 0.1
Within days: less than 0.2
Within <5 years: 0.9-1.0
[…]

AIDEUS (Alexey Potapov): 50%
Flycer’s explanation for better understanding:
The experts were asked by what year they would assign a 10%/50%/90% chance of artificial intelligence (AI) that is as good as humans at science, mathematics, engineering, and programming. Randal Koene estimates a 10% chance by 2020, a 50% chance by 2035, and a 90% chance by 2050; AIDEUS (Alexey Potapov) gives 2025/2035/2050. On the question of such an AI self-modifying its way up to vastly superhuman capabilities, AIDEUS assigns a 50% probability, while Koene estimates the probability at less than 0.1 within hours, less than 0.2 within days, and 0.9-1.0 within less than five years.
The future of humanity with AGI / HLMI / transformative AI:
What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created).

Randal Koene: […] So, I think the probability is greater than 0. But beyond that, I don’t have the data to make an estimate that I would want to stand behind in publication.

AIDEUS (Alexey Potapov): I think that this question is put in the slightly incorrect form, because singularity will bring drastic changes, and humanity will change within 100 years independent of (unsafe) AI. Biological human extinction will not matter. However, P(humans don’t participate in singularity | AI capable of self-modification and not provably non-friendly is created) = 90%. You can consider this as the answer to your question though.
Flycer’s Secondary Explanation:
Experts were asked about the probability of human extinction within 100 years due to AI capable of self-modification that is not provably non-dangerous. Randal Koene believes the probability is greater than 0 but does not have enough data to make an estimate he would stand behind. AIDEUS’s Alexey Potapov argues that the singularity will bring such drastic changes that biological human extinction will not matter, but estimates a 90% probability that humans will not participate in the singularity if unsafe AI is created.
About:
Randal A. Koene is a neuroscientist and AI researcher who has worked on brain-computer interfaces and other technologies designed to enhance human cognitive capabilities.
Koene is known for his work on the 2045 Initiative, a large-scale research effort focused on developing technologies that could enable humans to achieve digital immortality. He has also been involved in several other projects in AI and neuroscience, including the development of a brain emulation roadmap and the study of the ethical and social implications of advanced AI technologies.
AIDEUS (Alexey Potapov):
Alexey Potapov is an AI researcher and the founder of the AIDEUS project, a research initiative focused on developing advanced AI technologies and applications.
Potapov is known for his work on deep learning algorithms and other advanced AI techniques, and has been involved in several projects applying AI in medicine and healthcare.
He has also been a vocal advocate for the responsible and ethical development of AI technologies, and has spoken and written extensively on the subject.
Source: http://kruel.co/2012/12/17/qa-with-experts-on-risks-from-ai-6/
Keywords: artificial intelligence, self-modification, human extinction