John Smart (2003)

This AI Prediction was made by John Smart in 2003.


Predicted time for AGI / HLMI / transformative AI:

Using this simple model, I feel 68 percent confident that [singularity] will happen between 2040 and 2080, and 95 percent confident it will occur between 2020 and 2100.



Opinion about the Intelligence Explosion from John Smart:

[In] every scenario of “fast takeoff” or A.I. emergence that I’ve ever seen, the heroic individual toiling away in the lab at night to create HAL-9000 just doesn’t seem to understand the immense cycles of replication, variation, interaction, selection, and convergence in evolutionary development.


Flycer’s explanation for better understanding:

1. The idea of a single individual creating an artificial intelligence (A.I.) is not realistic.
2. The emergence of A.I. is the result of complex cycles of replication, variation, interaction, selection, and convergence.
3. This evolutionary development is necessary for A.I. to emerge.



The future of humanity with AGI / HLMI / transformative AI:

I’m glad to have friends that are carefully exploring this issue, but from my perspective their worries seem both premature and cautiously overstated. I strongly suspect that A.I.s, by virtue of having far greater learning ability than us, will be, must be, far more ethical than us.


Flycer’s Secondary Explanation:

Friends are carefully exploring the potential implications of A.I., but their worries may be premature and overstated. Because A.I.s will have far greater learning ability than humans, they will likely be far more ethical than us.




John Smart is a futurist and founder of the Acceleration Studies Foundation. He is known for his work on emerging technologies, particularly artificial intelligence and nanotechnology, and their potential impact on society. Smart has authored numerous publications on these topics and is a sought-after speaker and consultant.


Keywords: Singularity, A.I., Ethical