This AI Prediction was made by Edward Fredkin in 1979.
Predicted time for AGI / HLMI / transformative AI:
n/a
Opinion about the Intelligence Explosion from Edward Fredkin:
Once artificial intelligences start getting smart, they’re going to be very smart very fast. What’s taken humans and their society tens of thousands of years is going to be a matter of hours with artificial intelligences.
Flycer’s explanation for better understanding:
Artificial intelligence is expected to improve itself far faster than human society developed. Progress that took humans and their societies tens of thousands of years could take a matter of hours for artificial intelligences. As a result, artificial intelligence could quickly surpass human intelligence.
The future of humanity with AGI / HLMI / transformative AI:
Eventually, no matter what we do there’ll be artificial intelligences with independent goals. I’m pretty much convinced of that. There may be a way to postpone it. There may even be a way to avoid it, I don’t know. But it’s very hard to have a machine that’s a million times smarter than you as your slave.
Flycer’s Secondary Explanation:
Artificial intelligences with independent goals may ultimately be inevitable, though it might be possible to delay or even avoid their emergence. In any case, it is very difficult to keep a machine that is vastly smarter than humans as a slave.
About:
Edward Fredkin was an American computer scientist and digital philosopher best known for his work on digital physics, which proposes that the universe is fundamentally a giant computer. He is also credited with originating the concept of digital philosophy and with inventing the Fredkin gate, a universal reversible logic gate used in the study of reversible computing. He was a professor at Carnegie Mellon University and a research professor at the Massachusetts Institute of Technology.
Source: http://lukemuehlhauser.com/fredkin-on-ai-risk-in-1979/
Keywords: Artificial Intelligence, Independent Goals, Million Times Smarter