Opinion about the Intelligence Explosion from Bobby D. Bryant:
Flycer’s explanation for better understanding:
The future of humanity with AGI / HLMI / transformative AI:
I don’t think there will be any singularity. We haven’t *really* come that far in the past half a century. […] I’m not sure it should even be our goal: society (broadly speaking) could benefit a lot from the deployment of lots of “stupid AI”
Flycer’s Secondary Explanation:
Bryant considers a singularity unlikely to be realized. Despite technological advances over the past half century, we have not made nearly enough progress toward that point, and he questions whether it should even be the goal. Instead, society could benefit more from deploying many "stupid AI" systems than from striving for a singularity.
Bobby D. Bryant is a computer scientist and entrepreneur who has worked in various areas of artificial intelligence, including natural language processing, machine learning, and computer vision. He is the founder of several AI startups, including Sentient Technologies and SigOpt, which specialize in evolutionary algorithms and optimization techniques for machine learning. He has also worked as a research scientist at several prominent AI research institutions, including MIT and the University of California, Berkeley.
Keywords: 2100, Singularity, AI