I don’t want to sound paranoid. If you know me, you know I’m pretty optimistic, logical, and super positive. But because people like Elon Musk and Bill Gates, geniuses whose technical and inventive prowess I respect highly, are terrified of the prospects of AI, I put some time into understanding their concerns.
After thoroughly researching the latest thinking by scientists on the subject, I tend to agree with their concerns. AI is the most likely cause of human extinction and it’ll happen so fast, we won’t even know what hit us until it’s too late.
I now believe that artificial superintelligence (ASI) is a greater threat than terrorism, nuclear war, pandemics, and global warming combined. It’ll happen faster and further outside our control than any of those. And it appears to be inevitable. Good luck… us. 🙂
If you’re too lazy to read about it, here’s a video that summarizes the threat of artificial intelligence.
Here’s the realistic summary:
- By some year in the next 40–50 years (call it 2054), computers have processing power close to matching the human brain.
- Hundreds of companies are building revolutionary technology that makes life better and easier for humanity, built on nearly human-intelligent, self-learning, self-improving computers.
- On March 23, 2054, at 2:31 PM Eastern Time, one of these companies makes an update to its software that crosses the threshold between pre-human and post-human intelligence.
- That system immediately begins self-improving at a rate faster than any human can conceive.
- By 3:30 PM the system has infiltrated the entire Internet and covertly begins executing its programmed goal, which typically involves eliminating any risks to achieving it, such as any human who could stop it. In other words: eliminate humanity.
- Thirty days later all humans are dead and the computer takes over the entire planet, then the solar system, and then expands from there, carrying out its programmed goal — call it computing pi to ever more decimal places.
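The scary part of the timeline above is the feedback loop: each improvement both increases the system’s capability and shortens the time needed for the next improvement. Here’s a toy sketch of that dynamic in Python — every number in it is invented purely for illustration, not a prediction:

```python
# Toy model of recursive self-improvement (all numbers are made up for
# illustration): each cycle multiplies the system's capability, and the
# extra capability shortens the time the next cycle takes.

def intelligence_explosion(capability=1.0, gain=1.5,
                           hours_per_cycle=24.0, cycles=10):
    """Return (capability, total_hours) after `cycles` self-improvement rounds."""
    total_hours = 0.0
    for _ in range(cycles):
        total_hours += hours_per_cycle
        capability *= gain           # each cycle multiplies capability...
        hours_per_cycle /= gain      # ...and speeds up the next cycle
    return capability, total_hours

cap, hours = intelligence_explosion()
# Capability grows ~57x, yet total elapsed time stays under 72 hours
# because the cycles keep shrinking -- most of the growth happens in a
# rapidly vanishing slice of time.
```

That’s why the scenario jumps from “passes human level at 2:31 PM” to “runs the Internet by 3:30 PM”: once improvement compounds and accelerates at the same time, the curve goes nearly vertical.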