Tuesday, April 28, 2026

The CEO of OpenAI thinks human extinction is a best-case scenario

The CEO of OpenAI, Sam Altman, is on a messianic mission to bring about the Singularity, the moment at which artificial intelligence begins to self-improve. If AI is smart enough to build the next generation of even smarter AI systems, this will trigger an “intelligence explosion” resulting in an artificial superintelligence that is more “intelligent” than all of humanity combined.

Some call this “god-like AI.” Elon Musk describes it as “basically a digital god.” Many people, including Altman, argue that ASI will either annihilate humanity or usher in a utopian world of radical abundance, unlimited energy, immortality, and cosmic delights beyond our wildest imaginations. “I think the good case,” Altman says, “is just so unbelievably good that you sound like a really crazy person to start talking about it.” “The bad case,” he adds, “is, like, lights out for all of us.”

What everyone misses about Altman’s “good case” scenario is that it would also result in the extinction of our species. His version of “utopia” would entail the complete disappearance of humanity. In a 2017 blog post titled “The Merge,” he writes: “We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.”

In other words, we can die out once ASI arrives, or we can “survive” by “merging” with AI. This is “probably our best-case scenario” for making it in the post-Singularity world. Altman says that “merging” with AI “can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot.” Becoming best buddies with AI doesn’t sound like a true merge, though. I know of people who’ve developed intimate relationships with AI, but I wouldn’t consider them to have merged with the machines.

What Altman is really getting at is far more radical. Elsewhere in the essay, he writes that if two different species both want the same thing and only one can have it — in this case, to be the dominant species on the planet and beyond — they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

The two “species” here are humans and ASI. Both want to dominate, Altman says, but only one can. Since there’s no way for ASI to become a biological human, the only other option is for humans to become digital beings like the ASI. That’s the sole way for us to form “one team” — humanity becoming the new species to which ASI belongs.

Altman says as much in a 2016 interview with The New Yorker. “We need to level up humans,” he declares, “because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever.” He elaborates: “The merge has begun — and a merge is our best scenario. Any version without a merge will have conflict: we enslave the AI or it enslaves us. The full-on-crazy version of the merge is we get our brains uploaded into the cloud,” to which he adds, “I’d love that.”

Two years later, he signed up with a startup called Nectome to have his brain digitized when he dies, something he believes will become feasible in the near future. Altman is preparing to become an AI himself…

For the entire article: Sam Altman’s Dangerous Singularity Delusions

Émile P. Torres / Truthdig Contributor

