At a birthday party for Elon Musk in northern California wine country, late at night after cocktails, he and longtime friend Larry Page fell into an argument about the safety of artificial intelligence. There was nothing obvious to be concerned about at the time -- it was 2015, seven years before the release of ChatGPT. State-of-the-art AI models, playing games and recognizing dogs and cats, weren't much of a threat to humankind. But Musk was worried.
Page, then CEO of Google parent company Alphabet, pushed back.
MIT professor Max Tegmark, a guest at the party, recounted in his 2017 book "Life 3.0" that Page made a "passionate" argument for the idea that "digital life is the natural and desirable next step" in "cosmic evolution." Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win.
That, Musk responded, would be a formula for the doom of humanity. For the sin of placing humans over silicon-based life-forms, Page denigrated Musk as a "specieist" -- someone who assumes the moral superiority of his own species. Musk happily accepted the label. (Page did not respond to requests for comment.)
As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It's a niche position in the AI world, but its adherents include influential figures. Call them the Cheerful Apocalyptics.
I first encountered such views a couple of years ago through my X feed, when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who in March received the Turing Award, the highest award in computer science. Sutton wrote:
The argument for fear of AI appears to be:
1. AI scientists are trying to make entities that are smarter than current people.
2. If these entities are smarter than people, then they may become powerful.
3. That would be really bad, something greatly to be feared, an 'existential risk.'
The first two steps are clearly true, but the last one is not. Why shouldn't those who are the smartest become powerful?
This, for me, was something new. I was used to thinking of AI leaders and researchers in terms of two camps: on one hand, optimists who believed it's no problem to "align" AI models with human interests, and on the other, doomers who wanted to call a time-out before wayward super-intelligent AIs exterminate us. Now here was this third type of person, asking, what's the big deal, anyway?
In the field of AI research, the level of risk is commonly expressed as "p(doom)," that is, the probability of AI-driven doom for humankind. In 2023, a survey run by the nonprofit AI Impacts asked AI researchers for their estimates of p(doom) -- what probability they placed on "future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species." Almost half of the 1,300 respondents to the question gave a probability of 10% or higher. The average was 16%, or around one chance in six -- Russian roulette odds. These figures are in line with off-the-cuff estimates from Musk, Anthropic CEO Dario Amodei, and Yoshua Bengio, a key contributor to the foundations of modern AI.
I selfishly prefer having humans at the apex -- I'm human myself, on my good days -- so I wanted to learn more about why people should learn to accept AI doom. Sutton told me AIs are different from other human inventions in that they're analogous to children.
"When you have a child," Sutton said, "would you want a button that if they do the wrong thing, you can turn them off? That's much of the discussion about AI. It's just assumed we want to be able to control them."
But suppose a time came when they didn't like having humans around? If the AIs decided to wipe out humanity, would he be at peace with that?
"I don't think there's anything sacred about human DNA," Sutton said. "There are many species -- most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that."
And when that day comes? Goodbye, Homo sapiens?
"If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK." OK, that is, for the AIs to rid the universe of us, one way or another.
I wondered, how common is this idea among AI people? I caught up with Jaron Lanier, a polymathic musician, computer scientist, and pioneer of virtual reality. In an essay in the New Yorker in March, he mentioned in passing that he had been hearing a "crazy" idea at AI conferences: that people who have children become excessively committed to the human species.
He told me that in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties and anyplace else they might get together. (Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company.)
"There's a feeling that people can't be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way."
We should get out of the way, that is, because it's unjust to favor humans -- and because consciousness in the universe will be superior if AIs supplant us.
"The number of people who hold that belief is small," Lanier said, "but they happen to be positioned in stations of great influence. So it's not something one can ignore."
The closest thing to a founding document for the Cheerful Apocalyptics is "Mind Children," a 1988 book by Carnegie Mellon roboticist Hans Moravec. The title expresses the idea that intelligent robots would, in concept, be our children, and, in what he regarded as a happy outcome, would eventually replace us.
Moravec, who had a self-described obsession with artificial life, viewed the human mind as simply a collection of data; he envisioned that in some cases, a robot's mind would be a digital copy of a biological person's mind -- achieved through a process of uploading that he called "transmigration."
These ideas were later elaborated by the technologist Ray Kurzweil and the science-fiction writer Vernor Vinge. Kurzweil added a touch of romance to the story, predicting that posthuman nanobots, unhindered by human chauvinism, would spread across star systems.
Exactly how the extinction of humanity would come about is radically unknowable, say the Cheerful Apocalyptics. Once AIs are able to apply their intelligence to designing their next generations, their capabilities will skyrocket, leaving humans as the equivalent of mollusks in comparison. I.J. Good, a former Bletchley Park codebreaker turned AI researcher, foresaw this scenario in the 1960s, calling it an "intelligence explosion." At that point, humanity would be powerless against the wishes of AIs, which would have their own goals, whether hostile to us or simply wanting to use our resources toward some other priority.
At this point, you may be thinking to yourself: If killing someone is bad, and if mass murder is very bad, then the extinction of humanity must be very, very bad -- right?
What this fails to understand, according to the Cheerful Apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes.
Much as Darwin needed a popularizer -- Thomas Huxley, known as "Darwin's bulldog" -- for his ideas to reach wider discourse, the Cheerful Apocalyptics have their popularizer in Daniel Faggella. He's an AI autodidact who uses his podcast, blog and conferences to promote the idea of bringing about a "worthy successor" to humankind.
"The eternal locus of all moral value and volition until the heat death of the universe will not be f -- -- ing opposable thumbs," he told me. "I'm not sure opposable thumbs are steering the ship in, like, 20 years."
What Faggella has in common with some advocates of restrictions on AI is that, while he's OK with AI replacing humans, he doesn't want it to happen too quickly. Policymakers should try to stave it off until AIs are "worthy" -- that is, until they can carry the torch of consciousness. He doesn't want humans to be succeeded by the mindless equivalent of vacuum cleaners. That doesn't mean "worthy" AIs will be concerned about humans; even the hoped-for worthy successor is unlikely to care enough about us to keep us around indefinitely, if at all.
"Purely anthropocentric moral aspirations," he summed up, "are untenable."
I'm not so sure.
While the Cheerful Apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are unmissable.
The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome, in the original sense of inspiring awe, they view it as a slow, fragile vessel, ripe for obsolescence. The late MIT professor Joseph Weizenbaum, a pioneer AI researcher in the 1960s who created the first known chatbot, became a fierce critic of much AI research. He summed up Moravec's attitude bluntly: "He despises the body."
The Cheerful Apocalyptics' larger judgment is a version of the age-old maxim that "might makes right," but this time with higher intelligence as the supposed trump card. That is, it confers a superior claim to existence. Faggella, in an essay titled "Rightful Misanthropy," asked the rhetorical question, "Why maintain a species of biological husks" -- that is, humans -- "when vastly superior intelligences can be cultivated?"
One possible response is the Judeo-Christian idea that humankind was uniquely created in God's image. Of course, the Cheerful Apocalyptics would see any such spiritual belief as inadmissible.
But their view of intelligence alone as conferring rightful supremacy is itself a spiritual belief that needs to be defended or rejected. What does it imply for the moral rights of less intelligent humans versus smarter ones? What does it mean for theories of justice that are founded on the equal moral worth of persons?
The whole school of thought can sometimes feel like the ultimate revenge fantasy of disaffected smart kids, for whom the triumph of their AI proxies amounts to sweet victory over lesser mortals. Lanier suggested to me that some people in elite AI circles seemingly embraced the ideas of the Cheerful Apocalyptics because they grew up identifying with the nonbiological villains in science fiction movies, such as those of the Terminator and Matrix franchises. "Even if the AIs in those movies are kind of evil, they're superior, and from their perspective, people are just a nuisance to be gotten rid of."
Weizenbaum recognized this problem early on, denouncing the idea that "the machine becomes the measure of the human being." In 1998 he told an interviewer, "I believe the essential common ground between National Socialism and the ideas of Hans Moravec lies in the degradation of the human and the fantasy of a perfect new man that must be created at all costs. At the end of this perfection, however, man is no longer there."
Like some other radical doctrines, those of the Cheerful Apocalyptics amount to a closed system. If you resist belief, your views can be dismissed: either you're infected with the pro-human mind virus or you're biased by human arrogance. Fortunately for humankind, our biases in favor of our species would indeed be a powerful barrier to the acceptance of human extinction, provided that its proponents proclaim their ideas in the open and not just at parties and salons and behind laboratory doors.
"Do we really want more of what we have now?" Moravec once asked. "More millennia of the same old human soap opera?" I, for one, say yes.
---
David A. Price is the author of "Geniuses at War: Bletchley Park, Colossus, and the Dawn of the Digital Age." His forthcoming comic novel is "The Underachiever." [1]
1. Price, David A. "AI Doom? No Problem. Governments and experts are worried that a superintelligent AI could destroy humanity. For the 'Cheerful Apocalyptics' in Silicon Valley, that wouldn't be a bad thing." Review. The Wall Street Journal, Eastern edition, New York, N.Y., 4 Oct. 2025: C1.