"Recently while sharing a meal an acquaintance said something arresting. We were speaking, as happy pessimists do, about where the 21st century went wrong. We're almost a quarter-century into it, it's already taken on a certain general shape and character, and I'm not sure I see much good in it beyond advances in medicine and science. He said he was working on a theory: The 21st century so far has been a reverse Y2K.
By 12/31/99 the world was transfixed by a fear that all its mighty computers would go crazy as 23:59:59 clicked to 0:00:00. They wouldn't be able to roll over to 2000. The entire system would have a hiccup and the lights would go out. It didn't happen. Remedies were invented and may have saved the day.
It is since 2000, the acquaintance said, that the world's computers have caused havoc, in the social, cultural and political spheres. Few worried, watched or took countering steps. After all, 2000 turned out all right, so this probably would too. We accepted all the sludge -- algorithms designed to divide us, to give destructive messages to kids, to addict them to the product -- passively, without alarm.
We are accepting artificial intelligence the same way, passively, and hoping its promised benefits (in medicine and science again) will outweigh its grave and obvious threat. That threat is one Henry Kissinger warned of in these pages early this year. "What happens if this technology cannot be completely contained?" he and his co-authors asked. "What if an element of malice emerges in the AI?" Kissinger was a great diplomat and historian, but he had the imagination of an artist. AI and the possibility of nuclear war were the two great causes of his last years. He was worried about where this whole modern contraption was going.
I've written that a great icon of the age, the Apple logo -- the apple with the bite taken out of it -- seemed to me a conscious or unconscious expression that those involved in the development of our modern tech world understood on some level that their efforts were taking us back to Eden, to the pivotal moment when Eve and Adam ate the forbidden fruit. The serpent told Eve they would become all-knowing like God, in fact equal to God, and that is why God didn't want them to have it. She bit, and human beings were banished from the kindly garden and thrown into the rough cruel world. I believe those creating, fueling and funding AI want, possibly unconsciously, to be God, and think on some level they are God.
Many have warned of the destructive possibilities and capabilities of AI, but there are important thoughts on this in a recent New Yorker piece on Geoffrey Hinton, famously called the godfather of AI. It is a brilliantly written and thought-through profile by Joshua Rothman.
Mr. Hinton, 75, a Turing Award winner, had spent 30 years as a professor of computer science at the University of Toronto. He studied neural networks. Later he started a small research company that was bought by Google, and he worked there until earlier this year. Soon after leaving he began to warn of the "existential threat" AI poses. The more he used ChatGPT, the more uneasy he became. He worries that AI systems may start to think for themselves; they may attempt to take over human civilization, or eliminate it.
Mr. Hinton told Mr. Rothman that once, early in his research days, he saw a "frustrated AI." It was a computer attached to two TV cameras and a robot arm. The computer was told to assemble some blocks spread on a table. It tried, but it knew only how to recognize individual blocks. A pile of them left it confused. "It pulled back a little bit, and went bash," knocking them all around. "It couldn't deal with what was going on, so it changed it, violently."
Mr. Hinton says he doesn't regret his work, but he fears what powerful people will do with AI. Some might create "an autonomous lethal weapon" -- a self-directing killing machine. Mr. Hinton believes such weapons should be outlawed. But even benign autonomous systems can be destructive.
He believes AI can be approached with one of two attitudes: denial or stoicism. When people first hear of its potential for destruction they say it's not worth it, we have to stop it. But stopping it is a fantasy. "We need to think, How do we make it not as awful for humanity as it might be?"
Why, Mr. Rothman asks, don't we just unplug it? AI requires giant servers and data centers, all of which run on electricity.
I was glad to see this question asked, because I have wondered about it too.
Mr. Hinton said it's reasonable to ask if we wouldn't be better off without AI. "But it's not going to happen. Because of the way society is. And because of the competition between different nations." If the United Nations worked, maybe it could stop it. But China isn't going to.
I found this argument, which AI enthusiasts always make, more a rationale than a thought. If China took to hunting children for sport, would we do it? (Someone reading this in Silicon Valley, please say no.)
What is most urgently disturbing to me is that if America speeds forward with AI it is putting the fate of humanity in the hands of the men and women of Silicon Valley, who invented the internet as it is, including all its sludge. And there's something wrong with them. They're some new kind of human, brilliant in a deep yet narrow way, prattling on about connection and compassion but cold at the core. They seem apart from the great faiths of past millennia, apart from traditional moral or ethical systems or assumptions about life. C.S. Lewis once said words to the effect that empires rise and fall, cultures come and go, but the waiter who poured your coffee this morning is immortal because his soul is immortal. Such a thought would be familiar to many readers but would leave Silicon Valley blinking with bafflement. They're modern and beyond beyond. This one injects himself with the blood of people in their 20s in his quest for longevity; that one embraces extreme fasting. The Journal this summer reported on Silicon Valley executives: "Routine drug use has moved from an after-hours activity squarely into corporate culture." They see psychedelics -- ketamine, hallucinogenic mushrooms -- as "gateways to business breakthroughs."
Yes, by all means put the fate of the world in their hands. They're not particularly steady. OpenAI's Sam Altman, 38, the face of the movement, was famously fired last week and rehired days later, and no one seems to know for sure what it was about. You'd think we have a right to know. There was a story it was all due to an internal memo alerting the board to a dangerous new AI development. A major investor said this isn't true, which makes me feel so much better.
We are putting the fate of humanity in the hands of people not capable of holding it. We have to focus as if this is Y2K, only real." [1]
1. Noonan, Peggy. "Declarations: AI Is the Y2K Crisis, Only This Time It's Real." Wall Street Journal, Eastern edition, New York, N.Y., 2 Dec. 2023, p. A15.