"On Friday, I joined a crowd of
popcorn-crunching neighbors to watch a late-night screening of Christopher
Nolan’s “Oppenheimer.” I haven’t seen a movie theater so packed — with some
polite reshuffling as ticket holders requested their assigned seats from movie
hoppers who had sneaked in — since before the pandemic.
The film left me a little cold.
Although it is undeniably a technical achievement and a cleverly constructed
movie, the way its characters sped through overly clever, self-aware dialogue
dealing with the weighty ethics of atomic weapons felt artificial and hollow.
People don’t talk that way in real life, do they?
Workday conversations, even among
scientists and engineers, don’t usually swirl around the potentially
civilization-altering consequences of the work. And yet somehow, this summer,
it feels like they do. We’re now grappling with how a new technology may shape
the future, and arguing over the perceived dangers of proceeding with what we
have recently learned is possible.
In many ways, the question of the
year is: “What should be done about A.I.?” In an interview with The
Guardian, Nolan described leading A.I. researchers as calling this their
“Oppenheimer moment.” But the discussion that’s arisen from
the advances in A.I. is more deliberate and complex than any movie dialogue.
A raft of opinions has been offered
on the race for artificial intelligence, most often calls for restraint, for
caution, for a halt, for a slowdown or for a complete pause. But whom are we
arguing against? In a guest essay, Alexander Karp, the chief
executive of Palantir Technologies, which develops software with
military applications, provides us with a clear version of the other side of
the debate. He argues for embracing the battlefield-changing potential of
artificially intelligent weapons. And he makes the case that, as with the
development of the atomic bomb, it would be irresponsible for the United States
not to lead this effort.
If you look at the wars around the
world without rose-tinted glasses, it’s hard to imagine A.I.-powered weaponry
not soon playing a decisive role on the battlefield. In recent years, drone
combat has changed the nature of infantry and mechanized warfare in the
conflicts in Nagorno-Karabakh and Ukraine. It’s
also not hard to imagine that, given the recent progress in A.I., more fully
autonomous and lethal packs of battle drones might be unleashed to cloud the
skies in whatever strife flares up in the years to come.
It’s not that I want to hear this
argument because I agree with it. If the question is whether we should build
creative and intelligent deadly machines reminiscent of “The Terminator,” the
answer to me is clear: We shouldn’t, and we should build consensus with our
adversaries on that point.
There’s also a second similarity
between today and when Oppenheimer’s team of scientists raced to develop the
bomb. As underscored in Nolan’s film, there was an element of uncertainty
before the first test explosion in New Mexico, known as Trinity: The physicist
Edward Teller had calculated that the blast might unleash an out-of-control
chain reaction, setting the atmosphere on fire in a nuclear blaze and ending
life on Earth.
Although Teller’s scenario was
determined to be exceedingly unlikely and the test went ahead, a parallel fear
lurks in the hearts of those developing artificial intelligence: that a test
system, learning from our global public archive of open-source code about
machine learning, may be able to gain agency and begin to improve its own
design. If allowed to do so, it might rapidly create a form of genuinely alien
intelligence that feels no further need for humanity. This is Teller’s
atmospheric blaze scenario but for A.I.
Karp is not a disinterested party
here. His company stands to benefit from increased investment in military A.I.
At the same time, in his writing and public speaking over the years, he has
maintained a consistent perspective and vigorous critique of the culture of
Silicon Valley that considers itself above the grubby problems of national
defense. His consistency makes him an excellent candidate to make this case.
I hope you’ll read Karp’s guest
essay to understand better the type of argument that will most likely prevail
in the halls of military power unless there’s a strong, immediate global effort
to defuse it.
If current dreams of artificial
intelligence are realized, this technology will soon become a disruptive force
throughout society and an autonomous, fearsome weapon for those who would use
it to dominate the battlefield. Without credible evidence that countries like
China, Russia, Iran and North Korea would sit out this arms race, I find it
hard to imagine a version of Karp’s view not ultimately becoming our reality.
Jeremy Ashkenas is the graphics
director for Opinion. @jashkenas