"Having been confined to academic discussions and tech conference gab-fests in years past, the question of artificial intelligence has finally caught the public's attention. ChatGPT, Dall-E 2 and other new programs have demonstrated that computers can generate sentences and images that closely resemble man-made creations.
AI researchers are sounding the alarm. In March, some of the field's leading lights joined with tech pioneers such as Elon Musk and Steve Wozniak to call for a six-month pause on AI experiments because "advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Last month the leadership of OpenAI, which created ChatGPT, called for "something like an IAEA" -- the International Atomic Energy Agency -- "for superintelligence efforts," which would "inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc." If the alarmists are right about the dangers of AI and try to fix it with an IAEA-style solution, then we're already doomed.
Nuclear analogies feature prominently in conversations about AI risk. OpenAI CEO Sam Altman, who has previously asked Congress to regulate AI, frequently brings up the Manhattan Project when discussing his company. Henry Kissinger, not known for wild-eyed alarmism, has said that "AI could be as consequential as the advent of nuclear weapons -- but even less predictable."
The IAEA model is appealing for many people concerned about AI because its goals are analogous to theirs. The agency's mission is to promote the peaceful use of nuclear energy while preventing the spread of nuclear weapons. Many AI researchers would like their inventions to promote human flourishing without increasing human destructiveness.
Unfortunately, the technical methods of nuclear-style arms control can't be applied to computer programs. While nuclear weapons require significant infrastructure -- reactors, enrichment facilities and the like -- AI has a much smaller footprint. The OpenAI team thinks there may be a solution to this problem, since "tracking compute and energy usage could go a long way" to monitoring compliance.
It would still come up short. As the OpenAI team admits, "the technical capability to make a superintelligence safe" is "an open research question." Without a known standard, there is no way to measure compliance.
The IAEA's primary failing isn't measuring compliance but enforcing it. When the agency was founded in 1957, there were three nuclear states: the U.S., the Soviet Union and Britain. Under the IAEA's watch, that number has tripled.
Geopolitical rivalries caused nuclear proliferation during the Cold War, when France, China and India detonated nuclear devices and Israel developed its own bomb. Although the Americans and Soviets wanted to restrict nuclear weapons, neither wanted to weaken their coalitions by alienating partners who were on the pathway to the bomb.
The IAEA failed in its nonproliferation mission even after the Soviet Union collapsed. Pakistan and North Korea have tested the bomb since then, and Iran is enriching uranium near weapons grade. Israel has had some notable nonproliferation successes in Iraq and Syria, but generally only a great power can enforce IAEA compliance. With the dubious exception of Iraq, the U.S. has decided that nuclear proliferation is preferable to war.
China's growing power and belligerence are among the many factors that will make an international AI-control regime challenging. China has been one of the most profligate nuclear proliferators, and it is likely to push the boundaries of weapons-related AI -- because AI is less likely to launch a coup or disobey orders than China's military leadership is, and because Beijing will be able to control international partners that rely on Chinese-developed AI.
But even friendly democracies will chafe at AI limitations. Computing technologies will unlock tremendous economic benefits, and many of these inventions will have military implications. The American military will have to use AI, and while most of America's allies and partners are willing to shelter under its nuclear umbrella, they aren't likely to choke off their own economic growth if Washington imposes an AI double standard.
Ironically, OpenAI wants to adopt the IAEA model even as nuclear nonproliferation's future has grown increasingly dubious. North Korea's and Iran's nuclear programs are forcing American partners and allies across Asia to reassess how much they need nuclear weapons of their own.
Ultimately, nonproliferation and AI-risk control share the same weakness: They underestimate the human factor.
---
Mr. Watson is associate director of the Hudson Institute's Center for the Future of Liberal Society. [1]
1. Watson, Mike. "IAEA for AI? That Model Has Already Failed." Wall Street Journal, Eastern edition, New York, N.Y., 2 June 2023, p. A15.