Seven leading A.I. companies in the United States have
agreed to voluntary safeguards on the technology’s development, the White House
announced on Friday, pledging to manage the risks of the new tools even as they
compete over the potential of artificial intelligence.
The seven companies — Amazon, Anthropic, Google, Inflection,
Meta, Microsoft and OpenAI — formally made their commitment to new standards
for safety, security and trust at a meeting with President Biden at the White
House on Friday afternoon.
“We must be cleareyed and vigilant about the threats emerging technologies can pose — don’t have to but can pose
— to our democracy and our values,” Mr. Biden said in brief remarks from the
Roosevelt Room at the White House.
“This is a serious responsibility; we have to get it right,”
he said, flanked by the executives from the companies. “And there’s enormous,
enormous potential upside as well.”
The announcement comes as the companies are racing to outdo
each other with versions of A.I. that offer powerful new ways to create text,
photos, music and video without human input. But the technological leaps have
prompted fears about the spread of disinformation and dire warnings of a “risk
of extinction” as artificial intelligence becomes more sophisticated and
humanlike.
The voluntary safeguards are only an early, tentative step
as Washington and governments across the world seek to put in place legal and
regulatory frameworks for the development of artificial intelligence. The
agreements include testing products for security risks and using watermarks to
make sure consumers can spot A.I.-generated material.
But lawmakers have struggled to regulate social media and other technologies in ways that keep pace with rapid change.
The White House offered no details of a forthcoming
presidential executive order that aims to deal with another problem: how to
control the ability of China and other competitors to get ahold of the new
artificial intelligence programs, or the components used to develop them.
The order is expected to involve new restrictions on
advanced semiconductors and restrictions on the export of the large language
models. Those are hard to secure — much of the software can fit, compressed, on
a thumb drive.
An executive order could provoke more opposition from the
industry than Friday’s voluntary commitments, which experts said were already
reflected in the practices of the companies involved. The promises will neither restrain the plans of the A.I. companies nor hinder the development of their technologies. And as voluntary commitments, they will not be enforced by
government regulators.
“We are pleased to
make these voluntary commitments alongside others in the sector,” Nick Clegg,
the president of global affairs at Meta, the parent company of Facebook, said
in a statement. “They are an important first step in ensuring responsible
guardrails are established for A.I. and they create a model for other
governments to follow.”
As part of the safeguards, the companies agreed to security
testing, in part by independent experts; research on bias and privacy concerns;
information sharing about risks with governments and other organizations;
development of tools to fight societal challenges like climate change; and
transparency measures to identify A.I.-generated material.
In a statement announcing the agreements, the Biden
administration said the companies must ensure that “innovation doesn’t come at
the expense of Americans’ rights and safety.”
“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the administration said.
Brad Smith, the president of Microsoft and one of the
executives attending the White House meeting, said his company endorsed the
voluntary safeguards.
“By moving quickly, the White House’s commitments create a
foundation to help ensure the promise of A.I. stays ahead of its risks,” Mr.
Smith said.
Anna Makanju, the vice president of global affairs at
OpenAI, described the announcement as “part of our ongoing collaboration with
governments, civil society organizations and others around the world to advance
AI governance.”
For the companies, the standards described Friday serve two purposes: as an effort to forestall, or shape, legislative and regulatory moves with self-policing, and as a signal that they are dealing with the new technology thoughtfully and proactively.
But the rules they agreed to are largely a lowest common denominator and can be interpreted differently by each company. For
example, the firms committed to strict cybersecurity measures around the data
used to make the language models on which generative A.I. programs are
developed. But there is no specificity about what that means, and the companies
would have an interest in protecting their intellectual property anyway.
And even the most careful companies are vulnerable. Microsoft,
one of the firms attending the White House event with Mr. Biden, scrambled last
week to counter a Chinese government-organized hack on the private emails of
American officials who were dealing with China. It now appears that China stole, or somehow obtained, a “private key” held by Microsoft that is used to authenticate emails, one of the company’s most closely guarded secrets.
Given such risks, the agreement is unlikely to slow the
efforts to pass legislation and impose regulation on the emerging technology.
Paul Barrett, the deputy director of the Stern Center for
Business and Human Rights at New York University, said that more needed to be
done to protect against the dangers that artificial intelligence posed to
society.
“The voluntary commitments announced today are not
enforceable, which is why it’s vital that Congress, together with the White
House, promptly crafts legislation requiring transparency, privacy protections,
and stepped-up research on the wide range of risks posed by generative A.I.,”
Mr. Barrett said in a statement.
European regulators are poised to adopt A.I. laws later this
year, which has prompted many of the companies to encourage U.S. regulations.
Several lawmakers have introduced bills that would require A.I. companies to obtain licenses to release their technologies, create a federal agency to oversee the industry, and impose data privacy requirements. But members of Congress are far from agreement on rules.
Lawmakers have been grappling with how to address the ascent
of A.I. technology, with some focused on risks to consumers and others acutely
concerned about falling behind adversaries, particularly China, in the race for
dominance in the field.
This week, the House committee on competition with China
sent bipartisan letters to U.S.-based venture capital firms, demanding a
reckoning over investments they had made in Chinese A.I. and semiconductor
companies. For months, a variety of House and Senate panels have been
questioning the A.I. industry’s most influential entrepreneurs and critics to
determine what sort of legislative guardrails and incentives Congress ought to
be exploring.
Many of those witnesses, including Sam Altman of OpenAI,
have implored lawmakers to regulate the A.I. industry, pointing out the
potential for the new technology to cause undue harm. But that regulation has
been slow to get underway in Congress, where many lawmakers still struggle to
grasp what exactly A.I. technology is.
In an attempt to improve lawmakers’ understanding, Senator
Chuck Schumer, Democrat of New York and the majority leader, began a series of
sessions this summer to hear from government officials and experts about the
merits and dangers of artificial intelligence across a number of fields.