The tech giant has publicly released its latest A.I.
technology so people can build their own chatbots. Rivals like Google say that
approach can be dangerous.
In February, Meta made an unusual move in the rapidly
evolving world of artificial intelligence: It decided to give away its A.I.
crown jewels.
The Silicon Valley giant, which owns Facebook, Instagram and
WhatsApp, had created an A.I. technology, called LLaMA, that can power online
chatbots. But instead of keeping the technology to itself, Meta released the
system’s underlying computer code into the wild.
Academics, government researchers and others who gave their
email addresses to Meta could download the code once the company had vetted
them.
Essentially, Meta was giving its A.I. technology away as
open-source software — computer code that can be freely copied, modified and
reused — providing outsiders with everything they needed to quickly build
chatbots of their own.
“The platform that will win will be the open one,” Yann
LeCun, Meta’s chief A.I. scientist, said in an interview.
As a race to lead A.I. heats up across Silicon Valley, Meta
is standing out from its rivals by taking a different approach to the
technology. Driven by its founder and chief executive, Mark Zuckerberg, Meta
believes that the smartest thing to do is share its underlying A.I. engines as
a way to spread its influence and ultimately move faster toward the future.
Its actions contrast with those of Google and OpenAI, the
two companies leading the new A.I. arms race. Worried that A.I. tools like
chatbots will be used to spread disinformation, hate speech and other toxic
content, those companies are becoming increasingly secretive about the methods
and software that underpin their A.I. products.
Google, OpenAI and others have been critical of Meta, saying
an unfettered open-source approach is dangerous. A.I.’s rapid rise in recent
months has raised alarm bells about the technology’s risks, including how it
could upend the job market if it is not properly deployed. And within days of
LLaMA’s release, the system leaked onto 4chan, the online message board known
for spreading false and misleading information.
“We want to think more carefully about giving away details
or open sourcing code” of A.I. technology, said Zoubin Ghahramani, a Google
vice president of research who helps oversee A.I. work. “Where can that lead to
misuse?”
Some within Google have also wondered if open-sourcing A.I.
technology may pose a competitive threat. In a memo this month, which was
leaked on the online publication Semianalysis.com, a Google engineer warned
colleagues that the rise of open-source software like LLaMA could cause Google
and OpenAI to lose their lead in A.I.
But Meta said it saw no reason to keep its code to itself.
The growing secrecy at Google and OpenAI is a “huge mistake,” Dr. LeCun said,
and a “really bad take on what is happening.” He argues that consumers and
governments will refuse to embrace A.I. unless it is outside the control of companies
like Google and Meta.
“Do you want every A.I. system to be under the control of a
couple of powerful American companies?” he asked.
OpenAI declined to comment.
Meta’s open-source approach to A.I. is not novel. The
history of technology is littered with battles between open source and
proprietary, or closed, systems. Some hoard the most important tools that are
used to build tomorrow’s computing platforms, while others give those tools
away. Most recently, Google open-sourced the Android mobile operating system to
take on Apple’s dominance in smartphones.
Many companies have openly shared their A.I. technologies in
the past, at the insistence of researchers. But their tactics are changing
because of the race around A.I. That shift began last year when OpenAI released
ChatGPT. The chatbot’s wild success wowed consumers and kicked up the
competition in the A.I. field, with Google moving quickly to incorporate more A.I.
into its products and Microsoft investing $13 billion in OpenAI.
While Google, Microsoft and OpenAI have since received most
of the attention in A.I., Meta has also invested in the technology for nearly a
decade. The company has spent billions of dollars building the software and the
hardware needed to power chatbots and other “generative A.I.,” which produce
text, images and other media on their own.
In recent months, Meta has worked furiously behind the
scenes to weave its years of A.I. research and development into new products.
Mr. Zuckerberg is focused on making the company an A.I. leader, holding weekly
meetings on the topic with his executive team and product leaders.
On Thursday, in a sign of its commitment to A.I., Meta said
it had designed a new computer chip and improved a supercomputer
specifically for building A.I. technologies. It is also designing a new
computer data center with an eye toward the creation of A.I.
“We’ve been building advanced infrastructure for A.I. for
years now, and this work reflects long-term efforts that will enable even more
advances and better use of this technology across everything we do,” Mr.
Zuckerberg said.
Meta’s biggest A.I. move in recent months was releasing
LLaMA, which is what is known as a large language model, or L.L.M. (LLaMA
stands for “Large Language Model Meta AI.”) L.L.M.s are systems that learn
skills by analyzing vast amounts of text, including books, Wikipedia articles
and chat logs. ChatGPT and Google’s Bard chatbot are also built atop such systems.
L.L.M.s pinpoint patterns in the text they analyze and learn
to generate text of their own, including term papers, blog posts, poetry and
computer code. They can even carry on complex conversations.
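To make that idea concrete, here is a minimal sketch of what “generating text” looks like in code. It uses the open-source Hugging Face transformers library with “gpt2,” a small, freely downloadable model standing in for LLaMA, whose weights must be requested from Meta:

```python
# A minimal sketch of text generation with a large language model, using the
# Hugging Face "transformers" library. "gpt2" is a small, publicly available
# stand-in; LLaMA's own weights are distributed by Meta on request.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The model extends the prompt one token at a time, drawing on the patterns
# it learned from its training text.
inputs = tokenizer("Open-source A.I. is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```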
In February, Meta openly released LLaMA, allowing academics,
government researchers and others who provided their email address to download
the code and use it to build a chatbot of their own.
But the company went further than many other open-source
A.I. projects. It allowed people to download a version of LLaMA after it had
been trained on enormous amounts of digital text culled from the internet.
Researchers call this “releasing the weights,” referring to the particular
mathematical values learned by the system as it analyzes data.
This was significant because analyzing all that data
typically requires hundreds of specialized computer chips and tens of millions
of dollars, resources most companies do not have. Those who have the weights
can deploy the software quickly, easily and cheaply, spending a fraction of
what it would otherwise cost to create such powerful software.
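In practice, having the weights reduces deployment to a single load call. A hedged sketch, again using the transformers library; the directory path is hypothetical and stands for wherever a downloaded copy of the LLaMA weights has been unpacked:

```python
# A sketch of deploying already-trained weights. Training LLaMA from scratch
# would require hundreds of specialized chips; loading released weights does
# not. The path below is hypothetical: it stands for a local copy of the
# downloaded weights in Hugging Face format.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/path/to/llama-weights")
model = LlamaForCausalLM.from_pretrained("/path/to/llama-weights")
```

From there, generating text works exactly as in the earlier sketch; no training hardware is involved.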
As a result, many in the tech industry believed Meta had set
a dangerous precedent. And within days, someone released the LLaMA weights onto
4chan.
At Stanford University, researchers used Meta’s new
technology to build their own A.I. system, which was made available on the
internet. A Stanford researcher named Moussa Doumbouya soon used it to generate
problematic text, according to screenshots seen by The New York Times. In one
instance, the system provided instructions for disposing of a dead body without
being caught. It also generated racist material, including comments that
supported the views of Adolf Hitler.
In a private chat among the researchers, which was seen by
The Times, Mr. Doumbouya said distributing the technology to the public would
be like “a grenade available to everyone in a grocery store.” He did not
respond to a request for comment.
Stanford promptly removed the A.I. system from the internet.
The project was designed to provide researchers with technology that “captured
the behaviors of cutting-edge A.I. models,” said Tatsunori Hashimoto, the
Stanford professor who led the project. “We took the demo down as we became
increasingly concerned about misuse potential beyond a research setting.”
Dr. LeCun argues that this kind of technology is not as
dangerous as it might seem. He said small numbers of individuals could already
generate and spread disinformation and hate speech. He added that toxic
material could be tightly restricted by social networks such as Facebook.
“You can’t prevent people from creating nonsense or
dangerous information or whatever,” he said. “But you can stop it from being
disseminated.”
For Meta, more people using open-source software can also
level the playing field as it competes with OpenAI, Microsoft and Google. If
every software developer in the world builds programs using Meta’s tools, it
could help entrench the company for the next wave of innovation, staving off
potential irrelevance.
Dr. LeCun also pointed to recent history to explain why Meta
was committed to open-sourcing A.I. technology. He said the evolution of the
consumer internet was the result of open, communal standards that helped build
the fastest, most widespread knowledge-sharing network the world had ever seen.
“Progress is faster when it is open,” he said. “You have a
more vibrant ecosystem where everyone can contribute.”