The company said it would give
outside programmers access to the latest version of its core artificial
intelligence technology.
The largest companies in the tech
industry have spent the year warning that development of artificial
intelligence technology is outpacing their wildest expectations and that they
need to limit who has access to it.
Mark Zuckerberg is doubling down on
a different approach: He’s giving it away.
Mr. Zuckerberg, the chief executive
of Meta, said on Tuesday that he planned to provide the code behind the
company’s latest and most advanced A.I. technology to developers and software
enthusiasts around the world free of charge.
The decision, similar to one that Meta made in February, could help
the company catch up with competitors like Google and Microsoft. Those companies have
moved more quickly to incorporate generative artificial intelligence — the
technology behind OpenAI’s popular ChatGPT chatbot — into their products.
“When software is open, more people
can scrutinize it to identify and fix potential issues,” Mr. Zuckerberg said in
a post to his personal Facebook page.
The latest version of Meta’s A.I.
was created with 40 percent more data than what the company released just a few
months ago and is believed to be considerably more powerful. And Meta is
providing a detailed road map that shows how developers can work with the vast
amount of data it has collected.
Researchers worry that generative
A.I. can supercharge the amount of disinformation and spam on the internet, and
that it presents dangers that even some of
its creators do not entirely understand.
Meta is sticking to a long-held belief
that allowing all sorts of programmers to tinker with technology is the best
way to improve it. Until recently, most A.I. researchers agreed with that. But
in the past year, companies like Google and OpenAI, a San Francisco start-up
that is working closely with Microsoft, have set limits on who has access to
their latest technology and placed controls around what can be done with it.
The companies say they are limiting
access because of safety concerns, but critics say they are also trying to
stifle competition. Meta argues that it is in everyone’s best interest to share
what it is working on.
“Meta has historically been a big
proponent of open platforms, and it has really worked well for us as a
company,” said Ahmad Al-Dahle, vice president of generative A.I. at Meta, in an
interview.
The move will make the software
“open source,” meaning the underlying computer code can be freely copied, modified and
reused. The technology, called LLaMA 2, provides everything anyone would need
to build online chatbots like ChatGPT. LLaMA 2 will be released under a
commercial license, which means developers can build their own businesses using
Meta’s underlying A.I. to power them — all for free.
By open-sourcing LLaMA 2, Meta can
capitalize on improvements made by programmers from outside the company while —
Meta executives hope — spurring A.I. experimentation.
Meta’s open-source approach is not new.
Companies often open-source technologies in an effort to catch up with rivals.
Fifteen years ago, Google open-sourced its Android mobile operating system to
better compete with Apple’s iPhone. While the iPhone had an early lead, Android
eventually became the dominant software used in smartphones.
But researchers argue that someone
could deploy Meta’s A.I. without the safeguards that tech giants like Google
and Microsoft often use to suppress toxic content. Newly created open-source
models could be used, for instance, to flood the internet with even more spam,
financial scams and disinformation.
LLaMA 2, short for Large Language
Model Meta AI, is what scientists call a large language model, or L.L.M.
Chatbots like ChatGPT and Google Bard are built with large language models.
The models are systems that learn
skills by analyzing enormous volumes of digital text, including Wikipedia
articles, books, online forum conversations and
chat logs. By pinpointing patterns in the text, these systems learn to generate
text of their own, including term papers, poetry and computer code. They can
even carry on a conversation.
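The pattern-finding described above can be illustrated in miniature. The toy script below (a hypothetical sketch, not Meta’s actual training code) builds a bigram model: it counts which word follows which in a tiny corpus, then predicts the next word from those counts — the same next-token idea that large language models carry out at vastly greater scale.

```python
from collections import defaultdict, Counter

# A toy corpus standing in for the "enormous volumes of digital text".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (bigram statistics).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus, or None."""
    followers = next_word_counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" more often than any other word
```

Real L.L.M.s replace these raw counts with billions of learned parameters, but the underlying task — predict the next token from the text seen so far — is the same.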
Meta is teaming up with Microsoft to
open-source LLaMA 2, which will run on Microsoft’s Azure cloud services. LLaMA
2 will also be available through other providers, including Amazon Web Services
and the company Hugging Face.
Dozens of Silicon Valley
technologists signed a statement of support
for the initiative, including the venture capitalist Reid Hoffman and
executives from Nvidia, Palo Alto Networks, Zoom and Dropbox.
Meta is not the only company to push
for open-source A.I. projects. The Technology Innovation Institute produced
Falcon LLM and published the code freely this year. MosaicML also offers
open-source software for training L.L.M.s.
Meta executives argue that their
strategy is not as risky as many believe. They say that people can already
generate large amounts of disinformation and hate speech without using A.I.,
and that such toxic material can be tightly policed on Meta’s social
networks, such as Facebook. They maintain that releasing the technology can
eventually strengthen the ability of Meta and other companies to fight back
against abuses of the software.
Meta did additional “Red Team”
testing of LLaMA 2 before releasing it, Mr. Al-Dahle said. That is a term for
testing software for potential misuse and figuring out ways to protect against
such abuse. The company will also release a responsible-use guide containing
best practices and guidelines for developers who wish to build programs using
the code.
But these tests and guidelines apply
to only one of the models that Meta is releasing, which will be trained and
fine-tuned in a way that contains guardrails and inhibits misuse. Developers
will also be able to use the code to create chatbots and programs without
guardrails, a move that skeptics see as a risk.
In February, Meta released the first
version of LLaMA to academics, government researchers and others. The company
also allowed academics to download LLaMA after it had been trained on vast
amounts of digital text. Scientists call this process “releasing the weights.”
It was a notable move because
analyzing all that digital data requires vast computing and financial
resources. With the weights, anyone can build a chatbot far more cheaply and
easily than from scratch.
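“Releasing the weights” can be sketched with a deliberately tiny example (a hypothetical illustration, not Meta’s actual release format): the expensive step is computing a model’s parameters during training; once those numbers are saved to a file, anyone can load them and run the model without redoing that work.

```python
import json

# Pretend these parameters came from an expensive training run.
trained_weights = {"w": 2.0, "b": 1.0}  # a tiny linear model: y = w*x + b

# "Releasing the weights": publish the parameters as a file.
with open("weights.json", "w") as f:
    json.dump(trained_weights, f)

# Anyone can now load the file and use the model without training it.
with open("weights.json") as f:
    weights = json.load(f)

def model(x):
    return weights["w"] * x + weights["b"]

print(model(3))  # 7.0
```

For LLaMA, the file holds billions of parameters rather than two, which is why downloading the weights is so much cheaper than the training run that produced them.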
Many in the tech industry believed
Meta set a dangerous precedent. After Meta shared its A.I. technology with a small group
of academics in February, one of the researchers leaked the
technology onto the public internet.
In a recent opinion piece in The Financial Times,
Nick Clegg, Meta’s president of global public policy, argued that it was “not
sustainable to keep foundational technology in the hands of just a few large
corporations,” and that companies that released open-source software had
historically been well served by doing so.
“I’m looking forward to seeing what
you all build!” Mr. Zuckerberg said in his post.