The billionaire plans to compete with OpenAI, the ChatGPT
developer he helped found, while calling out the potential harms of artificial
intelligence.
In December, Elon Musk became angry about the development of
artificial intelligence and put his foot down.
He had learned of a relationship between OpenAI, the
start-up behind the popular chatbot ChatGPT, and Twitter, which he had bought
in October for $44 billion. OpenAI was licensing Twitter’s data — a feed of
every tweet — for about $2 million a year to help build ChatGPT, two people
with knowledge of the matter said. Mr. Musk believed the A.I. start-up wasn’t
paying Twitter enough, they said.
So Mr. Musk cut OpenAI off from Twitter’s data, they said.
Since then, Mr. Musk has ramped up his own A.I. activities,
while arguing publicly about the technology’s hazards. He is in talks with
Jimmy Ba, a researcher and professor at the University of Toronto, to build a
new A.I. company called X.AI, three people with knowledge of the matter said.
He has hired top A.I. researchers from Google’s DeepMind at Twitter. And he has
spoken publicly about creating a rival to ChatGPT that generates politically
charged material without restrictions.
The actions are part of Mr. Musk’s long and complicated
history with A.I., governed by his contradictory views on whether the
technology will ultimately benefit or destroy humanity. Even as he recently
jump-started his A.I. projects, he also signed an open letter last month
calling for a six-month pause on the technology’s development because of its
“profound risks to society.”
And although Mr. Musk is pushing back against OpenAI and
plans to compete with it, he helped found the A.I. lab in 2015 as a nonprofit.
He has since said he has grown disillusioned with OpenAI because it no longer
operates as a nonprofit and is building technology that, in his view, takes
sides in political and social debates.
What Mr. Musk’s A.I. approach boils down to is doing it
himself. The 51-year-old billionaire, who also runs the electric carmaker Tesla
and the rocket company SpaceX, has long seen his own A.I. efforts as offering
better, safer alternatives than those of his competitors, according to people
who have discussed these matters with him.
“He believes that A.I. is going to be a major turning point and
that if it is poorly managed, it is going to be disastrous,” said Anthony
Aguirre, a theoretical cosmologist at the University of California, Santa Cruz,
and a founder of the Future of Life Institute, the organization behind the open
letter. “Like many others, he wonders: What are we going to do about that?”
Mr. Musk and Mr. Ba, who is known for creating a popular
algorithm used to train A.I. systems, did not respond to requests for comment.
Their discussions are continuing, the three people familiar with the matter
said.
A spokeswoman for OpenAI, Hannah Wong, said that although it
now generated profits for investors, it was still governed by a nonprofit and
its profits were capped.
Mr. Musk’s roots in A.I. date to 2011. At the time, he was
an early investor in DeepMind, a London start-up that set out in 2010 to build
artificial general intelligence, or A.G.I., a machine that can do anything the
human brain can. Less than four years later, Google acquired the 50-person
company for $650 million.
At a 2014 aerospace event at the Massachusetts Institute of
Technology, Mr. Musk indicated that he was hesitant to build A.I. himself.
“I think we need to be very careful about artificial
intelligence,” he said while answering audience questions. “With artificial intelligence,
we are summoning the demon.”
That winter, the Future of Life Institute, which explores
existential risks to humanity, organized a private conference in Puerto Rico
focused on the future of A.I. Mr. Musk gave a speech there, arguing that A.I.
could cross into dangerous territory without anyone realizing it and announced
that he would help fund the institute. He gave $10 million.
In the summer of 2015, Mr. Musk met privately with several
A.I. researchers and entrepreneurs during a dinner at the Rosewood, a hotel in
Menlo Park, Calif., famous for Silicon Valley deal-making. By the end of that
year, he and several others who attended the dinner — including Sam Altman,
then president of the start-up incubator Y Combinator, and Ilya Sutskever, a
top A.I. researcher — had founded OpenAI.
OpenAI was set up as a nonprofit, with Mr. Musk and others
pledging $1 billion in donations. The lab vowed to “open source” all its
research, meaning it would share its underlying software code with the world.
Mr. Musk and Mr. Altman argued that the threat of harmful A.I. would be
mitigated if everyone, rather than just tech giants like Google and Facebook,
had access to the technology.
But as OpenAI began building the technology that would
result in ChatGPT, many at the lab realized that openly sharing its software
could be dangerous. Using A.I., individuals and organizations can potentially
generate and distribute false information more quickly and efficiently than
they otherwise could. Many OpenAI employees said the lab should keep some of
its ideas and code from the public.
In 2018, Mr. Musk resigned from OpenAI’s board, partly
because of his growing conflict of interest with the organization, two people
familiar with the matter said. By then, he was building his own A.I. project at
Tesla — Autopilot, the driver-assistance technology that automatically steers,
accelerates and brakes cars on highways. To do so, he poached a key employee from
OpenAI.
In a recent interview, Mr. Altman declined to discuss Mr.
Musk specifically, but said Mr. Musk’s breakup with OpenAI was one of many
splits at the company over the years.
“There is disagreement, mistrust, egos,” Mr. Altman said.
“The closer people are to being pointed in the same direction, the more
contentious the disagreements are. You see this in sects and religious orders.
There are bitter fights between the closest people.”
After ChatGPT debuted in November, Mr. Musk grew
increasingly critical of OpenAI. “We don’t want this to be sort of a
profit-maximizing demon from hell, you know,” he said during an interview last
week with Tucker Carlson, the former Fox News host.
Mr. Musk renewed his complaints that A.I. was dangerous and
accelerated his own efforts to build it. At a Tesla investor event last month,
he called for regulators to protect society from A.I., even though his car
company has used A.I. systems to push the boundaries of self-driving
technologies that have been involved in fatal crashes.
That same day, Mr. Musk suggested in a tweet that Twitter
would use its own data to train technology along the lines of ChatGPT. Twitter
has hired two researchers from DeepMind, two people familiar with the hiring
said. The Information and Insider earlier reported details of the hires and
Twitter’s A.I. efforts.
During the interview last week with Mr. Carlson, Mr. Musk
said OpenAI was no longer serving as a check on the power of tech giants. He
wanted to build TruthGPT, he said, “a maximum-truth-seeking A.I. that tries to
understand the nature of the universe.”
Last month, Mr. Musk registered X.AI. The start-up is
incorporated in Nevada, according to the registration documents, which also
list the company’s officers as Mr. Musk and his financial manager, Jared
Birchall. The documents were earlier reported by The Wall Street Journal.
Experts who have discussed A.I. with Mr. Musk believe he is
sincere in his worries about the technology’s dangers, even as he builds it
himself. Others said his stance was influenced by other motivations, most
notably his efforts to promote and profit from his companies.
“He says the robots are going to kill us?” said Ryan Calo, a
professor at the University of Washington School of Law, who has attended A.I.
events alongside Mr. Musk. “A car that his company made has already killed
somebody.”