"In under two years, OpenAI has gone from a little-known nonprofit lab working on obscure technology to a world-famous business whose chief executive is the face of the artificial-intelligence revolution.
That change is tearing the company apart.
On Wednesday, OpenAI's chief technology officer became the latest high-profile executive to announce an exit, departing as the company prepares to become a for-profit corporation. The exits are public eruptions of tensions that have been growing in the company behind ChatGPT since CEO Sam Altman returned following his brief ouster last year.
Some tensions are related to conflicts between OpenAI's original mission to develop AI for the public good and new initiatives to deploy moneymaking products.
Others relate to chaos and infighting among executives worthy of a soap opera.
CTO Mira Murati is one of more than 20 OpenAI researchers and executives who have quit this year, including several co-founders.
Current and former employees say OpenAI has rushed product announcements and safety testing, and lost its lead over rival AI developers.
They say Altman has been largely detached from the day-to-day operations -- a characterization the company disputes -- as he has flown around the globe promoting AI and his plans to raise huge sums of money to build the chips and data centers needed to power it.
OpenAI's CFO wrote to investors that it is on track to close a planned $6.5 billion funding round by next week, and would host calls afterward to introduce them to key leaders from its product and research teams.
OpenAI's focus on making steady improvements to ChatGPT and other products has borne fruit. Its annualized revenue -- a projection of yearly receipts based on recent results -- recently hit about $4 billion, more than triple from the same time last year. It is still losing billions a year, however.
Continued growth will depend on maintaining its technological edge. The company's next foundational model, GPT-5 -- expected to be a major leap in its development -- has faced setbacks and delays. Meanwhile, rival companies have launched AI models roughly on par with what OpenAI is offering.
Two of them, Anthropic and Elon Musk's xAI, were started by former OpenAI leaders.
The intensifying competition has frustrated researchers who valued working at OpenAI because it was the perceived leader in the space.
An OpenAI spokeswoman declined to respond to most specific points in this article. "We don't agree with these characterizations, but recognize that evolving from an unknown research lab into a global company that delivers advanced AI research to hundreds of millions of people in just two years requires growth and adaptation," she said, adding that Altman has been very engaged in company strategy and hiring and has driven the build-out of its product division.
"We are deeply committed to our mission and are proud to release the most capable and safest models in the industry," she said.
Wall Street Journal owner News Corp has a content-licensing partnership with OpenAI.
This account is based on interviews with current and former employees, as well as people close to the company.
OpenAI employees call Altman's firing and unfiring last November "the blip" because it lasted just days. But the repercussions are still working their way through the company.
The first sign was the sudden absence of one of OpenAI's co-founders and most respected research scientists, Ilya Sutskever.
He delivered the news to Altman that he was fired, then publicly apologized for his role. He never returned to work in the office.
In May, Sutskever resigned. Soon after, Jan Leike, who co-led a safety team with Sutskever, quit as well. OpenAI executives worried their departures might trigger a larger exodus and worked to get Sutskever back.
Murati and President Greg Brockman told Sutskever that the company was in disarray and might collapse without him. They visited his home, bringing him cards and letters urging him to return.
Altman visited as well and expressed regret that others at OpenAI hadn't found a solution.
Sutskever indicated to his ex-colleagues he was seriously considering coming back. But soon after, Brockman called and said OpenAI was rescinding the offer for him to return.
Internally, executives had trouble determining what Sutskever's new role would be and how he would work alongside other researchers.
Soon after, Sutskever launched a company focused on developing the most advanced AI. Called Safe Superintelligence, it has raised $1 billion.
Sutskever hasn't publicly commented on circumstances of his departure.
In a May 17 post on X, Leike said: "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point ... over the past years, safety culture and processes have taken a back seat to shiny products."
He went to work for Anthropic.
This spring, tensions flared over development of a new AI model called GPT-4o that would power ChatGPT and business products. Researchers were asked to do more comprehensive safety testing than initially planned, but given only nine days to do it. Executives wanted to debut 4o ahead of Google's annual developer conference and take attention from their bigger rival.
The safety staffers worked 20-hour days and didn't have time to double-check their work. The initial results, based on incomplete data, indicated GPT-4o was safe enough to deploy.
But after the model launched, people familiar with the project said a subsequent analysis found the model exceeded OpenAI's internal standards for persuasion -- defined as the ability to create content that can persuade people to change their beliefs and engage in potentially dangerous or illegal behavior.
The team flagged the problem to senior executives and worked on a fix. But some employees were frustrated by the process, saying that if the company had taken more time for safety testing, they could have addressed the problem before it got to users.
The OpenAI spokeswoman said that the higher-risk indicators the team detected were erroneously elevated by a flaw in the methodology, and that GPT-4o was safe to deploy under the company's criteria. OpenAI "continues to be confident in 4o's medium risk assessment," she said.
The rush to deploy GPT-4o was part of a pattern that affected technical leaders like Murati. The CTO repeatedly delayed planned launches of products including search and voice interaction because she thought they weren't ready.
Other senior staffers also were growing unhappy.
John Schulman, another co-founder and top scientist, told colleagues he was frustrated over OpenAI's internal conflicts, disappointed in the failure to woo back Sutskever and concerned about the diminishing importance of its original mission.
In August, he left for Anthropic.
In addition to the other executive departures, one of Altman's key lieutenants -- Brockman -- is on sabbatical.
When OpenAI was founded in 2015, it operated out of Brockman's living room. Later, he got married at the company's offices on a workday. But as OpenAI grew, his management style caused tension. Though president, Brockman had no direct reports. He tended to insert himself into whatever projects he wanted, often frustrating those involved, according to current and former employees.
For years, staffers urged Altman to rein in Brockman. Those concerns persisted through this year, when Altman and Brockman agreed he should take a leave of absence.
Brockman is expected to return.
But leadership ranks have been depleted. On the same day Murati resigned, OpenAI's chief research officer and vice president of research left as well.
Altman now needs to strengthen his executive team, try to close a multibillion-dollar fundraising round vital to the company, and begin the complex process of converting into a for-profit company. Investors in the new round will be able to pull back their money if OpenAI doesn't complete the conversion within two years.
And he has to do it all while keeping up morale.
One OpenAI employee on the company's technical team wryly posted on X Wednesday night that, "Today I have made the difficult decision to stay at OpenAI."" [1]
1. Seetharaman, Deepa. "Exits at OpenAI Lay Bare Internal Tensions --- Key executives have left amid disputes over company values as it changes rapidly." Wall Street Journal, Eastern edition, New York, N.Y., 28 Sep 2024: A.1.