"On a balmy mid-November evening in 2023, billionaire venture capitalist Peter Thiel threw a birthday party for his husband at YESS, an avant-garde Japanese restaurant located in a century-old converted bank building in Los Angeles's Arts District. Seated next to him was his friend Sam Altman.
Thiel had backed Altman's first venture fund more than a decade before, and remained a mentor to the younger investor when Altman became the face of the artificial-intelligence revolution as the chief executive of OpenAI. OpenAI's instantly viral launch of ChatGPT in November 2022 had propelled tech stocks to one of their best years in decades. Yet Thiel was worried.
Years before he met Altman, Thiel had taken another AI-obsessed prodigy named Eliezer Yudkowsky under his wing, funding his institute, which pushed to make sure that any AI smarter than humans would be friendly to its maker. That March, Yudkowsky had argued in Time magazine that unless the current wave of AI research was halted, "literally everyone on Earth will die."
"You don't understand how Eliezer has programmed half the people in your company to believe in that stuff," Thiel warned Altman. "You need to take this more seriously."
Altman picked at his vegetarian dish and tried not to roll his eyes. This was not the first dinner where Thiel had warned him that the company had been taken over by "the EAs," by which he meant people who subscribed to effective altruism. EA had lately pivoted from trying to end global poverty to trying to prevent runaway AI from murdering humanity. Thiel had repeatedly predicted that "the AI safety people" would "destroy" OpenAI.
"Well, it was kind of true of Elon, but we got rid of Elon," Altman responded at the dinner, referring to the messy 2018 split with his co-founder, Elon Musk, who once referred to the attempt to create artificial intelligence as "summoning the demon."
Nearly 800 OpenAI employees had been riding a rocket ship and were about to have the chance to buy beachfront second homes with the imminent close of a tender offer valuing the company at $86 billion. There was no need to panic.
Altman, at 38 years old, was wrapping up the best year of a charmed career, a year in which he became a household name, met with presidents and prime ministers around the world, and -- most important within the value system of Silicon Valley -- delivered a new technology that seemed like it was very possibly going to change everything.
But as the two investing partners celebrated beneath the exposed rafters of L.A.'s hottest new restaurant, four members of OpenAI's six-person board, including two with direct ties to the EA community, were holding secret video meetings. And they were deciding whether they should fire Sam Altman -- though not because of EA.
This account is based on interviews with dozens of people who lived through one of the wildest business stories of all time -- the sudden firing of the CEO of the hottest tech company on the planet, and his reinstatement days later. At the center was a mercurial leader who kept everyone around him inspired by his technological vision, but also at times confused and unsettled by his web of secrets and misdirections.
From the start, OpenAI was set up to be a different kind of tech company, one governed by a nonprofit board with a duty not to shareholders but to "humanity." Altman had shocked lawmakers earlier in the year when he told them under oath that he owned no equity in the company he co-founded. He had agreed to that unprecedented arrangement so he could sit on the board, which required a majority of directors to have no financial ties to the company. In June 2023, he told Bloomberg TV: "The board can fire me. That's important."
Behind the scenes, the board was finding, to its growing frustration, that Altman really called the shots.
For the past year, the board had been deadlocked over which AI safety expert to add to its ranks. The board interviewed Ajeya Cotra, an AI safety expert at the EA charity Open Philanthropy, but the process stalled, largely due to foot-dragging by Altman and his co-founder Greg Brockman, who was also on the board. Altman countered with his own suggestions.
"There was a little bit of a power struggle," said Brian Chesky, the Airbnb CEO who was one of the prospective board members Altman suggested. "There was this basic thing that if Sam said the name, they must be loyal to Sam, so therefore they're gonna say no."
The dynamics got more contentious after three board members in the pro-Altman camp stepped down in quick succession in early 2023 over various conflicts of interest. That left six people on the nonprofit board that governed the for-profit AI juggernaut: Altman, his close ally Brockman, their fellow co-founder Ilya Sutskever, and three independent directors. These were Adam D'Angelo, the CEO of Quora and a former Facebook executive; Helen Toner, the director of strategy for Georgetown's Center for Security and Emerging Technology and a veteran of Open Philanthropy; and Tasha McCauley, a former tech CEO and member of the U.K. board of the EA charity Effective Ventures.
Concerns about corporate governance and the board's ability to oversee Altman became much more urgent for several board members in the summer of 2022, after they saw a demo of GPT-4, a more powerful AI that could ace the AP Biology test.
"Things like ChatGPT and GPT-4 were meaningful shifts toward the board realizing that the stakes are getting higher here," Toner said. "It's not like we are all going to die tomorrow, but the board needs to be functioning well."
Toner and McCauley had already begun to lose trust in Altman. To review new products for risks before they were released, OpenAI had set up a joint safety board with Microsoft, a key backer of OpenAI that had special access to use its technology in its products.
During one meeting in the winter of 2022, as the board weighed how to release three somewhat controversial enhancements to GPT-4, Altman claimed all three had been approved by the joint safety board. Toner asked for proof and found that only one had actually been approved.
Around the same time, Microsoft launched a test of the still-unreleased GPT-4 in India, the first instance of the revolutionary code being released in the wild, without approval from the joint safety board. And no one had bothered to inform OpenAI's board that the safety approval had been skipped. The independent board members found out when one of them was stopped by an OpenAI employee in the hallway on the way out of a six-hour board meeting. Never once in that meeting had Altman or Brockman mentioned the breach.
Then, one night in the summer of 2023, an OpenAI board member overheard a person at a dinner party discussing OpenAI's Startup Fund. The fund was launched in 2021 to invest in AI-related startups, and OpenAI had announced it would be "managed" by OpenAI. But the board member was overhearing complaints that the profits from the fund weren't going to OpenAI investors.
This was news to the board, so the directors asked Altman. Over the following months, they learned that Altman owned the fund personally. OpenAI executives first said the structure had been chosen for tax reasons, then eventually explained that Altman had set up the fund because doing so was faster, and that it was only a "temporary" arrangement. OpenAI said Altman earned no fees or profits from the fund -- an unusual arrangement.
To the independent board members, the administrative oversight defied belief -- and cast previous oversights as part of a possible pattern of deliberate deception. For instance, they also hadn't been alerted the previous fall when OpenAI released ChatGPT, at the time considered a "research preview" that used existing technology, but that ended up taking the world by storm. (News Corp, owner of The Wall Street Journal, has a content-licensing partnership with OpenAI.)
In late September, Sutskever emailed Toner asking if she had time to talk the next day. This was highly unusual. They didn't really talk outside of board meetings.
On the phone, Sutskever hemmed and hawed before coughing up a clue: "You should talk to Mira more."
Mira Murati had been promoted to chief technology officer of OpenAI in May 2022, and had effectively been running the day-to-day ever since. When Toner called her, Murati described how what she saw as Altman's toxic management style had been causing problems for years, and how the dynamic between Altman and Brockman -- who reported to her but would go to Altman anytime she tried to rein him in -- made it almost impossible for her to do her job.
Murati had raised some of these issues directly with Altman months earlier, and Altman had responded by bringing the head of HR to their one-on-one meetings for weeks until she finally told him she didn't intend to share her feedback with the board.
Toner went back to Sutskever, the company's oracular chief scientist. He made clear he had lost trust in Altman for numerous reasons, including his tendency to pit senior employees against each other.
In 2021, Sutskever had mapped out and launched a team to pursue the next research direction for OpenAI, but months later another OpenAI researcher, Jakub Pachocki, began pursuing something very similar. The teams merged, and Pachocki took over after Sutskever turned his focus to AI safety. Altman later elevated Pachocki to research director and privately promised both of them they could lead the research direction of the company, which led to months of lost productivity.
Sutskever had been waiting for a moment when the board dynamics would allow for Altman to be replaced as CEO.
Treading carefully, in terror of being found out by Altman, Murati and Sutskever spoke to each of the independent board members over the next few weeks. It was only because they were in daily touch that the independent directors caught Altman in a particularly egregious lie.
Toner had published a paper in October that repeated criticisms of OpenAI's approach to safety. Altman was livid. He told Sutskever that McCauley had said Toner should obviously leave the board over the article. McCauley was taken aback when she heard this account from Sutskever -- she knew she had said no such thing.
Sutskever and Murati had been collecting evidence, and now Sutskever was willing to share. He emailed Toner, McCauley and D'Angelo two lengthy PDF documents using Gmail's self-destructing email function.
One was about Altman, the other about Brockman. The Altman document consisted of dozens of examples of his alleged lies and other toxic behavior, largely backed up by screenshots from Murati's Slack channel. In one of them, Altman had told Murati that the company's legal department had said that GPT-4 Turbo didn't need to go through the joint safety board review. When Murati checked with the company's top lawyer, he said he had not said that. The document about Brockman largely focused on his alleged bullying.
If they were going to act, Sutskever warned, they had to act quickly.
And so, on the afternoon of Thursday, Nov. 16, 2023, he and the three independent board members logged into a video call and voted to fire Altman. Knowing Murati was unlikely to agree to be interim CEO if she had to report to Brockman, they also voted to remove Brockman from the board. After the vote, the independent board members told Sutskever they had been worried that he'd been sent as a spy to test their loyalty.
That night, Murati was at a conference when the four board members called her to say they were firing Altman the next day and to ask her to step in as interim CEO. She agreed. When she asked why they were firing him, they wouldn't tell her.
Altman's surprise firing instantly became an explosive headline around the world.
But the board had no answers for employees or the wider public for why Altman was fired, beyond that he had not been "consistently candid" with the board.
Friday night, OpenAI's board and executive team held a series of increasingly contentious meetings. Murati had grown concerned that the board was putting OpenAI at risk by not better preparing for the repercussions of Altman's firing. At one point, she and the rest of the executive team gave the board a 30-minute deadline to explain why they fired Altman or resign -- or else the executive team would quit en masse.
The board felt they couldn't divulge that it had been Murati who had given them some of the most detailed evidence of Altman's management failings. They had banked on Murati calming employees while they searched for a CEO. Instead, she was leading her colleagues in a revolt against the board.
A narrative began to spread among Altman's allies that the whole thing was a "coup" by Sutskever, driven by his anger over Pachocki's promotion, and boosted by Toner's anger that Altman had tried to push her off the board.
Sutskever was astounded. He had expected the employees of OpenAI to cheer.
By Monday morning, almost all of them had signed a letter threatening to quit if Altman wasn't reinstated. Among the signatures were Murati's and Sutskever's. It had become clear that the only way to keep the company from imploding was to bring back Altman.
---
Adapted from "The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future" by Keach Hagey, to be published by W.W. Norton on May 20." [1]
It is great that DeepSeek is now at the top of AI research. Greed forced OpenAI employees to stick with crooked Sam Altman.
1. Keach Hagey, "The Real Reasons Sam Altman Got Fired: Secrets, misdirection and broken trust. The inside story of how the CEO of the hottest tech company in the world was ousted and, just as quickly, resurrected." The Wall Street Journal, Eastern edition, New York, N.Y., 29 Mar 2025: B1.