Sam Altman played with the death of humanity
"Corporate power struggles aren't
rare, and most companies survive them. But the OpenAI board's sudden ouster of
CEO Sam Altman last week looks increasingly like a misguided kamikaze run. It's
hard to see who benefits from the melodrama besides opponents of innovation.
News of Mr. Altman's removal late
Friday shocked Silicon Valley. Mr. Altman helped launch OpenAI in 2015 with the
mission of responsibly developing artificial intelligence to benefit humanity.
He has since become the technology's public face, with media appearances and
testimony to Congress.
The reasons for Mr. Altman's
defenestration are murky. The board without elaboration said he hadn't been
consistently candid in his communications, and the company's chief operating
officer in a memo to employees cited a "breakdown in communications."
But recent reports suggest the coup was the result of a dispute over the future
of AI.
OpenAI was initially organized as a
nonprofit with donations from Silicon Valley leaders. But to rapidly advance
its large-language-model capabilities, it needed to raise more money. The
result was an unorthodox governance structure in which a nonprofit with an
altruistic mission and board of directors controls a for-profit subsidiary
whose profits are capped.
This halfway house let OpenAI raise
billions of dollars in private capital from the likes of Sequoia Capital,
Andreessen Horowitz and Microsoft while preserving its public-spirited goals. A
for-profit can also use stock options to recruit talent.
Mr. Altman collaborated with
Microsoft to raise some $13 billion in capital and run its programs on the tech
titan's servers. Microsoft obtained a reported 49% stake in OpenAI and can
provide its cloud-computing customers access to OpenAI's advanced models -- a
large reason Microsoft's market valuation has soared above $2.8 trillion this
year.
By almost any measure, Mr. Altman
was a successful CEO. Two million developers, including more than 92% of
Fortune 500 companies, use OpenAI, and ChatGPT boasts more than 100 million
weekly active users. But use of its large-language models is growing so fast
that it is struggling to find the computing power to keep up.
At the same time some OpenAI board
members fret about OpenAI's growth. They worry that its rapidly advancing
capabilities could undermine safety and the company's altruistic mission. Two
board members have ties to the so-called effective altruism movement, which
fears that AI could present an existential threat to humanity.
Some board members resisted Mr.
Altman's push to raise capital to expand OpenAI's capabilities. Mr. Altman
reportedly wanted to democratize ChatGPT by letting users create their own
chatbots to perform specialized tasks. The board's high-minded altruists worry
this could cause AI to grow out of control.
But raising more capital is
essential to managing the technology's risks. Safety and growth aren't
incompatible, though AI skeptics treat them as such. All of this is important
context for Mr. Altman's sacking.
Investors say they might have to
write down their stakes, which could make it harder for OpenAI to raise more
capital. The board's apparent failure to consult investors before removing Mr.
Altman doesn't build confidence in its oversight. The corporate good-governance
crowd hails "independent" boards, but OpenAI shows this isn't always
a virtue.
Microsoft has offered Mr. Altman a job, though
this could be a negotiating tactic to get the board to bring him back. More
than 700 of OpenAI's 770 employees have threatened to quit unless the board
resigns and reinstates Mr. Altman, which underscores how much goodwill he had
at the company. What board seeks to destroy its own company?" [1]