Tuesday, September 3, 2024

OpenAI, Still Haunted by Its Chaotic Past, Is Trying to Grow Up

"The maker of ChatGPT is struggling to transform itself into a profit-driven company while satisfying worries about the safety of artificial intelligence.

OpenAI, the often troubled standard-bearer of the tech industry’s push into artificial intelligence, is making substantial changes to its management team, and even how it is organized, as it courts investments from some of the wealthiest companies in the world.

Over the past several months, OpenAI, the maker of the online chatbot ChatGPT, has hired a who’s who of tech executives, disinformation experts and A.I. safety researchers. It has also added seven board members — including a four-star Army general who ran the National Security Agency — while revamping efforts to ensure that its A.I. technologies do not cause serious harm.

OpenAI is also in talks with investors such as Microsoft, Apple, Nvidia and the investment firm Thrive for a deal that would value it at $100 billion. And the company is considering changes to its corporate structure that would make it easier to attract investors.

The San Francisco start-up, after years of public conflict between management and some of its top researchers, is trying to look more like a no-nonsense company ready to lead the tech industry’s march into artificial intelligence. OpenAI is also trying to push last year’s high-profile fight over the management of Sam Altman, its chief executive, into the background.

But interviews with more than 20 current and former OpenAI employees and board members show that the transition has been difficult. Early employees continue to leave, even as new workers and new executives pour in. And rapid growth hasn’t resolved a fundamental question of what OpenAI is supposed to be: Is it a cutting-edge A.I. lab created for the benefit of humanity, or an aspiring industry giant dedicated to profits?

Today, OpenAI has more than 1,700 employees, and 80 percent of them started after the release of ChatGPT in November 2022. Mr. Altman and other leaders have led the recruitment of executive hires, while the new chairman, Bret Taylor, a former Facebook executive, has overseen the expansion of the board.

“While start-ups must naturally evolve and adapt as their impact grows, we recognize OpenAI is navigating this transformation at an unprecedented pace,” Mr. Taylor said in a statement emailed to The New York Times. “Our board and the dedicated team at OpenAI remain focused on safely building A.I. that can solve hard problems for everyone.”

A number of the new executives played prominent roles in other tech companies. Sarah Friar, OpenAI’s new chief financial officer, was the chief executive of Nextdoor. Kevin Weil, OpenAI’s new chief product officer, was the senior vice president of product at Twitter. Ben Nimmo led Facebook’s battle against deceptive social media campaigns. Joaquin Candela oversaw Facebook’s efforts to reduce the risks of artificial intelligence. Now, the two men have similar roles at OpenAI.

OpenAI also told employees on Friday that Chris Lehane, a veteran of the Clinton White House who had a senior role at Airbnb and joined OpenAI this year, would be its head of global policy.

But of 13 people who helped found OpenAI in late 2015 with a mission to create artificial general intelligence, or A.G.I. — a machine that can do anything the human brain can do — only three remain. One, Greg Brockman, the company’s president, has taken a leave of absence through the end of the year, citing the need for time off after nearly a decade of work.

“It is pretty common to see these kinds of additions — and also subtractions — but we are under such bright lights,” said Jason Kwon, OpenAI’s chief strategy officer. “Everything becomes magnified.”

Since its earliest days as a nonprofit research lab, OpenAI has struggled with arguments over its goals. In 2018, Elon Musk, its primary backer, departed after a dispute with its other founders. In early 2022, a group of key researchers, worried that commercial forces were pushing OpenAI’s technologies into the marketplace before proper guardrails were in place, left to form a rival A.I. outfit, Anthropic.

Driven by similar concerns, OpenAI’s board suddenly fired Mr. Altman late last year. He was reinstated five days later.

OpenAI has split from many of the employees who questioned Mr. Altman and from others who were less interested in building a regular tech company than in doing advanced research. Echoing complaints from other employees, one researcher quit over OpenAI’s efforts to claw back shares from employees — potentially worth millions of dollars — if they publicly spoke out against the company. OpenAI has since reversed the practice.

OpenAI is driven by two forces that are not always compatible.

On one hand, the company is driven by money — lots of it. Annual revenues have now topped $2 billion, according to a person familiar with its income. ChatGPT has more than 200 million users each week — twice the number from nine months ago. It is unclear how much the company is spending each year, though one estimate puts the figure at $7 billion. Microsoft, which is already OpenAI’s largest investor, has earmarked $13 billion for the A.I. company.

But OpenAI is considering making big changes to its structure as it looks for more investments. Right now, the board of the original OpenAI — formed as a nonprofit — controls the organization, without official input from investors. As part of its new funding discussions, OpenAI is considering changes that would make its structure more appealing to investors, according to three people familiar with the negotiations. But it has not yet settled on a new structure.

OpenAI is also driven by technologies that worry many A.I. researchers, including some OpenAI employees. They argue that these technologies could help spread disinformation, drive cyberattacks or even destroy humanity. That tension led to a blowup in November, when four board members, including the chief scientist and co-founder Ilya Sutskever, removed Mr. Altman.

After Mr. Altman reasserted his control, a cloud hung over the company. Dr. Sutskever had not returned to work.

(The Times sued OpenAI and Microsoft in December for copyright infringement of news content related to A.I. systems.)

With another researcher, Jan Leike, Dr. Sutskever had built OpenAI’s “Superalignment” team, which explored ways of ensuring that its future technologies would not do harm.

In May, Dr. Sutskever left OpenAI and started his own A.I. company. Within minutes, Dr. Leike also left, joining Anthropic. “Safety culture and processes have taken a back seat to shiny products,” he said. Dr. Sutskever and Dr. Leike did not respond to requests for comment.

Others have followed them out the door.

“I’m still afraid that OpenAI and other A.I. companies don’t have an adequate plan to manage the risks of the human-level and beyond-human-level A.I. systems they are raising billions of dollars to build,” said William Saunders, a researcher who recently left the company.

As Dr. Sutskever and Dr. Leike departed, OpenAI moved their work under another co-founder, John Schulman. While the Superalignment team had focused on harms that might happen years in the future, the new team explored both near- and long-term risks.

At the same time, OpenAI hired Ms. Friar as its chief financial officer (she previously held the same role at Square) and Mr. Weil as its chief product officer. Ms. Friar and Mr. Weil did not respond to requests for comment.

Some former executives, who spoke on the condition of anonymity because they had signed nondisclosure agreements, expressed skepticism that OpenAI’s troubled past was behind it. Three of them pointed to Aleksander Madry, who once led OpenAI’s Preparedness team, which explored catastrophic A.I. risks. After a disagreement over how he and his team would fit into the larger organization, Dr. Madry moved to a different research project.

As some employees departed, they were asked to sign legal papers that said they would lose their OpenAI shares if they spoke out against the company. This incited new concerns among the staff, even after the company revoked the practice.

In early June, a researcher, Todor Markov, posted a message on the company’s internal messaging system announcing his resignation over the issue, according to a copy of the message viewed by The Times.

He said OpenAI’s leadership had repeatedly misled employees about the issue. Because of this, he argued, the company’s leadership could not be trusted to build A.G.I. — an echo of what the company’s board had said when it fired Mr. Altman.

“You often talk about our responsibility to develop A.G.I. safely and to distribute the benefits broadly,” he wrote. “How do you expect to be trusted with that responsibility?”

Days later, OpenAI announced that Paul M. Nakasone, a retired U.S. Army general, had joined its board. On a recent afternoon, he was asked what he thought of the environment he had stepped into, given that he was new to the A.I. field.

“New to A.I.? I am not new to A.I.,” he said in a phone interview. “I ran the N.S.A. I have been dealing with this stuff for years.”

Last month, Dr. Schulman, the co-founder who helped oversee OpenAI’s new safety efforts, also resigned from the company, saying he wanted to return to “hands-on” technical work. He also joined Anthropic.

“Scaling a company is really hard. You have to make trade-off decisions all the time. And some people might not like those decisions,” Mr. Kwon said. “Things are just a lot more complicated.”" [1]


This is a bait-and-switch company. It promised to do no evil, attracting many talented people. Now it is rushing after big money, no evil be damned. Talented people are leaving in droves. Bait-and-switch is generally illegal.

1. Metz, Cade; Isaac, Mike. “OpenAI, Still Haunted by Its Chaotic Past, Is Trying to Grow Up.” The New York Times (Online), New York Times Company, Sep. 3, 2024.
