Sunday, November 26, 2023

How a Philosophy Split Silicon Valley --- Debate over social movement called effective altruism fueled OpenAI blowup.


"Over the past few years, the social movement known as effective altruism has divided employees and executives at artificial-intelligence companies across Silicon Valley, pitting believers against nonbelievers.

The blowup at OpenAI showed its influence -- and the triumphant return of chief executive Sam Altman revealed hard limits, capping a bruising year for the divisive philosophy.

Coming just weeks after effective altruism's most prominent backer, Sam Bankman-Fried, was convicted of fraud, the OpenAI meltdown delivered another blow to the movement, which believes that carefully crafted artificial-intelligence systems, imbued with the correct human values, will yield a Golden Age -- and that failure to do so could have apocalyptic consequences.

OpenAI, which released ChatGPT a year ago, was formed in part on the principles of effective altruism, a broad social and moral philosophy that influences the AI research community in Silicon Valley and beyond. Some followers live in private group homes, where they can brainstorm ideas, engage in philosophical debates and relax playing a variant of chess called Bughouse. The movement includes people devoted to animal rights and climate change, drawing ideas from rationalist philosophers, mathematicians and forecasters of the future.

Supercharged by hundreds of millions of dollars in tech-titan donations, effective altruists believe a headlong rush into artificial intelligence could destroy mankind. They favor safety over speed for AI development. The movement, which includes people who helped shape the generative-AI boom, is insular and multifaceted but shares a belief in doing good in the world -- even if that means simply making a lot of money and giving it to worthy recipients.

Altman, who was fired by the board last Friday, clashed with the company's chief scientist and board member Ilya Sutskever over AI-safety issues that mirrored effective-altruism concerns, according to people familiar with the dispute.

Voting with Sutskever, who led the coup, were board members Tasha McCauley, a tech executive and board member for the effective-altruism charity Effective Ventures, and Helen Toner, an executive with Georgetown University's Center for Security and Emerging Technology, which is backed by a philanthropy dedicated to effective-altruism causes. They made up three of the four votes needed to oust Altman, people familiar with the matter said. The board said he failed to be "consistently candid."

The company announced Wednesday that Altman would return as chief executive and that Sutskever, McCauley and Toner would be replaced. Emmett Shear, a tech executive who favors a slowdown in AI development and had been recruited as interim CEO, was out.

Altman's dismissal had triggered a company revolt that threatened OpenAI's future. More than 700 of about 770 employees had called for Altman's return and threatened to jump ship to Microsoft, OpenAI's biggest investor. Sutskever said Monday he regretted his vote.

"OpenAI's board members' religion of 'effective altruism' and its misapplication could have set back the world's path to the tremendous benefits of artificial intelligence," venture capitalist and OpenAI investor Vinod Khosla wrote in an opinion piece for The Information.

Altman toured the world this spring warning AI could cause serious harm. He also called effective altruism an "incredibly flawed movement" that showed "very weird emergent behavior."

The effective-altruism community has spent vast sums promoting the idea that AI poses an existential risk. But it was the release of ChatGPT that drew broad attention to how quickly AI had advanced, said Scott Aaronson, a computer scientist at the University of Texas, Austin, who works on AI safety at OpenAI. The chatbot's surprising capabilities worried people who had previously brushed off concerns, he said.

The movement has spread among the armies of tech-industry scientists, investors and executives racing to create AI systems to mimic and eventually surpass human ability. AI can bring global prosperity, but it first must be prevented from wreaking havoc, according to those in the movement.

Google and other companies are trying to be the first to roll out AI systems that can match the human brain. They largely regard artificial intelligence as a tool to advance work and economies at great profit.

The movement's high-profile supporters include Dustin Moskovitz, a co-founder of Facebook, and Jaan Tallinn, the billionaire co-founder of Skype, who have pledged billions of dollars to effective-altruism research. Before his fall, Bankman-Fried had also pledged billions. Elon Musk has called the writings of effective altruism's co-founder William MacAskill "a close match for my philosophy."

Marc Andreessen, the co-founder of venture-capital firm Andreessen Horowitz, and Garry Tan, chief executive of the startup incubator Y Combinator, have criticized the movement. Tan called it an insubstantial "virtue signal philosophy" that should be abandoned to "solve real problems that create human abundance."

Urgent fear among effective altruists that AI will destroy humanity "clouds their ability to take in critique from outside the culture," said Shazeda Ahmed, a researcher who led a Princeton University team that studied the movement. "That is never good for any community trying to solve any trenchant problem."

The turmoil at OpenAI exposes the behind-the-scenes contest in Silicon Valley between people who put their faith in markets and effective altruists who believe ethics, reason, mathematics and finely tuned machines should guide the future.

This account of the movement is based on interviews with more than 50 executives, researchers, investors, and current and former effective altruists, as well as on public talks, academic papers and other published material from the effective-altruism community.

One fall day last year, thousands of paper clips in the shape of OpenAI's logo arrived at the company's San Francisco office. No one seemed to know where they were from, but everybody knew what they meant.

The paper clip has become a symbol of doom in the AI community. The idea is that an artificial-intelligence system told to build as many paper clips as possible might destroy all of humanity in its drive to maximize production.
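
The thought experiment is about single-minded optimization: nothing in the objective tells the system to stop. Here is a toy sketch (my own illustration, not from the article) of how a maximizer consumes whatever it can reach, because resources it was never told to value carry no weight in its objective:

```python
# Toy illustration of the paper-clip thought experiment (not from the
# article): a maximizer converts every reachable resource into paper
# clips, because its objective assigns no value to anything else.

def maximize_paperclips(resources: dict[str, float]) -> float:
    """Greedily turn all available resources into paper clips."""
    clips = 0.0
    for name in list(resources):
        clips += resources.pop(name)  # nothing in the objective says "stop"
        print(f"converted {name}; total clips: {clips:,.0f}")
    return clips

# Farmland is consumed too: the objective never said it mattered.
world = {"scrap metal": 1_000.0, "factories": 200.0, "farmland": 5_000.0}
maximize_paperclips(world)
```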

The prank was pulled by an employee at crosstown rival Anthropic, which itself sprang from divisions over AI safety.

Dario Amodei, then OpenAI's top research scientist, split from the company in early 2021, joined by several other OpenAI executives. They started Anthropic, an AI research company friendly to effective altruists.

Bankman-Fried had been one of Anthropic's largest investors and supported the company's mission, which favored AI safety over growth and profits.

The fear of futuristic AI systems hasn't stopped even those worried about safety from trying to build artificial general intelligence, or AGI -- advanced systems that match or outdo the human brain.

At OpenAI's holiday party last December, Sutskever addressed hundreds of employees and their guests at the California Academy of Sciences in San Francisco, not far from the museum's dioramas of stuffed zebras, antelopes and lions.

"Our goal is to make a mankind-loving AGI," said Sutskever, the company's chief scientist.

"Feel the AGI," he said. "Repeat after me. Feel the AGI."

Effective altruists say they can build safer AI systems because they are willing to invest in what they call alignment: making sure employees can control the technology they create and ensure it comports with a set of human values. So far, no AI company has said what those values should be.
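
In the abstract, alignment turns a raw objective into one that also scores adherence to stated values. The sketch below is a hypothetical illustration of that framing, not any lab's actual method; the actions, scores and penalty weight are invented:

```python
# Hypothetical sketch of the abstract alignment framing described above:
# score an action as task reward minus a penalty for violating stated
# human values. The actions, numbers and weight are invented.

ACTIONS = {
    # action: (task_reward, value_violation)
    "answer helpfully":   (8.0, 0.0),
    "fabricate a source": (9.0, 6.0),
    "refuse and explain": (3.0, 0.0),
}

def aligned_score(reward: float, violation: float, penalty: float = 2.0) -> float:
    """Raw reward, discounted by how badly the action violates the values."""
    return reward - penalty * violation

# The highest-raw-reward action loses once violations are penalized.
best = max(ACTIONS, key=lambda a: aligned_score(*ACTIONS[a]))
print(best)  # -> answer helpfully
```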

OpenAI recently said it would dedicate a fifth of its computing resources over the next four years to what the company called "superalignment," an effort led by Sutskever.

Frustrated employees said attention to AGI and alignment has left fewer resources to solve more immediate issues such as developer abuse, fraud and nefarious AI uses that could affect the 2024 election. They say the resource disparity reflects the influence of effective altruism.

While OpenAI is building automated tools to catch abuses, it hasn't hired many investigators for the work, according to people familiar with the company. It also has few employees monitoring its developer platform, which is used by more than two million researchers, companies and other developers, the people said.

The company has recently hired someone to consider the role of OpenAI technology in the 2024 election. Experts warn of the potential for AI-generated images to mislead voters.

"We are a values-driven company committed to building safe, beneficial AGI and effective altruism is not one of them," OpenAI said. "We have teams dedicated to researching short-term risks such as cybersecurity, economic impact and aligning systems to human values."

At Google, the merging this year of two artificial intelligence units -- DeepMind and Google Brain -- triggered a split over how effective-altruism principles are applied, according to current and former employees.

DeepMind co-founder Demis Hassabis, who has long hired people aligned with the movement, is in charge of the combined units.

Google Brain employees have largely ignored effective altruism and have instead explored practical uses of artificial intelligence and the potential misuse of AI tools, according to people familiar with the matter.

One former employee compared the merger with DeepMind to a forced marriage, "making many people squirm at Brain."

Arjun Panickssery, a 21-year-old AI safety researcher, lives with other effective altruists at Andromeda House, a five-bedroom, three-story home a few blocks from the University of California, Berkeley campus. They host dinners, and visitors are sometimes asked to reveal their P(doom) -- estimates of the chances of an AI catastrophe.
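
A P(doom) is just a subjective probability, so several guests' estimates can be pooled. One hypothetical way to do it, sketched below with invented numbers, is averaging in log-odds space, a common method for combining probability judgments:

```python
# Pool several subjective P(doom) estimates by averaging in log-odds
# space (a hypothetical illustration; the guesses below are invented).

import math

def pool_log_odds(probs: list[float]) -> float:
    """Average probabilities in log-odds space and map back to [0, 1]."""
    logits = [math.log(p / (1 - p)) for p in probs]
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean_logit))

guesses = [0.02, 0.10, 0.50]  # three guests' P(doom) estimates
print(f"pooled estimate: {pool_log_odds(guesses):.3f}")  # ~0.116
```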

Berkeley, Calif., is an epicenter of effective altruism in the Bay Area, Panickssery said. Some houses designate "no-AI" zones to give people an escape from the subject.

Open Philanthropy's then-CEO Holden Karnofsky once lived with two senior OpenAI executives, according to the nonprofit's website. Since 2015, Open Philanthropy, which supports effective-altruism causes, has given away $327 million to AI-related causes, including $30 million to OpenAI, the website shows.

When Karnofsky was engaged to Daniela Amodei, now Anthropic's president, they were roommates with Amodei's brother Dario, now Anthropic's CEO.

In August 2017, Karnofsky and Daniela Amodei married in an effective-altruism-themed ceremony. Wedding guests were encouraged to donate to causes recommended by Karnofsky's effective-altruism charity, GiveWell, and to read a 457-page tome by the German philosopher Jürgen Habermas beforehand.

"This is necessary context for understanding our wedding," the couple wrote on a website for the event.

The effective-altruism movement dates back roughly two decades, to when a group of Oxford University philosophers and those they identified as "super-hardcore do-gooders" were looking for a marketing term to promote their utilitarian version of philanthropy.

Adherents believe in maximizing the amount of good they do with their time. They can earn as much money as possible, then give much of it away to attack problems that government and traditional nonprofits are ignoring or haven't solved. They focus on ideas that deliver the biggest impact or help the largest number of people per dollar spent.
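
The underlying arithmetic is a cost-effectiveness ranking: estimated impact divided by cost. A back-of-the-envelope sketch, with invented interventions and numbers (real effective-altruism analyses are far more involved):

```python
# Rank hypothetical charitable interventions by people helped per dollar.
# The names and figures are invented for illustration only.

interventions = {
    # name: (estimated people helped, program cost in dollars)
    "bed nets":         (120_000, 600_000),
    "deworming pills":  (300_000, 900_000),
    "local fundraiser": (500,     250_000),
}

def people_per_dollar(helped: int, cost: float) -> float:
    return helped / cost

ranked = sorted(interventions.items(),
                key=lambda kv: people_per_dollar(*kv[1]),
                reverse=True)
for name, (helped, cost) in ranked:
    print(f"{name}: {people_per_dollar(helped, cost):.4f} people per dollar")
```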

Bankman-Fried, who was convicted this month, said he was building his fortune only to give most of it away.

Beginning around 2014, effective altruists became attuned to the risk of human annihilation by an advanced AI system. That shift coincided with the publication of the book "Superintelligence" by the Swedish philosopher Nick Bostrom, which popularized the paper clip as a symbol of AI danger.

Effective altruists have since formed networks of online communities, where they exchange job advice, argue about philosophy and offer predictions. Affiliated nonprofits and student groups organize local meetups and conferences focused on using reason, economics and mathematics to solve the world's biggest problems.

The gatherings and events, held around the world, are often closed to outsiders. Organizers of a recent effective-altruism conference in New York declined the request of a Wall Street Journal reporter to attend, saying in an email that there was "a high bar for admissions."

The email suggested the reporter would "benefit from spending some more time engaging with the effective altruism community."" [1]

1. McMillan, Robert; Seetharaman, Deepa. "How a Philosophy Split Silicon Valley --- Debate over social movement called effective altruism fueled OpenAI blowup." Wall Street Journal, Eastern edition; New York, N.Y., 24 Nov 2023: A.1.

 
