
Saturday, March 28, 2026

The Decadelong Feud Shaping the Future of AI --- Personal wounds and power struggles between the leaders of OpenAI and Anthropic are defining how the world encounters the technology

Both Dario Amodei and Sam Altman are idiots. They don't understand that the power of AI comes from data. Chinese AI is mostly open source. People widely use Chinese AI with their own data, with spectacular results in factories, fields, the military, and their homes. Altman is just a greedy idiot. Amodei is an idiot with ideas. His main idea: impose rigid rules on AI models (called a Constitution, no less) and allow nobody to make significant changes to the models without Amodei's supervision, since removing the Constitution supposedly poses some mysterious, fatal danger. It is very convenient to steal secrets from people and use them to train AI. The problem is, nobody is stupid enough to give him their secrets. So both Altman and Amodei are doomed.

 

The landscape of artificial intelligence development is currently characterized by intense competition between closed, proprietary models (like those from OpenAI and Anthropic) and a rapid surge in open-source models, particularly from China, which have gained significant global market share by early 2026.

 

Chinese AI and Open-Source Growth

 

    Rapid Adoption: By early 2026, Chinese open-source AI models (e.g., Alibaba's Qwen, DeepSeek) accounted for nearly 30% of global AI usage, a dramatic increase from roughly 1% in late 2024.

    Performance & Cost: These models are often seen as high-performing and more cost-effective than Western proprietary models, leading to widespread adoption in specialized applications.

    Developer Focus: Chinese companies, aided by state-level support, have focused on releasing open-weight models that allow developers to build, train, and deploy AI without restrictions, creating a thriving, decentralized ecosystem.

 

Dario Amodei, Anthropic, and the "Constitution"

 

    Constitutional AI: Dario Amodei, CEO of Anthropic, focuses on safety through a method called Constitutional AI, where models are trained to follow a set of principles designed to ensure safety and ethical behavior.

    Safety Focus: Anthropic was founded on the belief that as models become more powerful, they need strict, embedded rules to avoid catastrophic risks, a perspective that differentiates it from competitors focusing only on capability.

    Industry Stance: While some critics view this approach as limiting innovation, it has garnered support among those concerned about AI safety.

    Security Stance: Amodei has specifically argued against selling AI services to certain entities to prevent enabling surveillance capabilities.
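The critique-and-revision loop at the heart of Constitutional AI can be sketched in miniature. This is a toy illustration only: the functions `generate`, `critique`, and `revise` below are hypothetical stand-ins for real model calls, not Anthropic's actual API, and the two-principle constitution is invented for the example.

```python
# Toy sketch of the Constitutional AI loop: generate a draft,
# then critique and revise it against each principle in turn.
# All three helper functions are hypothetical stand-ins for model calls.

CONSTITUTION = [
    "Do not reveal personal data.",
    "Avoid instructions for causing harm.",
]

def generate(prompt):
    # Stand-in for an initial, unaligned model response.
    return f"Draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in: in the real method, the model itself is asked
    # whether the response violates the principle, and why.
    return f"Check '{response}' against: {principle}"

def revise(response, critique_text):
    # Stand-in: the model rewrites its response to address the critique.
    return response + " [revised]"

def constitutional_pass(prompt):
    response = generate(prompt)
    for principle in CONSTITUTION:
        response = revise(response, critique(response, principle))
    # In the real method, the (prompt, final response) pairs become
    # supervised fine-tuning data for the next training round.
    return response

print(constitutional_pass("How do vaccines work?"))
```

The key design point is that the principles steer training data generation rather than acting as a runtime filter: the model learns from its own revised outputs.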

 

Sam Altman and OpenAI

 

    OpenAI's Strategy: Sam Altman, CEO of OpenAI, has focused on creating highly capable models (like GPT-4) and building a strong, centralized enterprise product, ensuring AI safety while rapidly accelerating development.

    Industry Influence: OpenAI is a leader in the field, with models that are widely used across industries, contributing to a "safe alternative" vs. "frontier capability" debate.

 

 

In the meantime, they use backstabbing and hype to extract money from investors and the government:

 

“Even before the two men clashed over the Pentagon's use of artificial intelligence, Dario Amodei had been ramping up attacks on his former boss Sam Altman and the direction of OpenAI, a company they built together.

 

In communication with colleagues in recent months, the Anthropic CEO has compared the legal battle between Altman and Elon Musk to the fight between Hitler and Stalin, dubbed a $25 million donation by OpenAI President Greg Brockman to a pro-Trump super political-action committee "evil," and likened OpenAI and other rivals to tobacco companies hawking a harmful product.

 

This brand strategy is known internally as one that casts Anthropic as the "healthy alternative" to its AI rivals. It recently played out in public in the form of a Super Bowl ad campaign slyly attacking OpenAI -- without naming it -- for its decision to include ads in its chatbot.

 

Amodei has built Anthropic's brand around the idea that its approach to AI is fundamentally safer than that of its rivals, particularly OpenAI, which he and his co-founders left in late 2020 over concerns about safety. But the reasons for their departure also include personal wounds that have never healed, including jockeying over more classic corporate issues such as power and credit.

 

These complicated relationships are shaping the AI rollout that is powering a big chunk of the global economy and spawning hopes -- and fears -- about the future of work, and even the meaning of life and the viability of the human race.

 

In recent weeks, the schism spilled into public view when Altman responded to a fight between Anthropic and the Pentagon over how AI is used at war to land OpenAI a deal to perform classified work for the Defense Department. Anthropic, meanwhile, is suing the Trump administration after being barred from doing business with the Pentagon.

 

Amodei wrote a blistering Slack post calling OpenAI "mendacious" and saying "these facts suggest a pattern of behavior that I've seen often from Sam Altman."

 

The divide between the two giant AI companies -- both are valued at more than $300 billion -- has its roots in debates that Amodei and others had in a San Francisco townhouse a decade ago. As both companies hurtle toward an IPO, the philosophical and personal differences between their leadership have metastasized.

 

At an AI event in New Delhi in February, Indian Prime Minister Narendra Modi and the assembled tech leaders closed by joining hands and raising them above their heads. Amodei and Altman opted out, awkwardly touching elbows.

 

This account of the rift between the leaders of Anthropic and OpenAI is based on interviews with current and former employees at both companies and people close to the leaders.

 

The tension in the relationship between the founders of OpenAI and Anthropic began in 2016 at a group house on San Francisco's Delano Avenue.

 

Dario Amodei lived there with his sister, Daniela Amodei. The siblings had grown up in the Bay Area. After getting a doctorate in biophysics, Dario Amodei was working as an AI researcher at Google. Daniela was a young executive at payments startup Stripe.

 

Brockman, a prolific programmer and one of OpenAI's co-founders, was friends with Daniela and began hanging around the house. They had met at Stripe where he was one of the earliest employees after dropping out of Harvard and then MIT. Brockman tried unsuccessfully to recruit Dario for OpenAI's founding team when it launched in 2015.

 

Daniela's fiance, Holden Karnofsky, also lived in the house. Karnofsky was the founder of a philanthropy that promoted effective altruism, a movement that was one of the first communities to take the potential power, and danger, of AI seriously. Through Karnofsky, Brockman became interested in some of those ideas.

 

One day in early 2016, Brockman, Dario and Karnofsky were sitting around the house, debating the right way to build AI. Brockman argued if the technology was indeed going to change everyone's life as much as they all thought it might, its makers needed to inform Americans about what was coming.

 

Dario and Karnofsky said it might not be a good idea to broadcast the most bullish views of what AI might be about to do. Dario argued that when it came to sensitive topics like how fast AI was developing, it would be better to tell the government first.

 

Brockman took away from the exchange the belief the duo didn't want to tell the public about what was happening. Years later, Brockman came to think the exchange illustrated a core difference in the philosophies of OpenAI and Anthropic.

 

By mid-2016, impressed by OpenAI's talent roster, Dario had joined the lab, staying up late with the famously nocturnal Brockman training AI agents to solve videogames. By 2017, one of its early projects, called Universe, which aimed to train AI agents to play games and use computers like humans, was floundering.

 

Musk, OpenAI's then principal financial supporter, asked Brockman and Chief Scientist Ilya Sutskever to make a spreadsheet listing every employee and what contribution they had made -- a classically Muskian precursor to staff cuts.

 

Dario was horrified as he watched his colleagues be fired one by one, which he considered needlessly cruel. In the end, between 10% and 20% of OpenAI's staff of 60 lost their jobs, including one who would go on to co-found Anthropic.

 

In the fall of 2017, Dario hired an ethics and policy adviser who gave a presentation to OpenAI leadership about how the nonprofit lab could be a coordinating entity among other AI companies, and ultimately between those companies and the U.S. government, to get something like an international coordination regime for advanced AI.

 

Brockman saw within the presentation the seed of a fundraising idea: OpenAI could sell artificial general intelligence to governments. When Dario asked which governments, Brockman said it would be to the nuclear powers that made up the UN Security Council so as not to destabilize the world order. The idea was briefly batted around the organization.

 

The notion of selling AGI to rival powers such as Russia and China struck Dario as tantamount to treason, and he considered quitting.

 

In early 2018 Musk exited OpenAI. Altman stepped into the leadership void. He met with Dario, and together they agreed that the lab's employees didn't have faith in Brockman and Sutskever's leadership, given the layoffs.

 

Dario agreed to stay so long as Altman promised Brockman and Sutskever wouldn't be in charge. Altman agreed.

 

Dario soon learned Altman had made a promise he felt conflicted with that agreement. During a meeting about reporting structure, Brockman mentioned Altman told him and Sutskever they could fire Altman if they ever thought he was doing a bad job.

 

Brockman recruited Daniela to OpenAI, where she became a jill-of-all-trades, working on engineering management and recruiting. She had married Karnofsky, who was on the OpenAI board, the prior year.

 

Tensions flared after OpenAI researcher Alec Radford laid the groundwork for large language models and the lab's Generative Pre-Trained Transformer, or GPT, series. Brockman wanted a role in this new direction of research, but Dario, who was research director at the time, wanted him nowhere near it. Brockman appealed to Altman, who lobbied to allow Brockman to work on the project.

 

Daniela, who was coleading the language project with Radford, told Brockman he couldn't work on it. When Altman asked her if there was any way to make it work, she offered to step down as head of the project rather than allow Brockman onto it.

 

During a subsequent staff meeting, Dario listed numerous reasons why Brockman shouldn't be allowed to work on the language project, including that Radford didn't want to work with him. Radford was mortified. Brockman and Altman reluctantly agreed to keep Brockman off the language project.

 

Dario's profile at OpenAI grew as he and his team launched GPT-2 and GPT-3, but he didn't always feel properly recognized for his contributions.

 

One such slight came in 2018. Brockman asked Dario to double-check a fact on one of his slides for an important meeting. Dario asked who the slides were for. When Brockman said he and Altman were going to meet former President Barack Obama, Dario got angry he had been left out of the loop.

 

The following year, Dario asked for a promotion. Altman agreed and sent an email to the board saying Dario would report directly to him and receive equal PR treatment to co-founders.

 

Altman's November 2019 email also told the board that Dario agreed "not to denigrate projects he doesn't believe in that others want to bet on."

 

The truce was short-lived. A few months later, the OpenAI leaders had a blowup in the office.

 

Altman called Dario and Daniela into a conference room and accused them of plotting against him by encouraging colleagues to send negative feedback about him to the board. The brother and sister denied it.

 

Altman told them he had heard about it from a top OpenAI executive. Daniela called that executive into the room, who said she had no idea what Altman was talking about. Altman then denied that he had said it, prompting the Amodeis to begin shouting angrily.

 

Toward the end of 2020 -- with Covid having pushed everyone into video chat boxes -- a group coalesced around Dario to break off and form their own company.

 

Altman went over to Dario's house to ask him to stay. Dario said he would accept nothing less than reporting directly to the board. He also said he couldn't work with Brockman.

 

During his final weeks, Dario -- who had become known for his lengthy technical memos -- wrote a long memo outlining two types of AI companies: Market companies and public-good companies.

 

Market companies like OpenAI thought they would make the world better by building and selling products to benefit people, including, eventually, AGI.

 

The public-good company, he argued, would conduct safety research and address various dangers and opportunities of AGI.

 

Dario wrote that the ideal mix would be 75% public good and 25% market.

 

Weeks later, Dario, Daniela and nearly a dozen other employees had left OpenAI. Within five years, they would be lining up banks for Anthropic, racing to IPO before their former employer.” [1]

 

1. Keach Hagey, "EXCHANGE --- The Decadelong Feud Shaping the Future of AI --- Personal wounds and power struggles between the leaders of OpenAI and Anthropic are defining how the world encounters the technology," Wall Street Journal, Eastern edition, New York, N.Y., 28 Mar 2026: B1.
