It's not about vibes. It's about money. Amodei wants to use safety features as a selling point to keep premium prices on his product. A dustup with the government makes for a good advertising campaign. Apple applies the same logic to the security of its customers' information.
Dario Amodei, CEO of AI startup Anthropic, has explicitly adopted a strategy of using AI safety as a core, premium-differentiated selling point, which has recently culminated in a high-profile dispute with the U.S. government. Similar to Apple's branding strategy around user privacy and data security, Anthropic is framing its refusal to remove safety guardrails as a matter of ethical principle, transforming a potential "dustup" with the Pentagon into a brand-defining moment.
Amodei's Safety-First Strategy
Safety as a Premium Product: Anthropic, founded on the principle of AI safety, uses its "Constitutional AI" method as a key differentiator to position its products, such as the Claude chatbot, as safer and more reliable than competitors, allowing for higher pricing in the enterprise market.
The "Dustup" with the Government: The U.S. Department of War (formerly Defense) threatened to deem Anthropic a "supply chain risk" and revoke contracts unless it removed safeguards that block use of its models for autonomous weapons and mass surveillance.
Refusal as Advertisement: Amodei refused these demands, framing the conflict as a defense of American values against a government push for unchecked AI, arguing that they cannot "in good conscience" provide technology that puts people at risk. This public battle serves as a strong endorsement of their "safe" brand, distinguishing them from rivals who may move faster with fewer restrictions.
Similarities to Apple's Strategy
Privacy as a "Product": Similar to how Apple positions user data privacy as a premium feature of its devices to justify higher prices, Amodei is positioning AI safety as a critical component of AI services.
Standing Up to Authority: Both companies use conflicts with government or industry norms (e.g., Apple refusing to create backdoors for law enforcement) to build trust with customers who are increasingly concerned about the misuse of technology.
Trust over Speed: While competitors focus on raw speed or capability, both Apple and, increasingly, Anthropic sell a "responsible" or "secure" alternative that appeals to risk-averse enterprise clients.
“In his first face-to-face meeting with Defense Secretary Pete Hegseth, Anthropic Chief Executive Dario Amodei made his case about the risks of AI-controlled autonomous weapons.
Hegseth didn't want to hear it, even from a CEO whose company developed AI tools that have become instrumental for the military.
"No CEO is going to tell our war fighters what they can and cannot do," Hegseth said after cutting off Amodei midsentence in the meeting on Feb. 24, according to people familiar with the matter.
The rupture between the two men, with sharply contrasting personalities and worldviews, was never resolved. And now the Trump administration, which has championed the speedy rollout of AI as essential to economic growth and national security, finds itself at loggerheads with a homegrown giant of the industry.
"This is a fight about vibes and personalities masquerading as a policy dispute," said Michael Horowitz, a former Defense Department official who worked on AI policy.
The dispute comes down to a "breakdown in trust between Anthropic and the Pentagon, where Anthropic doesn't trust that the Pentagon knows enough to use their technology responsibly and the Pentagon doesn't trust that Anthropic will be willing to work on important use cases that it needs," he said.
Amodei, who over a year earlier had assured anxious employees that the company's contract with the U.S. military was mostly about paperwork, has more recently framed the clash with the Pentagon as one with grave implications for the future of modern warfare and even society at large.
On Friday, President Trump directed all federal agencies to stop working with Anthropic and attacked the company's executives for being "leftwing nutjobs."
Later that day, after a deadline passed for Anthropic to agree to a deal about how its tools could be used, Hegseth designated the company a supply-chain risk. Rarely used against a U.S. company, the move -- if it survives Anthropic's expected legal challenge -- could impair Anthropic's ability to work with other government contractors including Lockheed Martin, Amazon and Microsoft. It threatens the relationships that have made it one of the world's most valuable startups.
In an ironic twist, minutes before his post, Trump authorized strikes on Iran -- attacks that were planned with the involvement of Anthropic's Claude models, The Wall Street Journal reported.
Claude also played a role in the January military operation that captured Venezuelan President Nicolas Maduro and has been used for war gaming and mission planning, people familiar with the matter said.
Anthropic for years has been the most vocal AI company advocating for guardrails to ensure the technology is used safely. That stance at times has frustrated administration officials, who have embedded Anthropic's tools widely across the government even as they were bothered about the company wanting to exert control over how they are used.
Earlier this year, Anthropic effectively banned the use of the word "pathogen" in model prompts on its unclassified systems used by many agencies, part of its safeguards against AI aiding the creation of a bioweapon, people familiar with the matter said. The ban made it difficult for employees at the Centers for Disease Control and Prevention to use the AI tool, and it took weeks for workers to get permission to circumvent it.
Emil Michael, the undersecretary of defense for research and engineering, last week called Amodei a liar for mischaracterizing the Pentagon's offer and accused him of trying to play God.
By Monday, agencies including the Treasury Department and Department of Health and Human Services were telling employees that their AI tools would no longer work with Claude.
At the core of the conflict is a question: who should ultimately control how cutting-edge AI tools are deployed in conflict and society writ large?
Amodei and Hegseth approach the question differently. A bespectacled researcher who often twirls his curly hair, Amodei authors lengthy documents philosophizing about the importance of AI safety and is known for his deliberate approach to problem solving. He has been a vegetarian since childhood.
Hegseth is a former Fox News host with several tattoos tied to his Christian faith and military service. Videos of Hegseth lifting weights frequently circulate on social media and he played a role in President Trump's decision to rename the Defense Department the Department of War.
As of late Sunday, the Pentagon hadn't formally issued the designation against Anthropic, raising the possibility that a deal could be reached.
In recent days, as Anthropic's clash with the Pentagon intensified, it lost its status as the only AI company approved for use in classified settings. Elon Musk's xAI recently reached an agreement to be deployed in such settings and late on Friday, OpenAI announced that it had as well.
The Anthropic fight was never personal and was always about the Defense Department wanting to use its AI tools for all lawful purposes, a Pentagon official said.
Amodei co-founded Anthropic in 2021 after leaving OpenAI because he felt the company was prioritizing business goals over AI safety. He is known to some of his employees as "Professor Panda." Amodei committed to donating 80% of his founding stock to charity alongside his co-founders, a stake now worth billions of dollars.
Amodei chose not to release an early version of Claude in the summer of 2022, fearing that it would start a dangerous technology race. OpenAI released ChatGPT a few weeks later, forcing Anthropic to play catch-up.
While Amodei cemented a reputation for his methodical approach to AI development, Michael helped build Uber as chief business officer when the company was known for aggressively taking on competitors and regulators. He went on to work with dozens of startups and championed efforts to integrate technology into Pentagon operations.
Michael had a long relationship with OpenAI's Sam Altman, helping him sell his first startup in 2012. They also worked in the same startup ecosystem while Altman was leading incubator Y Combinator from 2014 to 2019.
While OpenAI pulled ahead with consumers, Anthropic's Claude tool developed a devout following among coders. It found success nabbing enterprise contracts and raised capital at a blistering clip. It was valued at $380 billion after its most recent fundraising round.
Big investments from Amazon proved particularly beneficial -- and became an entryway into the Pentagon. In November 2024, during the final days of the Biden administration, Anthropic and data-mining firm Palantir announced a partnership with Amazon that would give U.S. intelligence and defense agencies access to Claude models.
The partnership gave Anthropic a fast track to be used in classified settings through Palantir's systems and made Anthropic the first model developer available for the most sensitive Pentagon operations.
Some Anthropic employees questioned how the technology would be used. Might the tools be used in operations in which people were killed?
Amodei reassured staff. In a late 2024 all-hands meeting, he likened it to helping the government finish paperwork and back-office functions more quickly.
Even as Anthropic gained momentum, it was rankling administration officials at the start of Trump's second term. Amodei's dire public warnings on the dangers of AI made him one of the few AI executives out of step with Trump. In late May, Amodei warned that AI could destroy about half of all entry-level white-collar jobs.
Trump's AI czar David Sacks called Anthropic "committed leftists" on the podcast he co-hosts, citing its links to organizations that are Democratic donors. Anthropic had previously hired several Biden-era officials. Amodei called Trump a "feudal warlord" before the 2024 election.
Still, in July Anthropic announced a contract worth up to $200 million from the Pentagon. It also inked an agreement with the central procurement arm of the federal government to let other agencies use Claude.
Around the same time, Sacks and other officials worked on an executive order targeting "woke AI," a move that was widely seen as going after Anthropic.
Late last year, the Pentagon began discussing changing its contracts with AI companies so that they could use the technology in all lawful use cases. Anthropic's hesitance to give the blanket approval and desire to maintain red lines banning uses for mass domestic surveillance and autonomous weapons frustrated some administration officials, people familiar with the conversations said.
The clash between Anthropic and the Pentagon intensified in January, with the Journal and other media outlets reporting that its contract could be canceled.
During a Jan. 12 speech at Musk's SpaceX, Hegseth said Grok would join the Pentagon's military AI platform. He made apparent jabs at Anthropic, saying, "We will not employ AI models that won't allow you to fight wars."
The Defense Department was negotiating but Anthropic continued to hold firm on its red lines. It wanted the bans explicit despite Pentagon assurances it wouldn't conduct those operations or break the law.
Around the same time, media outlets reported that when Michael asked Amodei a hypothetical about whether the Pentagon could use Claude to take out missiles approaching the U.S., the Anthropic CEO said Defense Department officials should check with the company first. The response reportedly angered the Trump administration. Anthropic denied that Amodei said that.
Suspecting they were at an impasse, Pentagon officials accelerated discussions with Anthropic's main rival. The weekend of Feb. 21, Michael reached out to Joe Larson, the head of government at OpenAI, to see if the company could begin the process of becoming certified to get deployed on classified systems. Officials were already working to secure that status for Musk's Grok.
At their Feb. 24 meeting at the Pentagon, Hegseth complimented Amodei on the quality of the company's models while reiterating his threat to label Anthropic a supply-chain risk. He also lobbed an even bigger threat: invoking the Defense Production Act, a Cold War-era law that gives the government control of key industries, to force Anthropic to do its bidding. The defense secretary gave Amodei until 5:01 p.m. Friday to agree to the military's right to use the technology in all lawful cases.
On Wednesday night, the Defense Department sent over new contract language.
That day, OpenAI's Altman reached out to Michael. Altman felt strongly that the risk of either triggering the Defense Production Act or designating Anthropic a supply-chain risk wasn't good for the country, according to people familiar with their conversation.
But he also saw an opportunity for OpenAI. The company proposed a contract that used language from existing law to keep the guardrails against mass domestic surveillance and autonomous weapons, while not asking the Pentagon to alter its usage policy for the company.
As the Friday deadline approached, Trump said that he was directing federal agencies to cease working with Anthropic. But the parties were still negotiating.
At 5:01 p.m. Michael called Amodei, who didn't answer. Michael then talked to another top Anthropic executive offering a deal that Anthropic felt would have left the door open to the collection or analysis of large amounts of data on U.S. residents, people familiar with the matter said.
Some inside Anthropic had thought an agreement was nearly done before that final proposal, which was rejected. Company officials had recently found out that they were in line to win a Pentagon contract to work on using AI to control drones but were out of the running because of the ongoing negotiations. Michael has disputed the company's characterization of the offer.
Moments later, Hegseth posted on social media that he was designating Anthropic a supply-chain risk.
The standoff appears to be boosting Anthropic's popularity among consumers. By Sunday, Claude had topped ChatGPT to become the most downloaded app in Apple's app store.” [1]
Scandal is good for making money. Always.
1. Culture Clash Fueled Pentagon Breakup with Anthropic --- Trust broke down between Defense Secretary Hegseth and tech CEO Amodei; 'a fight about vibes'. Ramkumar, Amrith; Hagey, Keach; Weisgerber, Marcus. Wall Street Journal, Eastern edition; New York, N.Y. 03 Mar 2026: A1.