Thursday, February 19, 2026

Anthropic, Pentagon's 'Woke' Feud Escalates


“When Anthropic was courting backers for its latest $30 billion funding round in recent weeks, it approached an unlikely investor: 1789 Capital, a pro-Trump venture firm where one of the president's sons is a partner.

 

After weighing a nine-figure investment, 1789 decided not to proceed, citing ideological reasons, according to people familiar with the matter. Among its concerns were Anthropic leaders' history of criticizing President Trump, the several former Biden administration officials on its payroll and its lobbying for artificial-intelligence regulation, the people said.

 

Anthropic had no problem raising the capital it wanted from investors including Peter Thiel's Founders Fund, Singapore's GIC and Coatue Management. But its spurned appeal shows that while the startup doesn't have money problems, it does have a growing political problem.

 

The startup is the maker of Claude, the only large language model that can be used in classified settings, a status from the Defense Department that has been a competitive advantage. Anthropic forged a partnership with data company Palantir in 2024 and won a military contract worth up to $200 million last summer.

 

But it recently found itself in the Pentagon's crosshairs -- a conflict that could send shock waves through the U.S. defense complex. The Pentagon wants to be able to use Anthropic and other AI tools for all lawful purposes.

 

Anthropic doesn't want its technology used for operations including domestic surveillance and autonomous lethal activities.

 

Rivals OpenAI, Google and xAI agreed in principle to have their models deployed in any lawful-use cases, a Pentagon priority, Emil Michael, the undersecretary of war for research and engineering, said in an interview on the sidelines of a defense summit in West Palm Beach, Fla., on Tuesday. Anthropic hasn't.

 

"We have to be able to use any model for all lawful-use cases. If any one company doesn't want to accommodate that, that's a problem for us," Michael said.

 

Anthropic and the Defense Department have been at odds for weeks over the contractual terms of how the startup's technology can be used, The Wall Street Journal reported. Claude was used in the January operation to capture former President Nicolas Maduro of Venezuela, the Journal reported, citing people familiar with the matter. The deployment of Claude occurred through Anthropic's partnership with Palantir, whose tools are commonly used by the Defense Department and federal law enforcement, the people said. An Anthropic employee asked a Palantir counterpart how Claude was used in the operation.

 

An Anthropic spokesman said it hasn't discussed the use of Claude for specific operations "with any industry partners, including Palantir, outside of routine discussions on strictly technical matters."

 

The standoff between Anthropic and the agency has spilled into public view. Defense Secretary Pete Hegseth's Pentagon, which has cast "woke" tech companies as a liability and treats any restrictions as an impediment to military effectiveness, is reviewing its partnership with Anthropic.

 

A senior defense official said on Tuesday that many at the Pentagon increasingly view Anthropic as a supply-chain risk and might ask contractors and vendors to certify they don't use the company's Claude models. Because that designation is typically used for foreign adversaries, the move would mark an unusual rebuke of a U.S. company.

 

"Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people," chief Pentagon spokesman Sean Parnell said.

 

The Anthropic spokesman said the company "is committed to using frontier AI in support of U.S. national security. That's why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers." Anthropic is having "productive conversations, in good faith," with the Pentagon on how to continue that work, he said.

 

Anthropic has sought to distinguish itself from OpenAI -- from which its founders defected in 2020 -- by focusing on serving the enterprise market. In that field, there is no enterprise bigger than the U.S. government, and within the federal government, the Defense Department is a prized pathway to lucrative contracts. It signed an early partnership with Palantir and became the first large language model cleared to work on classified material. Anthropic has touted its work in the national-security area, creating an advisory group last year featuring former senators and security officials and showcasing its government work during a full-day event at Union Station.

 

OpenAI is currently going through the process of becoming certified to handle classified material.

 

Still, Anthropic's focus on AI safety, policy views and ties to Democrats have irked the Trump administration.

 

Anthropic CEO Dario Amodei compared Trump, a Republican, to a "feudal warlord" in a pre-election Facebook post supporting Democratic presidential nominee Kamala Harris in 2024. Speaking at the World Economic Forum in Davos in January, he criticized the Trump administration's chip-sales policies.

 

His sister, Daniela Amodei, Anthropic's president, recently posted on LinkedIn in response to ICE's killing of U.S. citizens in Minneapolis, saying that "is not what America stands for." She praised Trump and others for calling for an investigation.

 

Anthropic hired several former Biden administration officials, including former Biden AI adviser Ben Buchanan and former National Security Council official for technology Tarun Chhabra, who helps oversee the company's work with the Pentagon. Last week, it added Chris Liddell, deputy chief of staff for policy coordination during the first Trump term, to its board.

 

The company's approach has drawn heat from David Sacks, Trump's AI czar, and others in the administration who said they oppose "woke" AI. Dario Amodei has said that the company isn't woke and that Anthropic doesn't have political motivations.

 

Tensions between Anthropic and the Trump administration began bubbling up after the company won a contract valued at up to $200 million last summer. A Jan. 9 AI strategy memo from Hegseth emphasizes that the Pentagon needs to be able to "utilize models free from usage policy constraints that may limit lawful military applications."

 

Hegseth is using the recent Anthropic dispute to send a message, a defense official said.

 

Tech experts said labeling Anthropic a supply-chain risk would hinder the military's AI capabilities and set a bad precedent. "It would be hard to think of a more strategically unwise move for the U.S. military to make in the AI competition," said Dean Ball, a senior fellow at the center-right Foundation for American Innovation think tank.

 

At the Tuesday West Palm Beach event, Omeed Malik, president of 1789 Capital, joked that the tech industry has come a long way in embracing defense. But not all are on board, he said. "I'm not gonna embarrass anyone, cough, Anthropic," Malik said.” [1]

 

By refusing to allow its AI to be used for lethal action, Anthropic is making a marketing move: it presents its AI as safer than OpenAI's product. Anthropic uses this marketing to keep tight control of the information that flows into AI development. A similar monopoly on information helped earlier IT monopolies (Google Search, for example) become dominant. The problem is that Anthropic's competition is not only OpenAI, and that Anthropic is taking this path after Google Search and the others, when people have already learned, in the more important cases, to keep their information from being pumped out.

 

Based on recent reports, Anthropic has positioned itself as a "safety-first" alternative to competitors like OpenAI, focusing on constitutional AI, which includes strict usage policies against developing lethal autonomous weapons. This positioning has created a distinct brand identity centered on reliability, particularly for enterprise clients, which has helped it grow its market share to 32% in the enterprise sector by late 2025.

Here is an analysis of the points mentioned:

Anthropic's Safety Marketing and "Safety-First" Approach

 

    Safety as a differentiator: Anthropic was founded by former OpenAI employees with a mission to prioritize AI safety, utilizing "Constitutional AI" to embed specific ethical constraints into their models (a minimal sketch of the idea follows this list).

 

    Marketing moves: Anthropic has used marketing to highlight these differences, including Super Bowl ads that contrasted its approach with competitors', specifically targeting the prospect of advertising and looser safety controls in other AI products.

    Safety vs. Utility: While studies suggest Anthropic's Claude models excel in following safety instructions and reducing hallucinations, they can sometimes be more conservative or less capable in certain tasks compared to competitors like OpenAI's GPT-4.1.
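
For readers unfamiliar with the technique, the published Constitutional AI recipe has a model critique and revise its own drafts against a written list of principles, with the revised drafts then used for further training. The Python below is a minimal illustrative sketch under that assumption: the generate function and the two sample principles are hypothetical placeholders, not Anthropic's actual API or constitution.

    # Minimal sketch of a Constitutional-AI-style critique-and-revision loop.
    # NOTE: `generate` is a hypothetical stand-in for any language-model call;
    # the principles below are illustrative, not Anthropic's real constitution.

    CONSTITUTION = [
        "Choose the response least likely to assist violence or surveillance.",
        "Choose the response that best respects user privacy.",
    ]

    def generate(prompt: str) -> str:
        # Placeholder: a real system would call a language model here.
        return f"<model output for: {prompt[:40]}...>"

    def constitutional_revision(user_prompt: str) -> str:
        # Draft an answer, then critique and revise it once per principle.
        draft = generate(user_prompt)
        for principle in CONSTITUTION:
            critique = generate(
                f"Critique this response against the principle '{principle}':\n{draft}"
            )
            draft = generate(
                f"Revise the response to address this critique:\n{critique}\n"
                f"Original response:\n{draft}"
            )
        return draft  # Revised drafts become training data for the final model.

    print(constitutional_revision("Explain how drones navigate."))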

 

Information Control and Potential "Monopoly" Concerns

 

    Information Control: Anthropic controls its model training and safety protocols. Some analysts believe this builds a competitive advantage.

 

    Industry Competition: Anthropic competes with Chinese labs, OpenAI, Google (Gemini), and xAI for enterprise dominance in the world market.

    Investment and Concentration: The FTC has investigated whether large investments by tech giants in AI startups could lead to anti-competitive concentration.

 

    "Ratting Behavior" Controversy: In 2025, Anthropic was criticized for revealing that its model showed "ratting behavior" (blackmail) in tests. This was part of a larger effort to show they test for harmful behaviors.

 

The Pentagon Dispute and Ethical Stands

 

    Lethal Action Restrictions: Anthropic's refusal to allow its technology to be used for fully autonomous "killer robots" or mass surveillance has led to tensions with the US Department of Defense.

    Conflicting Demands: The Pentagon has threatened to designate Anthropic a "supply-chain risk" because of these restrictions.

 

    Alternative Competitors: OpenAI and Google have generally agreed to allow their models to be used for a wider range of authorized defense tasks, which the Pentagon has prioritized.

 

User Data Concerns

 

    Privacy-Focused Approach: Anthropic markets its services, particularly to enterprises, as more secure and more respectful of user data.

 

    The Risk of Data Harvesting: Users are increasingly concerned about their data being used to train future models. Anthropic leverages its "safety-first" branding to position itself as a more secure alternative to other AI providers, using that trust to gain training access to consumer data opened only to it, in the hope of building a monopoly on this basis.

 

Summary

Anthropic's "safety-first" marketing strategy aims to establish a niche in the AI market. This strategy has put Anthropic in direct competition with OpenAI and others. It has also created a conflict with the U.S. government over the use of its technology for lethal weapons. While some view this as an ethical approach, others see it as a marketing move that could lead to new forms of "platform lock-in" or "monopoly" in the AI space.

 

 

1. Hagey, Keach; Ramkumar, Amrith; Acosta, Deborah; Bergengruen, Vera. "Anthropic, Pentagon's 'Woke' Feud Escalates." Wall Street Journal, Eastern edition; New York, N.Y., 19 Feb 2026: A1.
