Based on reports from early 2026, a significant conflict is emerging over the development and deployment of advanced artificial intelligence (AI) by Western nations, particularly the United States. This "fight about the future of war" centers on a push to integrate AI systems, some potentially capable of surpassing human intelligence, into mass surveillance and autonomous lethal operations.
The Goal: "AI-First" Military Dominance
Rapid Acceleration: In January 2026, the US Department of Defense launched a strategy to become an "AI-first" fighting force, aiming to secure an undisputed lead in AI-enabled warfare.
Push for Autonomy: The Pentagon has pressured AI companies, such as Anthropic, to remove safeguards that prohibit their AI from being used for autonomous kill operations and mass surveillance.
The "WarClaude" Initiative: Reports indicate the US government is pushing to develop "WarClaude," a version of AI for military use capable of identifying, tracking, and killing targets without human intervention.
Strategic Rationale: Proponents argue that developing AI-enabled lethal autonomous weapons (LAWS) is essential to maintain global primacy and counter similar advancements by adversaries like China and Russia.
The Conflict: Ethical Guardrails vs. National Security
AI Company Resistance: Major AI companies, notably Anthropic, have refused to remove restrictions that keep "humans in the loop" for life-and-death decisions, setting up a confrontation with the government and raising the possibility of contract termination.
The "Kill Switch" Debate: Experts warn that pushing to build superintelligent AI without proper safety measures could lead to catastrophic risks, with some, such as Eliezer Yudkowsky and Nate Soares, arguing that it could lead to humanity being "wiped off the map".
Surveillance Concerns: Critics warn that AI-powered surveillance, intended for national security, could be turned inward on the public, undermining democratic governance and creating a "panopticon".
The "AI Doomer" Perspective: Some researchers argue that the race for superintelligence is a "mad race" that must be halted, as the risk of losing control of a superhuman AI is too high.
Future of War Implications
Unreliable Systems: Experts like Anna Hehir warn that AI is currently too unreliable and unpredictable to be used in high-stakes, lethal, and autonomous scenarios.
Lowering the Threshold for War: Increased automation in warfare could make conflicts easier to enter and harder to control, potentially resulting in "flash wars".
Ethical and Legal Hurdles: The use of AI in conflict complicates international humanitarian law, particularly the question of who is accountable when autonomous systems cause unintended casualties.
As of March 2026, the debate continues over who controls advanced AI—private companies with ethical guardrails or the government in the name of national defense.
“If there’s a war unfolding somewhere in 2026 — and there are currently several — there’s a good chance that artificial intelligence is playing a part in it.
A.I. is being used in fighting in Iran and Ukraine. The U.S. used it when it captured the leader of Venezuela. Israel used it during its war in Gaza.
And the use of A.I. on the battlefield is only just getting started. That’s why another battle that unfolded last week between the Trump administration and Anthropic, an American A.I. company, is so important. I asked my colleague Julian E. Barnes what the fight means for the future of warfare, for Americans and for the world.
Anthropic is one of the world’s leading A.I. companies. Its Claude model has been widely used by the Pentagon to collect intelligence, identify targets, map out operations and more.
But the fight last week wasn’t about how A.I. is currently being used. It was about how it could be used.
Anthropic’s contract set out two restrictions: The government could not use its technology for mass surveillance of U.S. citizens. And it could not use Claude with autonomous weapons that kill without human involvement.
The Pentagon balked. It said it didn’t want to use A.I. for domestic surveillance or autonomous killer robots. But it refused to let a private company put restrictions on how the military uses its product.
The standoff has been a messy mix of contract dispute and culture war, a focal point for fears about A.I. and worries about U.S. competitiveness in the global race for A.I. pre-eminence.
On Friday, after negotiations failed, President Trump ordered all federal agencies to stop using Anthropic. The Pentagon also labeled it a “supply-chain risk to national security,” potentially barring any military contractor from doing business with the firm.
Julian, who writes about intelligence and national security, was covering the fight between Anthropic and the Pentagon before he started covering the war in Iran. (He’s been busy.)
Julian, how much is A.I. already integrated into warfare and national security?
It’s totally integrated. The use of A.I. in warfare is no longer theoretical. A.I. is on the battlefield. We do not believe that large language models are being used to command drones or fire weapons yet. But A.I. is deeply embedded in the process of collecting intelligence and using it to shape strategic decisions.
And what exactly is Anthropic worried about?
One of their concerns was that the government could use A.I. to analyze commercially available data on U.S. citizens. Our web browsing data, our telephone metadata — commercial firms can scoop that up and use A.I. to figure out where you’ve been, what you’ve visited, what you purchased.
Anthropic is also worried about this idea of killer drones.
And why is the government objecting to these red lines?
The U.S. says it will always have a human in the loop when artificial intelligence is making decisions around whether or not to kill someone. But there are problems that go along with that, because whoever can observe, think and decide faster is going to win in a battle, and humans can slow that process down.
The central question here is the role of humans in future warfare. And that will probably look very different than today. We still need them, but we haven’t decided what their role is going to be. And that makes it hard to write A.I. rules in advance.
What does the law have to say about A.I. and warfare?
The Pentagon says that the existing laws that govern the conduct of war should be enough. Because the principles of ethical warfare are the same if I’m dropping a bomb or using software to improve my targeting.
But Anthropic says A.I. is not like other weapons. Other weapons are confined by their hardware. This plane flies to this spot, and drops a bomb. This plane flies to this spot, and shoots another plane.
Large language models are different. You can have them analyze data for insights. You can have them suggest places to bomb. You could have them design a cyberattack. Their use constantly evolves.
Anthropic’s line is that this is special technology and we need to have special guardrails on it.
So if we boil it down, what is this fight really about?
It’s about politics and about principle — on both sides.
Anthropic wants to show that it’s a responsible, safety-minded company. That’s their brand.
And the Pentagon is saying: This is the woke A.I. company! We’re cracking down on woke! That’s the MAGA brand.
As for principles, the Pentagon is saying there is one standard for all companies who do business with us: We are constrained by the lawful use of this technology, not by any conditions dictated by private companies.
And Anthropic is saying that existing laws are not fit to regulate A.I.
How does this fit into the bigger A.I. race between the U.S. and China? I assume Chinese companies don’t ask the government to put in place guardrails.
Definitely not. Chinese law commands Chinese companies to give up their technology to the state. That’s also why the threat of China hangs over what just happened in Washington. Because the U.S. believes that if there is a war with China over Taiwan, the opening battle will be a battle of drones over the Taiwan Strait. The drones that can move and decide faster are going to win.
No guardrails also means the Chinese government has asked A.I. companies to develop mass disinformation tools. It has used A.I. for mass surveillance. It has used large language models to identify dissidents. So how China has used A.I. is the actual nightmare scenario that Anthropic is warning about.
Related: OpenAI, Anthropic’s primary rival, signed a deal with the Pentagon immediately after Trump’s order. On Monday, OpenAI said it was amending the contract to say its A.I. systems “shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”” [1]
1. Bennhold, Katrin. "The World: A Fight About the Future of War." New York Times (Online), New York Times Company, Mar 4, 2026.