The United States is actively developing and investing in lethal autonomous weapon systems (LAWS) capable of selecting and engaging targets without human intervention, but policy generally emphasizes keeping humans in the loop.
While the U.S. does not currently have fully independent "killer robots" in wide deployment, it has fielded autonomous defensive systems and is developing AI-enabled drone swarms, with policies allowing for potential fully autonomous operations.
Key Findings on U.S. Autonomous Weapons:
Policy and Development: U.S. policy (DoD Directive 3000.09) does not ban autonomous weapons but mandates that they be designed to allow commanders and operators to exercise "appropriate levels of human judgment".
"Human-in-the-Loop" Debate: While the goal is often "supervised autonomy," some analysts argue that the directive does not technically require a human to be in the loop for every decision, leaving room for fully autonomous actions.
Existing Capabilities: Technologies like advanced smart mines (e.g., Quickstrike) and loitering munitions, which can act autonomously once deployed, are already in use.
Future Goals: The Pentagon is pursuing "attritable" autonomous systems (for example, thousands of AI-enabled drones) that can operate in complex, degraded environments.
Drivers for Development: The U.S. is pushing for these technologies due to competition with nations like China and Russia, aiming to maintain a military edge.
While some systems, such as Fortem Technologies' DroneHunter, are designed to require human authorization, American technology is rapidly moving toward fully autonomous capabilities.
The Push Toward Full Autonomy
Replicator Initiative: The U.S. Department of Defense is accelerating the procurement of thousands of autonomous drones (Replicator-2) to counter adversarial drone swarms.
Autonomous Defense: Systems like Fortem's DroneHunter F700 can now operate in "fully autonomous track and defeat" modes, utilizing AI to select tactics (pursuit, attack, defense) without human intervention.
Military Logic: The push is driven by the need to operate at the speed of AI-driven threats on the battlefield, where human-in-the-loop systems may be too slow.
Against this backdrop, Anthropic's attempts at upholding AI safety look futile and childish:
“The federal government will stop working with Anthropic and designate the artificial-intelligence company a supply-chain risk, a dramatic escalation of the government's clash with the company over how its technology can be used by the Pentagon.
"I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it and will not do business with them again!" President Trump said Friday in a social-media post.
The Defense Department and other agencies using Anthropic's Claude models will have a six-month phaseout period, the president said, adding that there would be civil and criminal consequences if the company isn't helpful during the transition.
Trump's announcement came shortly before the Pentagon's Friday afternoon deadline for Anthropic to agree to let the military use its models in all lawful-use cases, a concession the company had refused to make. "We cannot in good conscience accede to their request," Anthropic Chief Executive Dario Amodei said on Thursday.
Trump, a Republican, and administration officials have attacked the company for that stance, while also taking exception to its push for AI regulations and its links to organizations that are big Democratic donors.
Shortly after the deadline, Defense Secretary Pete Hegseth said on X that he is designating the company a supply-chain risk, impairing its ability to work with other government contractors.
Hegseth had previously threatened the move. The designation is normally used for companies from foreign adversaries like China that pose security threats.
Depending how aggressively the Pentagon enforces the restriction, it could mean other companies working with the department will have to show that they don't use Claude at all, a threat that could hit swaths of Anthropic's business.
Anthropic said it would challenge any supply-chain risk designation in court. "We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," the company said.
AI and security experts said the administration's actions could have a chilling effect for other businesses considering working with the Pentagon or other agencies.
"This is a dark day in the history of American AI. The message sent to the business community and to countries around the world could not be worse," said Dean Ball, a former Trump administration AI adviser who is now a senior fellow at the Foundation for American Innovation, a center-right think tank. He had called for the administration not to use the supply-chain risk designation.
Ball said that Anthropic partners and investors including Nvidia, Amazon.com and Google might have to divest from the company or stop working with it if they want to continue working with the Pentagon. Some legal analysts have said the designation only applies to the work companies do with the military, not all private-sector business.
Anthropic said the designation would only affect customers in their Pentagon work.
"This is about Anthropic not being one of the favored companies, and they're going to pay the price for not bowing down and not signing on the dotted line," said Jack Shanahan, who oversaw AI efforts in the military during the first Trump administration.
"It basically says to any startup that's contemplating dipping its toe in the Defense Department water that they're going to grab your ankle and pull you all the way in," said Gregory Allen, a senior adviser focused on AI at the Center for Strategic and International Studies. Allen, who previously worked on the Defense Department's AI strategy, called the move "completely absurd." "This is going to be a huge deterrent to companies thinking about serving the defense industry," he said.
Anthropic's contract with the Pentagon is worth as much as $200 million.
The company had sought to have protections written into the deal against the use of its models for autonomous weapons and mass domestic surveillance. The military has said it should have final authority to decide how the technology can be used within the law.
The Defense Department's moves are the latest example of the Trump administration's unusual approach to involving itself in the private sector. The administration has taken equity stakes in companies it deems critical in areas such as semiconductor production and critical minerals and struck deals to take a percentage of chip sales to China.
Even absent the supply-chain risk designation, broadening the clash to include all federal agencies takes the Anthropic fight to a much larger scale than its spat with the Pentagon. The company's Claude models are being used across the government to do everything from summarize documents to analyze data. Pausing those functions to swap in alternatives could set back the government's AI and modernization efforts, AI experts have said.
The General Services Administration said it was removing Anthropic from its product offerings to government agencies.
Pentagon officials have criticized Anthropic for seeking control over the military's operations. After Claude was used in the Venezuela raid to capture former President Nicolas Maduro in January, an Anthropic employee asked a counterpart at data-mining firm Palantir, its partner, about how the technology was used. Defense Department officials learned about the exchange and viewed it as a challenge to the department's prerogative to have final say in operational matters.
Anthropic has said the Palantir question was part of routine technical discussion between partners.
The new moves by the administration come as other AI companies grapple with their own red lines. OpenAI Chief Executive Sam Altman told staff Thursday that the company was working with the Defense Department to see if its models could be used in classified settings while maintaining the same safety guardrails Anthropic has, The Wall Street Journal reported.
The Pentagon has said it didn't want to use Claude for autonomous weapons, mass surveillance or other illegal uses, but that it needs to have full control over the technology it deploys.” [1]
1. Ramkumar, Amrith. "U.S. Halts Use of Anthropic AI After Tension Over Guardrails." The Wall Street Journal, Eastern edition, New York, N.Y., 28 Feb 2026: A1.