Anthropic has a plan to become an AI monopoly: making its models impossible to train on our own machines. If we keep using the models anyway, Anthropic will take our private information for additional training and apply it "safely". Once public sources of information are used up, competitors will be left behind. So "safety" is a trick here, used to take our private information for free. The last step would be for Anthropic to allow its AI in a Terminator that kills people en masse. That would be laughable safety.
“Anthropic said it wouldn't back down in a dispute with the Defense Department over artificial-intelligence guardrails, complicating efforts to reach a compromise ahead of a Friday deadline.
In a Tuesday meeting at the Pentagon, Defense Secretary Pete Hegseth gave Anthropic Chief Executive Dario Amodei until 5:01 p.m. Friday to agree to the military's right to use the technology in all lawful cases.
If Anthropic declines, Hegseth has threatened to invoke the Defense Production Act to make the company do what the military wants or to designate the company a supply-chain risk, impairing its ability to work with other government contractors.
Anthropic has refused to accept the military's proposal and doesn't let users deploy its Claude models in scenarios involving mass domestic surveillance or autonomous weapons.
Amodei reiterated the company's red lines in a public statement Thursday. "We cannot in good conscience accede to their request," he said. The company said the military's latest proposal would effectively undo those guardrails.
Contract language the company received overnight from the Pentagon, "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," an Anthropic spokesman said. "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."
Emil Michael, the undersecretary of war for research and engineering, said in a post on X that mass surveillance is already illegal under the Fourth Amendment.
Michael said the department "won't have any big tech company decide Americans' civil liberties."
Thursday evening, he posted that Amodei, "wants nothing more than to try to personally control the U.S. Military and is ok putting our nation's safety at risk."” [1]
1. Ramkumar, Amrith. "U.S. News: Anthropic Turns Down U.S. on AI Guardrails." Wall Street Journal, Eastern edition, New York, N.Y., 27 Feb 2026: A3.