Anthropic rejects latest Pentagon offer over AI safeguards, escalating feud

Anthropic said that while the Pentagon's latest proposal fell short, the company continues to negotiate with defense officials and remains committed to working with the military

Dario Amodei, Chief Executive Officer and co-founder of Anthropic
Amodei said he hopes the Defense Department will revisit its current position of only working with contractors who will agree to an all-lawful-use standard | Image: Bloomberg
Bloomberg
4 min read Last Updated : Feb 27 2026 | 8:28 AM IST
By Maggie Eastland and Katrina Manson
 
Anthropic PBC rejected the Pentagon’s latest offer in a dispute over safeguards around the use of its artificial intelligence technology by the US military, escalating the standoff a day before a government deadline for the company to drop its restrictions or face severe consequences. 
“These threats do not change our position: we cannot in good conscience accede to their request,” Anthropic Chief Executive Officer Dario Amodei said in a statement Thursday.
 
A company spokesperson said that new contract language from the Pentagon failed to satisfy the firm’s desire for curbs on military use of its AI tools. The company continues to insist on two specific restrictions: It doesn’t want its technology used for surveillance of US citizens or for autonomous lethal strikes without a human in the loop. 
Anthropic said that while the Pentagon’s latest proposal fell short, the company continues to negotiate with defense officials and remains committed to working with the military. A Pentagon spokesperson had no immediate comment.
 
The dispute centers on the company’s insistence on guardrails for military use of its Claude AI tool — guardrails the Pentagon sees as unnecessary. Defense officials have previously rejected Anthropic’s demands and insist on being able to run Claude, one of the only AI tools cleared for classified cloud work, without any restrictions from the company.
 
If Anthropic fails to drop its conditions, the Defense Department has vowed to declare the company a supply-chain risk, a move that would preclude it from working with other defense contractors. The Pentagon has also threatened to invoke the Cold War-era Defense Production Act to use Anthropic’s software over the company’s objections.
 
At stake is up to $200 million in work that Anthropic had agreed to do for the military, along with contracts for other government agencies that could also be imperiled. Amodei said he hopes the Defense Department will revisit its current position of only working with contractors who will agree to an all-lawful-use standard. 
 
“It is the department’s prerogative to select contractors most aligned with their vision,” Amodei said. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.” 
 
Now valued at roughly $380 billion, Anthropic was the first AI company granted Pentagon clearance to handle classified material, and its Claude Gov tool quickly became a favored option among defense personnel for its ease of use. The firm faces growing competition from Elon Musk’s xAI, which just won approval for classified work, as well as rivals OpenAI and Google’s Gemini.
 
The Defense Department’s latest contract terms were framed by US officials as a compromise, but they included legal language that could allow Anthropic’s two safeguards on surveillance and autonomous weaponry to be ignored, the company spokesperson added.
 
Amodei said that Anthropic understands that the Defense Department, not private companies, makes final decisions for the US military. He said the company’s two conditions aren’t an attempt to set policy but rather seek to ensure that still-nascent — and occasionally error-prone — AI technology isn’t used in a way that exceeds its current capabilities.
 
The feud erupted just weeks after the Pentagon published a new strategy on artificial intelligence that called for making the military an “AI-first” force by increasing experimentation with frontier models and reducing bureaucratic barriers to use. The approach specifically urged the Defense Department to choose models that are “free from usage policy constraints that may limit lawful military applications.”
 
Defense officials have reiterated that the military would use Anthropic’s technology within the bounds of the law. The Pentagon has no interest in mass surveillance or developing “autonomous weapons that operate without human involvement,” spokesman Sean Parnell said earlier Thursday, addressing the company’s two main concerns.
 
“We will not let ANY company dictate the terms regarding how we make operational decisions,” Parnell wrote in a post on X. “They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk.”
 


First Published: Feb 27 2026 | 8:27 AM IST
