Pentagon waging war on American genius in name of 'national security'
As debate grows over Pentagon's use of artificial intelligence, concerns mount over unreliable AI systems, hallucinations, and risks of deploying autonomous weapons in high-stakes military decisions
By Gautam Mukunda
The Department of War is living up to its rebranded name. Unfortunately, its target is a vital American company.
Defence Secretary Pete Hegseth gave Anthropic Chief Executive Officer Dario Amodei until Friday at 5:01 p.m. to remove two restrictions on how the military uses the company’s AI. The restrictions: no mass surveillance of American citizens, and no fully autonomous weapons without a human in the loop.
Anthropic agreed to everything else, from missile defence to cyber operations. It is the first and only frontier AI lab whose models are deployed on classified systems. Its technology was used in the capture of Nicolás Maduro. This is not a pacifist company. It drew two lines.
But before the deadline had even passed, President Donald Trump banned all government departments from using Anthropic’s AI. After the deadline, the Pentagon declared Anthropic a “supply chain risk.” That designation, normally reserved for foreign companies like Huawei, bans every defence contractor from doing business with Anthropic.
The Pentagon is supposed to wage war on America’s enemies, not its greatest assets and most important values.
In 2018, thousands of Google engineers signed a letter declaring that their company “should not be in the business of war.” Google caved to their demands and pulled out of Project Maven, its AI contract with the Pentagon. That was disgraceful. US servicemembers deserve the best that US technologists can produce, and the Pentagon has every right to say that within very broad lines, it must be free to use those tools as it deems best. Imagine a Delta Force operative reading through terms of service before firing a weapon. Nobody wants that.
But that’s not what’s happening here. These are categorical limits on two uses that most Americans oppose, that today’s AI is not reliable enough to perform, and that a Pentagon spokesman says the military has “no interest” in pursuing. Which means either the confrontation is about something other than military capability, or the Pentagon is not being straight about its intentions.
These are restrictions everyone should support. I use Claude, Anthropic’s AI. When I was researching a recent column, I asked it to find sources — and every single link it provided was fabricated. This is called hallucination, and it is not a bug that better engineering will fix. A 2025 paper by researchers at OpenAI and Georgia Tech offered a mathematical proof that hallucinations cannot be fully eliminated under current AI architectures. When this happens in my research, I waste an afternoon. When it happens in a weapons system, someone dies.
And hallucination might be the least of the problems with weaponised AI. This week, Kenneth Payne at King’s College London published a study pitting three leading AI models against each other in simulated geopolitical crises. The models deployed nuclear weapons in 95 per cent of scenarios. None ever chose to surrender or withdraw, even when losing. So when Anthropic says that AI is not reliable enough for autonomous weapons, it is being generous.
Domestic surveillance is an obvious bright line. Amodei himself has written that a sufficiently powerful AI could “gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.” No administration, of either party, should be trusted with that capability aimed at the US public.
But the confrontation is also about something even more fundamental. Voltaire once wrote that the British liked to shoot an admiral from time to time, “to encourage the rest.” The administration is applying that approach to Anthropic. It’s trying to intimidate every American company. David Sacks, the White House AI czar, has attacked Anthropic’s restrictions as “woke AI,” putting the fight into familiar culture war territory.
And the consequences for Anthropic would be severe. The company just raised $30 billion at a $380 billion valuation. A supply chain designation would force Boeing and Lockheed Martin to sever ties. Investors do not fund companies the government is trying to destroy.
Many of Silicon Valley’s leaders donated millions to this administration. They sat behind the president at his inauguration. They are donating to his new ballroom. They have been largely silent as the administration extracted equity from Intel, export taxes from Nvidia and AMD, and obedience from nearly everyone else. If the government can do this to a $380 billion company for refusing to help spy on Americans, no company is safe. The CEOs who empowered this administration need to understand that it is turning on the industry. They can speak up now, or they can wait for their turn in the barrel.
The Pentagon has found its enemy: It is American innovation, American values, and any American company with the courage to defend them. It is long past time for someone other than Dario Amodei to say so.
(Disclaimer: This is a Bloomberg Opinion piece, and these are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)
First Published: Feb 28 2026 | 8:19 AM IST

