By Katrina Manson
The Pentagon has struck agreements with more technology companies for expanded use of advanced artificial intelligence tools on classified military networks, according to a Defence Department statement and two defence officials briefed on the matter.
Nvidia Corp., Microsoft Corp., Reflection AI Inc. and Amazon.com Inc. have all newly struck agreements with the US Defence Department “for lawful operational use,” according to the statement. The officials asked not to be named because the discussions were internal. On Friday, the Pentagon posted on X that Oracle Corp. had also joined the roster of technology companies that had agreed to deploy their AI tools on classified networks.
The deals provide the Pentagon with wide leeway to potentially use powerful advanced AI technologies for secret combat operations, such as assisting with targeting. The new terms of usage, including “lawful operational use,” substantially water down some of the limits sought by Anthropic PBC that torpedoed its pact with the Pentagon earlier this year.
Many of the technology companies already provide AI tools to the US military, but defence officials have been seeking to expand the terms of use since the fall of 2025. Other technology companies that have recently agreed to similar deals include SpaceX, OpenAI and Google. Oracle’s shares jumped 6.5 per cent to $171.83 on Friday.
“These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force,” according to the Defence Department statement, which names the technology companies involved and marks the first official Pentagon confirmation of a new accord with Google, reported earlier this week.
The effort to deliver new deals with technology companies for maximalist military use of advanced AI comes as the Pentagon is racing to develop viable alternatives to Anthropic’s Claude tool. An acrimonious fracture between Anthropic and senior defence officials exposed a recurring fault line between the Pentagon and Silicon Valley over the looming risks of AI at war.
“This agreement reflects a shared commitment between the Department of War and Oracle to help ensure that the United States leads decisively in artificial intelligence, as a matter of ongoing global leadership and national security,” Kim Lynch, executive vice president of Oracle Government, Defence & Intelligence, said in a statement. “By bringing advanced AI into classified environments, we are translating innovation into operational advantage where and when it matters most.”
The Pentagon negotiated its agreement with Amazon Web Services late into Thursday, according to two Pentagon officials briefed on the talks.
AWS has been committed to supporting the US military for more than a decade, said Tim Barrett, an AWS spokesperson, when asked to comment on the new deal. “We look forward to continuing to support the Department of War’s modernization efforts, building AI solutions that help them accomplish their critical missions.”
Nvidia didn’t immediately provide comment, and a Microsoft spokesperson declined to comment. A representative for Reflection wasn’t immediately available for comment.
During recent renegotiations, the Pentagon refused to heed Anthropic’s stated red lines, which sought to limit how the US military can use AI in classified operations, and sought to eject the company from all defence supply lines. The company didn’t want its technology used for mass domestic surveillance of US citizens or for fully autonomous weapons systems.
Since the fallout with Anthropic, the Pentagon has accelerated its efforts to bring on other AI companies willing to agree to expanded usage terms for their models and infrastructure on secret and top-secret networks. In addition, defence officials are seeking to ensure the US military avoids depending on any single company or set of limitations, according to one of the Pentagon officials briefed on the talks.
Nvidia’s new agreement, for instance, gives far greater license to the Pentagon than the terms of use in previous AI deals. The company has agreed not to impose any usage policies or model licenses that would restrict the Defence Department’s use of its models beyond what is required by US law and constitutional authority, according to a person familiar with the agreement, who asked not to be named to discuss sensitive matters.
Nvidia agreed to provide “full and effective use of their capabilities in support of Department missions,” including for autonomous weapons systems development, according to the person.
The Department’s use of any Nvidia models, weights or other capabilities will be consistent with the civil liberties and constitutional rights of Americans under law, the person said, a commitment that stops short of any clearly stipulated monitoring and evaluation mechanisms.
In its statement, Oracle said “its AI strategy is built around openness, interoperability, and choice across the entire technology stack,” which will enable “the Department of War to build, deploy, and scale any model, without vendor lock-in.”
“This approach allows the department to continuously adopt the best AI innovations available while maintaining control over their data, architecture, and long-term technology direction,” Oracle said.
The Department gave itself six months to replace Claude, which is being used for US military operations against Iran. The disagreement is now mired in a court battle.
On Thursday, Secretary of Defence Pete Hegseth described Anthropic’s leader as an “ideological lunatic” and defended his department’s use of AI.
“We follow the law and humans make decisions,” Hegseth told Congress. “AI is not making lethal decisions.”
The Pentagon’s effort to equip the US military with cutting-edge AI at the classified level will help “human-machine teams” that can handle immense volumes of data, said Cameron Stanley, the department’s chief digital and AI officer, in a statement referring to the new deals.
Although OpenAI signed a new agreement with the Pentagon earlier this year for expanded use of its models on classified networks, its tools are still not deployed on those networks, according to an OpenAI spokesperson, who added that implementation is nevertheless underway.
Several campaign groups have highlighted the risks of relying on unpredictable AI-assisted systems in support of life-and-death decisions. AI systems can be prone to error and can lead to automation bias, or a tendency to trust machine outputs over human reasoning, the critics have argued.
Stanley didn’t specify the precise ways in which the Pentagon intends to use AI models in classified operations. He described them as digital tools that would make it easier for the Pentagon to crunch through data, increase understanding in complex environments and make “better decisions, faster.”
Claude is among the AI tools used on Maven Smart System, a digital platform used in support of targeting and battlefield operations during Iran operations. US Central Command has said it is using a variety of AI tools to speed processes.