A Pentagon deadline over military use of Anthropic's AI has heightened tensions between the US Defence Department and the artificial intelligence firm. Officials have reportedly given the company until Friday to permit unrestricted military use of its technology.
According to a senior defence official, failure to comply could trigger action under the Defence Production Act. This Cold War-era law allows the federal government to compel private companies to prioritise national security requirements.
Anthropic CEO Dario Amodei recently met Defence Secretary Pete Hegseth at the Pentagon. The meeting followed disagreements over how Anthropic’s Claude models may be deployed in military contexts.
At the centre of the dispute is Anthropic’s refusal to allow its models to be used for mass surveillance of US citizens or for fully autonomous weapons systems. The company maintains that these restrictions align with its responsible AI development policy.
In a public statement, Anthropic said it engaged in good-faith discussions to support national security while preserving safeguards around model use. However, after the meeting, the Pentagon reportedly imposed a 5:01 pm deadline for an agreement.
The situation reflects broader tensions between national security priorities and ethical AI governance. While defence officials emphasise operational flexibility, Anthropic continues to highlight the importance of clear usage boundaries.
The outcome of the Pentagon's deadline could shape future collaboration between technology companies and government agencies on advanced AI systems.