Anthropic has gained a powerful ally in its AI safety showdown with the Pentagon: Microsoft has filed an amicus brief in San Francisco federal court supporting Anthropic’s request for a temporary restraining order against the Defense Department’s supply-chain risk designation. The brief warns of widespread disruption to defense and commercial technology supply chains if the designation is allowed to stand. Amazon, Google, Apple, and OpenAI have also backed Anthropic through a separate joint filing, amplifying the industry’s collective voice.
The conflict originated in a $200 million contract negotiation that collapsed after Anthropic refused to allow its Claude AI to be used for mass surveillance of US citizens or to control autonomous lethal weapons. Defense Secretary Pete Hegseth then designated the company a supply-chain risk, triggering the cancellation of Anthropic’s existing government contracts and drawing a firm line against renegotiation. Anthropic responded with simultaneous lawsuits challenging the designation in federal courts in California and Washington, DC.
Microsoft’s brief is anchored in its own use of Anthropic’s AI in systems it supplies to the US military, as well as its participation in the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract. The company also holds additional government agreements spanning defense, intelligence, and civilian agencies. Microsoft publicly argued that the government and the tech industry must work together to ensure that AI advances national security responsibly, without enabling surveillance or autonomous warfare beyond human control.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for the company’s public advocacy of AI safety principles. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the real basis for its contract demands. The Pentagon’s technology chief publicly ruled out any possibility of renewed negotiations.
Congressional Democrats have formally asked the Pentagon whether AI was involved in a strike in Iran that reportedly killed more than 175 civilians at a school, and whether human oversight was exercised. The inquiries add legislative urgency to an already extraordinary legal confrontation. Together, Microsoft’s intervention, the industry coalition, and congressional pressure are shaping a defining chapter in the history of AI governance in America.