Microsoft and Four Tech Giants Back Anthropic as Pentagon’s AI Ethics Crackdown Faces Courtroom Test

by admin477351

The Pentagon’s crackdown on Anthropic over AI ethics is now facing its first major courtroom test, with Microsoft and four other technology giants backing the AI company through formal legal filings. Microsoft submitted an amicus brief in a San Francisco federal court calling for a temporary restraining order against the Pentagon’s supply-chain risk designation. Amazon, Google, Apple, and OpenAI have also signed on to a separate supporting brief, creating a unified industry front against the government’s action.
Anthropic’s legal battle was triggered by the Pentagon’s decision to label it a supply-chain risk after negotiations broke down over a $200 million contract to deploy its AI on classified military systems. The company had refused to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth formalized the designation, leading to the cancellation of Anthropic’s government contracts and the permanent end of renegotiation talks.
Microsoft’s involvement in the case is rooted in its direct integration of Anthropic’s AI tools into military systems and its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional federal agreements with defense, intelligence, and civilian agencies worth several billion dollars more. Microsoft publicly argued that the government and the technology industry must work together to ensure that advanced AI serves national security without crossing ethical lines.
Anthropic argued in its court filings that the supply-chain risk designation was an unconstitutional act of ideological retaliation against a US company for its public advocacy of responsible AI development. The company disclosed that it does not currently believe Claude is safe or reliable enough for autonomous lethal operations, which it said was the genuine technical and ethical basis for its contract demands. Anthropic also noted that no US company had ever before received this designation.
Congressional Democrats are separately demanding answers from the Pentagon about whether AI was involved in a strike in Iran that reportedly killed over 175 civilians at a school. Their formal letters ask specifically about AI targeting systems and human oversight processes. The convergence of these congressional inquiries with Anthropic’s lawsuit and the industry’s unified legal response is creating a defining moment for the regulation of artificial intelligence in American national security.