A federal judge has temporarily blocked the US Department of Defense from enforcing its designation of Anthropic, the AI developer behind the Claude chatbot, as a supply-chain risk. The ruling, issued Thursday by Judge Rita Lin in San Francisco, pauses the Pentagon's effort to restrict Anthropic's access to government contracts and could allow the company to regain business with federal agencies.

Why This Matters: AI and National Security

The Pentagon’s move against Anthropic stems from a dispute over usage restrictions the AI firm places on its technology. The Trump administration viewed those limits as unacceptable, leading to a designation that sidelined Anthropic from lucrative government deals. The case highlights a growing tension: how much control should the government have over private AI development, especially in military applications? The dispute isn’t just about one company; it’s a test case for broader questions of AI regulation and national security.

The Ruling: A Temporary Reprieve

Judge Lin found the Pentagon’s designation “likely both contrary to law and arbitrary and capricious.” In her view, the government offered no valid reason to assume Anthropic would sabotage its own technology simply because it wanted to control how that technology was used. The injunction restores the status quo as of February 27, before the restrictions took effect, allowing Anthropic to operate as it did previously while the legal battle continues.

What’s Next?

The Pentagon remains free to cancel contracts with Anthropic or to encourage partners to drop its tools, but it can no longer officially cite the supply-chain-risk label as justification. Anthropic, for its part, can point to the ruling to reassure clients wary of working with a blacklisted vendor. However, a second lawsuit filed by Anthropic is still pending in a Washington, DC court, and the Pentagon could pursue alternative legal routes.

“This ruling does not prevent the Department of War from transitioning to other artificial intelligence providers, so long as those actions are consistent with applicable regulations,” Judge Lin wrote, underscoring the limited nature of the immediate relief.

The long-term outcome remains uncertain, but the decision signals judicial willingness to scrutinize the government’s aggressive tactics in regulating the AI industry.

In short, Anthropic has bought itself time, but the broader conflict between private AI developers and government control is far from over.