AI company Anthropic has filed a lawsuit against several U.S. government agencies, alleging that the government unfairly blacklisted its AI technology after the company refused to permit certain military uses of its systems.
The dispute centers on Anthropic’s Claude AI models, whose usage policies prohibit applications in autonomous weapons and mass surveillance. Government officials asked Anthropic to remove these safety limits; the company refused.
Tensions escalated when a federal directive, issued under Donald Trump, ordered agencies to stop using Anthropic technology. The Department of War then labeled Anthropic a “Supply-Chain Risk to National Security.”
This designation blocked military contractors and partners from working with Anthropic. Several agencies canceled contracts or stopped using its AI systems.
Anthropic claims these actions violate the First Amendment, the Administrative Procedure Act, and due-process rights. The company says the measures are retaliation for enforcing safety limits.
The lawsuit seeks a court declaration that the government’s actions are unlawful. Anthropic also wants the directives blocked while the case is ongoing.
The company warns that the government’s actions have already cost it contracts and could threaten hundreds of millions of dollars in business, as well as damage its reputation.