Anthropic says military blacklisting was retaliatory in lawsuit filings
One of the country’s most advanced artificial intelligence companies is suing the federal government, saying its decision to blacklist the organization was rooted in politics, not policy.
Artificial intelligence company Anthropic filed two lawsuits against the U.S. government on Monday after Defense Secretary Pete Hegseth effectively blacklisted the company from federal work. Anthropic alleges that the Trump administration illegally retaliated against the company for its AI safety rules.
On Feb. 27, the Pentagon labeled Anthropic a supply chain risk, effectively barring it from government contracts. Hegseth said the designation came because CEO Dario Amodei refused to remove the company’s safeguards, a refusal Hegseth argued would let a private company dictate how the government uses its data. He called the refusal an attempt to “seize veto power over the operational decisions of the United States military.”
Anthropic’s lawsuits allege that the blacklisting was punitive. The filings state that the government “retaliated against a leading frontier AI developer for adhering to its protected viewpoint,” which was “AI safety and the limitations of its own AI model.” Anthropic argues that the Trump administration’s actions violate the Constitution.
The suits also allege that the administration exceeded the scope of the Defense Production Act, and they ask a federal judge to block Pentagon officials from enforcing the blacklisting.
Why are Anthropic and the Pentagon fighting?
The lawsuits are the latest escalation in the dispute between the U.S. military and Anthropic since The Wall Street Journal reported that military officers used Claude, Anthropic’s chatbot, to target former Venezuelan leader Nicolás Maduro during the U.S. raid that captured him.
Days later, Hegseth announced he had given Amodei a deadline to agree to the government’s unrestricted use of the chatbot or face consequences. Amodei responded with a public statement defending his company and its commitment not to allow its AI to be used for domestic surveillance or fully autonomous weapons.
The statement didn’t sway the administration, as President Donald Trump posted on social media that Anthropic was a “RADICAL LEFT, WOKE COMPANY” and said its “selfishness is putting AMERICAN LIVES at risk.”
Hegseth followed Trump’s post with his own, saying Anthropic’s guardrails go against American values.
“Anthropic’s stance is fundamentally incompatible with American principles,” Hegseth wrote. “Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered.”
But Anthropic’s lawyers say the company didn’t create Claude for surveillance or weapons, and that such uses would violate its principles.
“Allowing Claude to be used to enable the Department to surveil U.S. persons at scale and to field weapons systems that may kill without human oversight would therefore be inconsistent with Anthropic’s founding purpose and public commitments,” according to the suit.
The Department of Defense has pushed back on Anthropic’s claims that the Pentagon planned to use the chatbot for weapons or surveillance. Instead, officials said the dispute was about whether a private company could dictate how the government uses its data, and insisted that all of the military’s orders would be lawful.
What’s next for Anthropic?
Since Hegseth’s announcement, Anthropic has lost a military contract worth $200 million. However, Amodei has said that amounts to a small fraction of the more than $14 billion in investment the company has already received.
The military recently cleared OpenAI’s ChatGPT and xAI’s Grok for use on classified systems. That didn’t resolve every problem, however, because several major defense contractors had built Claude into their work and now need a replacement that is just as capable.
Companies like Palantir, Lockheed Martin and J2 Ventures Portfolio have largely had to switch to ChatGPT, even though many experts say Claude still outperforms most other chatbots.
