CEO says Anthropic ‘cannot in good conscience’ agree to Pentagon’s AI terms


Anthropic CEO Dario Amodei said his company will not agree to give the Pentagon unfettered access to its artificial intelligence model, Claude, despite threats from the Trump administration.

After a meeting on Tuesday, Defense Secretary Pete Hegseth gave Anthropic until Friday afternoon to grant the access or face consequences.

More than money on the line

Not only did Hegseth threaten to cancel Anthropic’s $200 million Department of Defense (DoD) contract, but he could designate the company a “supply chain risk,” as well. That designation would essentially blacklist Anthropic, because anyone looking to do business with the DoD would have to cut ties with the company.

Hegseth has also considered invoking the Defense Production Act (DPA), according to Axios. The DPA gives the president power to compel private companies to prioritize defense contracts, effectively forcing Anthropic to let the military use its AI.

In a statement, Amodei called Hegseth’s threats “inherently contradictory,” saying, “One labels us a security risk; the other labels Claude as essential to national security.”


The DoD also has $200 million contracts with Google, OpenAI and xAI.

He added, “Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”

Amodei said negotiations with the Pentagon continue.

But even though Hegseth gave Anthropic until Friday afternoon to respond, on Wednesday, the Pentagon asked two major defense contractors — Boeing and Lockheed Martin — to provide an assessment of their reliance on Claude, Axios reported. It’s an apparent first step toward designating Anthropic a supply chain risk.

Anthropic wants assurances

Amodei said Anthropic is not backing down from its safeguard requirements.

The company wants assurances that the DoD won't use Claude for fully autonomous weapons or mass domestic surveillance of Americans. The DoD, however, wants to use the model without those limitations, though the department has pointed out that spying on Americans is already illegal.

Hegseth has said that Anthropic needs to allow the Pentagon full access to its AI for all “lawful” purposes, including AI warfare and surveillance.

Anthropic's concerns intensified after reports that the military used Claude during its mission to capture then-Venezuelan President Nicolás Maduro, which included bombing several sites in Caracas last month.



Under Anthropic’s usage policy, customers are not allowed to use its models to develop weapons, facilitate violence or conduct certain surveillance and tracking activities without consent. It also restricts battlefield management and predictive policing applications.

Defense officials told NPR this week that, under the DPA, the military would keep using the company's AI tools regardless of Anthropic's objections.

Ella Rae Greene, Editor In Chief
