Pentagon used Anthropic AI in Maduro raid as contract faces review: Report
The U.S. operation targeting former Venezuelan leader Nicolás Maduro is exposing a growing rift between the Pentagon and AI developers over how artificial intelligence can be used in military operations.
The Wall Street Journal reported the Pentagon used Anthropic’s AI model, Claude, during the mission, which included bombing several sites in Caracas last month. Anthropic’s public usage policy prohibits using its products to develop weapons, facilitate violence or conduct certain surveillance activities.
Anthropic said it couldn’t comment on whether Claude was used in any particular mission, classified or otherwise, but said any deployment must comply with its usage policies.
Axios reported Monday that Defense Secretary Pete Hegseth is weighing whether to cut ties with Anthropic and potentially designate the company a “supply chain risk,” citing a senior Pentagon official.
What the Journal reported about Claude’s role
According to the Journal, Claude’s deployment occurred through Anthropic’s partnership with Palantir Technologies, whose software is widely used by the Defense Department and federal law enforcement.
The Journal said it was not clear exactly how Claude was used in the operation. After the raid, an Anthropic employee asked a Palantir counterpart how the model had been deployed, people familiar with the matter told the paper. Anthropic said it has not discussed the use of Claude in specific operations with industry partners outside routine technical conversations.
The Journal also reported that Anthropic was the first AI model developer whose technology was used in classified Defense Department operations. Axios separately reported that Claude is currently the only AI model available in certain classified military systems.
Tension over usage limits
Anthropic’s usage policy bars customers from using its models to develop weapons, facilitate violence or conduct certain surveillance and tracking activities without consent. It also restricts battlefield management and predictive policing applications.
The Journal reported that Anthropic’s contract with the Pentagon — valued at up to $200 million — has faced pressure amid disagreements over those limits. The company has raised concerns about autonomous lethal operations and domestic surveillance, which have become key sticking points in negotiations.
Chief Pentagon spokesman Sean Parnell said the department’s relationship with Anthropic is under review.
“Our nation requires that our partners be willing to help our warfighters win in any fight,” Parnell said, according to the Journal.
Anthropic said it remains “committed to using frontier AI in support of US national security.” Axios reported the company has signaled it may loosen some terms but still wants guardrails around mass domestic surveillance and fully autonomous weapons.
In earlier reporting on the dispute, the Journal reported that Anthropic has said Claude is used “extensively” for U.S. national security missions and that it is in “productive discussions” with the Defense Department about continuing that work.
The post Pentagon used Anthropic AI in Maduro raid as contract faces review: Report appeared first on Straight Arrow News.
