The Pentagon is considering ending its relationship with artificial intelligence company Anthropic over the company’s insistence on maintaining certain restrictions on how the U.S. military uses its models, Axios reported Saturday, citing an administration official.
The Pentagon is pushing four AI companies to let the military use their tools for “all lawful purposes,” including weapons development, intelligence collection and battlefield operations. Anthropic did not agree to those terms, and after months of negotiations the Pentagon has run out of patience, according to the Axios report.
The other three companies are OpenAI, Google and xAI.
An Anthropic spokesperson said the company has not discussed the use of its Claude AI model for specific operations with the Pentagon. The spokesperson said conversations with the U.S. government had so far focused on a specific set of use policy issues, including strict limits on fully autonomous weapons and mass domestic surveillance, none of which were related to current operations.
The Pentagon did not immediately respond to Reuters’ request for comment.
Anthropic’s AI model Claude was used in the U.S. military operation to capture former Venezuelan President Nicolas Maduro, deployed through the company’s partnership with data firm Palantir (PLTR.O), the Wall Street Journal reported Friday.
Reuters reported on Wednesday that the Pentagon was pushing major AI companies, including OpenAI and Anthropic, to make their artificial intelligence tools available on classified networks without many of the standard restrictions the companies place on users.
