Anthropic on Friday hit back after U.S. Secretary of Defense Pete Hegseth directed the Pentagon to designate the artificial intelligence (AI) upstart as a "supply chain risk."
"This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons," the company said.
"No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."
In a post on Truth Social, U.S. President Donald Trump said he was ordering all federal agencies to phase out the use of Anthropic technology within the next six months. A subsequent X post from Hegseth mandated that all contractors, suppliers, and partners doing business with the U.S. military cease any "commercial activity with Anthropic" effective immediately.
"In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply Chain Risk to National Security," Hegseth wrote.
The designation comes after months of negotiations between the Pentagon and Anthropic over the use of its AI models by the U.S. military. In a post published this week, the company argued that its contracts should not facilitate mass domestic surveillance or the development of autonomous weapons, saying the technology is not yet capable of supporting such uses safely and reliably.
"We support the use of AI for lawful foreign intelligence and counterintelligence missions," Anthropic noted. "But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties."
The company also called out the U.S. Department of War (DoW) over its position that it will work only with AI companies that allow "any lawful use" of the technology and remove any existing safeguards, part of the department's effort to build an "AI-first" warfighting force and bolster national security.
"Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological 'tuning' that interferes with their ability to provide objectively truthful responses to user prompts," a memorandum issued by the Pentagon last month reads.
"The Department must also utilize models free from usage policy constraints that may limit lawful military applications."
Responding to the designation, Anthropic described it as "legally unsound" and said it would set a dangerous precedent for any American company that negotiates with the government. It also noted that a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of DoW contracts, and that it cannot affect the use of Claude to serve other customers.
Sean Parnell, the Pentagon's chief spokesperson, said in a Thursday X post that the department has no interest in conducting mass domestic surveillance or deploying autonomous weapons without human involvement, describing the narrative as "fake."
"Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes," Parnell said. "This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions."
The ongoing stalemate has also polarized the tech industry. Hundreds of employees at Google and OpenAI have signed an open letter urging their companies to stand with Anthropic in its clash with the Pentagon over military applications for AI tools like Claude. xAI CEO Elon Musk sided with the Trump administration on Friday, saying "Anthropic hates Western Civilization."
The standoff between Anthropic and the U.S. government comes as OpenAI CEO Sam Altman said the company had reached an agreement with the U.S. Department of Defense (DoD) to deploy its models in the department's classified network. OpenAI also asked the DoD to extend those terms to all AI companies.
"AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said in a post on X. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
OpenAI Says Its Agreement Adheres to Three Red Lines
The public spat between Anthropic and the U.S. government has propelled Anthropic's Claude chatbot to the top slot on Apple's chart of top U.S. free apps, even as OpenAI CEO Sam Altman said Anthropic's designation as a supply chain risk sets an "extremely scary precedent."
OpenAI also shared more details on its agreement with the Pentagon for deploying advanced AI systems in classified environments, adding "we think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's."
It said its work with the DoW is guided by three red lines: no use of OpenAI technology for mass domestic surveillance, no use to direct autonomous weapons systems, and no use for high-stakes automated decisions, such as social credit systems.
"In our agreement, we protect our red lines through a more expansive, multi-layered approach," the company said in a statement. "We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections."
Altman also said the company will consider committing to publishing every change to the red lines in the future, along with a public explanation and a mandatory notice period before it takes effect.
In a LinkedIn post, OpenAI's head of national security partnerships Katrina Mulligan argued that the contract limits the deployment to a cloud API, gives the company control over the models and safety stack deployed, and keeps human AI experts in the loop to make modifications if its "models aren't refusing queries they should, or there's more operational risk than we expected."
"Autonomous systems require inference at the edge," Mulligan said. "By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware."
OpenAI's deal also coincides with a report from The Wall Street Journal, which revealed that the U.S. conducted a major air attack on Iran with the help of Anthropic's AI tools despite the disagreements over their use.
More Details About OpenAI's Deal Emerge
In a new report, The Verge said OpenAI's deal with the Pentagon is "much softer" than Anthropic's, adding "OpenAI agreed to follow laws that have allowed for mass surveillance in the past." A Trump administration official confirmed that the agreement "flows from the touchstone of 'all lawful use,'" while including certain mutually agreed-upon safety mechanisms.
OpenAI has emphasized that "any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose."
The company also noted that its agreement "does not permit uses of our models for unconstrained monitoring of U.S. persons' private information, and all intelligence activities must comply with existing U.S. law."
However, Techdirt's Mike Masnick pointed out that EO 12333 is "one of the largest loopholes for surveilling Americans' communications that the intelligence community possesses," and that OpenAI's red lines are defined by the U.S. government's "carefully constructed legal definitions" that have been refined and stretched to facilitate domestic surveillance.
The backlash triggered by concerns that the deal could enable mass surveillance of Americans has led OpenAI and the Pentagon to "strengthen their recently agreed contract," Axios reported, citing sources familiar with the pact.
Specifically, the language used in the contract states that the "AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." This prohibition extends to analyzing commercially available data obtained from data brokers. As of this writing, the Pentagon has yet to send Anthropic a formal notice designating it a supply chain risk.
"It's critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear, including around commercially acquired information," Altman said on X.
(The story was updated after publication with the latest developments.)




