A visualization of the Pentagon and a warning-marked AI system illustrates rising tensions over how frontier AI technologies should be governed in national security environments. Image Source: ChatGPT - 5.2

Pentagon Labels Anthropic a Supply-Chain Risk as OpenAI Revises Defense AI Deal


OpenAI says it is amending its agreement with the U.S. Department of Defense — renamed the Department of War (DoW) under the Trump administration — following backlash over the company’s involvement in military AI deployments.

The revisions clarify restrictions on domestic surveillance and intelligence use after critics raised concerns that advanced AI systems could be deployed in national security contexts without sufficient safeguards.

The controversy emerged after negotiations between the Pentagon and AI company Anthropic collapsed. Anthropic, which previously held a $200 million Pentagon contract, refused expanded terms that would have allowed its AI technology to be used for “any lawful purpose.”

The dispute has drawn attention across the AI industry, triggering consumer backlash, protests outside OpenAI’s headquarters, and intervention from major technology companies concerned about how government procurement decisions could affect the broader AI ecosystem.

The episode has quickly become a test case for how governments and AI developers define the guardrails governing the use of frontier AI in military and intelligence systems.

Here’s what the dispute reveals about how governments and AI companies are beginning to negotiate the rules, safeguards, and limits for deploying powerful AI technologies in national security environments.

Key Takeaways: OpenAI, Anthropic, and the Pentagon AI Contract Dispute

  • OpenAI is revising its Pentagon AI contract to clarify restrictions on domestic surveillance and intelligence-agency access.

  • Anthropic rejected expanded military terms allowing AI systems to be used for “any lawful purpose,” citing concerns about domestic surveillance and autonomous weapons.

  • Consumer backlash followed OpenAI’s defense deal, including a 295% surge in ChatGPT uninstallations and rising downloads of Anthropic’s Claude app.

  • The Pentagon threatened to label Anthropic a national-security “supply chain risk,” prompting pushback from major technology industry groups.

  • Anthropic has reopened negotiations with the Pentagon, seeking a revised agreement that preserves its safeguards while maintaining its government contract.

Anthropic–Pentagon AI Contract Talks Collapse Over Surveillance Safeguards

The dispute began after Anthropic, which had previously been awarded a $200 million Pentagon contract and whose Claude models were the first frontier AI models deployed in classified national security environments, failed to reach agreement on expanded terms governing the military’s use of its AI technology.

According to reporting, the Pentagon sought language allowing AI systems to be used for “any lawful purpose.” Critics argue that the phrase “any lawful purpose” can be interpreted broadly, particularly as laws, national-security policies, and political administrations change over time.

Anthropic requested additional safeguards explicitly preventing its AI systems from being used for mass domestic surveillance of Americans, or to enable autonomous weapons capable of making lethal decisions without human oversight — restrictions the company has identified as key red lines.

Negotiations collapsed after the parties failed to agree on wording that Anthropic believed was necessary to ensure those safeguards would hold in practice.

The dispute escalated when Defense Secretary Pete Hegseth threatened to designate Anthropic a “supply chain risk to national security,” a classification typically reserved for foreign adversaries rather than domestic technology companies. Such a designation could force contractors in the defense supply chain to cut ties with the company, limit its participation in government programs, and pressure firms with federal contracts to reconsider investments or partnerships with Anthropic.

OpenAI Signs Pentagon AI Deployment Agreement for Classified Systems

Shortly after the Anthropic talks collapsed, OpenAI signed its own agreement with the Pentagon to deploy its AI systems in classified environments.

The timing drew immediate scrutiny: because the deal came so soon after Anthropic declined expanded contract terms over concerns about mass domestic surveillance and autonomous weapons safeguards, critics perceived that OpenAI was willing to accept conditions another frontier AI lab had rejected.

According to reporting by TechCrunch, Anthropic CEO Dario Amodei criticized OpenAI’s messaging around the Pentagon agreement in a letter to staff, calling some of the company’s claims “straight up lies” and accusing OpenAI CEO Sam Altman of falsely “presenting himself as a peacemaker and dealmaker.”

The disagreement centered in part on how the companies characterized the Pentagon negotiations. Anthropic had objected to contract language allowing its AI systems to be used for “any lawful purpose,” while OpenAI said its own agreement allows use of its AI systems for “all lawful purposes.”

In a blog post, OpenAI said the Department of War (DoW) had made clear that mass domestic surveillance would be illegal and was not under consideration.

“It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose,” the company wrote. “We ensured that the fact that it is not covered under lawful use was made explicit in our contract.”

OpenAI has said the agreement includes several safeguards designed to limit how its AI models can be used in national security settings, which the company argues address many of the concerns raised by critics about surveillance and autonomous weapons.

According to OpenAI, the agreement includes three primary “red lines”:

  • No use of OpenAI technology for mass domestic surveillance

  • No use of OpenAI models to direct autonomous weapons systems

  • No use of the technology for high-stakes automated decision-making

OpenAI CEO Sam Altman has also argued that decisions about national security policy ultimately belong to elected governments rather than technology companies. While AI developers can establish safety limits for how their AI systems are deployed, Altman said determining how those systems are used in defense operations is a responsibility that rests with government institutions.

OpenAI said the deployment model is also designed to support those safeguards. Rather than installing AI models directly on military hardware, the systems would run through cloud-based infrastructure with a safety stack maintained by OpenAI. Cleared company engineers would remain involved in deployment and oversight, and the Pentagon plans to convene a working group with frontier AI labs and cloud providers to discuss additional safeguards for future national-security deployments.

Consumer Backlash: ChatGPT Uninstalls Surge as Claude Downloads Rise

OpenAI’s agreement with the Pentagon quickly triggered a public reaction, with some users protesting the company’s involvement in military AI deployments.

According to market intelligence data cited by TechCrunch, U.S. uninstallations of the ChatGPT mobile app surged 295% in a single day after the Pentagon partnership became public. At the same time, downloads of Anthropic’s Claude AI assistant increased significantly, rising 37% one day and 51% the next.

App rankings reflected the shift, with Claude climbing to the No. 1 free iPhone app in the United States shortly after the controversy emerged. The surge reflected growing public scrutiny over how frontier AI systems may be used in national security contexts.

The backlash also appeared in user reviews, where one-star ratings for ChatGPT rose sharply while five-star reviews declined.

Critics said the dispute highlighted broader concerns that advanced AI systems could eventually be used for mass surveillance or integrated into autonomous weapons systems — two risks that several AI companies say require strict safeguards.

AI Activists Protest OpenAI Defense Deal Outside San Francisco Headquarters

Public criticism extended beyond online reactions as activists also organized in-person demonstrations.

A grassroots campaign called QuitGPT staged a protest outside OpenAI’s headquarters in San Francisco, criticizing the company’s defense partnership and raising concerns about the potential use of AI for surveillance or autonomous weapons.

Organizers framed the demonstration around demands that AI companies refuse contracts enabling “killer robots” or mass AI surveillance.

While the protest was relatively small, it reflected growing public scrutiny over how advanced AI technologies may be deployed in national security systems and who should define the safeguards governing their use.

What Changed in OpenAI’s Pentagon AI Agreement: Clearer Surveillance and Intelligence Limits

Following the backlash, OpenAI CEO Sam Altman said the company is working with the Department of War (DoW) to amend the agreement and clarify how its AI systems may be used in national security environments.

OpenAI said the revised language adds clearer restrictions than the original announcement, including an explicit prohibition on using its AI systems for domestic surveillance of U.S. persons — including through commercially acquired personal data — and a statement that OpenAI services will not be used by Department of War intelligence agencies such as the National Security Agency (NSA) without a separate agreement.

In a post on X, Altman confirmed that the updated contract language now explicitly states that OpenAI systems cannot be used for domestic surveillance of U.S. persons.

OpenAI has also reiterated that its AI systems will not be used to direct autonomous weapons or make lethal decisions without human control — another safeguard the company describes as a core “red line” for military deployments.

Altman acknowledged that the company’s initial communication about the Pentagon AI deal may have contributed to confusion.

“We shouldn’t have rushed to get this out on Friday,” Altman wrote. “The issues are super complex, and demand clear communication.”

Tech Industry Groups Push Back on Pentagon ‘Supply Chain Risk’ Threat

The controversy also triggered tensions within the AI industry.

Several technology and software industry groups pushed back against the Pentagon’s threat to label Anthropic a “supply chain risk to national security.”

A coalition including members such as Nvidia, Google, Microsoft, Apple, and Amazon warned that such a designation would represent an unprecedented use of national-security procurement tools against a U.S. technology company.

The groups argued that disputes between the U.S. government and private technology companies should instead be resolved through standard procurement negotiations rather than through national-security risk designations.

Sam Altman also said in public remarks that OpenAI opposed labeling Anthropic a supply chain risk, arguing that such a designation could set a damaging precedent for the technology industry.

The episode highlighted how government procurement decisions can determine which AI companies remain eligible for national-security programs, and how a risk designation can pressure companies that rely on federal contracts to reconsider investments or partnerships with Anthropic.

Anthropic Reopens Negotiations With Pentagon to Preserve AI Contract

Despite the earlier breakdown in negotiations, Anthropic is now back in discussions with the Pentagon to reach a revised agreement governing how its AI models may be used in military environments.

According to reporting by the Financial Times, a revised contract could allow the U.S. military to continue using Anthropic’s AI technology while potentially preventing the Pentagon from designating the company a “supply chain risk to national security.”

The earlier negotiations collapsed over a disagreement about specific contract language related to surveillance safeguards. Pentagon officials asked Anthropic to remove a phrase referring to the “analysis of bulk acquired data,” which the company believed was necessary to prevent its AI systems from being used in mass domestic surveillance programs.

In a memo to staff reported by The Information and reviewed by the Financial Times, Anthropic CEO Dario Amodei said the Department of War (DoW) offered to accept the company’s broader terms if that line were deleted.

“Near the end of the negotiation the [department] offered to accept our current terms if we deleted a specific phrase about ‘analysis of bulk acquired data,’ which was the single line in the contract that exactly matched this scenario we were most worried about,” Amodei wrote. “We found that very suspicious.”

Anthropic has identified restrictions on mass domestic surveillance and lethal autonomous weapons as key red lines for military AI deployments.

A revised agreement could allow Anthropic to remain part of the U.S. military’s AI supply chain while clarifying the safeguards governing how its AI models may be used. However, negotiations remain ongoing, and it is unclear whether the two sides will reach a compromise that satisfies both the Pentagon’s operational requirements and Anthropic’s restrictions on surveillance and autonomous weapons.

Pentagon Labels Anthropic a “Supply Chain Risk” as AI Contract Dispute Escalates

The dispute escalated further on Thursday when the Pentagon formally notified Anthropic that the company and its AI systems had been designated a “supply chain risk,” according to Bloomberg reporting cited by TechCrunch.

Supply-chain-risk designations are typically reserved for foreign adversaries. The classification requires companies and agencies working with the Pentagon to certify that they are not using Anthropic’s AI models, a requirement that could disrupt existing partnerships and discourage companies with federal defense contracts from maintaining investments or technical integrations with the firm.

The designation follows weeks of conflict between Anthropic and defense officials over how the U.S. military should be allowed to use advanced AI systems. Anthropic CEO Dario Amodei had previously refused contract language that could allow the company’s AI models to be used for mass domestic surveillance of Americans or to power fully autonomous weapons systems operating without human oversight, which the company has identified as key red lines.

Defense officials have argued that the government’s use of AI systems in national security operations should not be constrained by the policies of private contractors.

The move could also create operational challenges for the Pentagon. Anthropic’s Claude models are among the few frontier AI systems configured for classified environments and are already embedded in military intelligence and data-analysis systems.

According to the Bloomberg reporting, U.S. forces are currently using Claude-powered systems in their Iran campaign to help manage large volumes of operational data for military analysis. The model is integrated into Palantir’s Maven Smart System, a platform widely used by military analysts in the Middle East to process intelligence data and support operational decision-making.

That operational reliance highlights a contradiction at the heart of the dispute: the Pentagon is labeling Anthropic a supply-chain risk even as Claude remains embedded in active military workflows, meaning the department cannot easily remove the technology without replacing critical data-analysis capabilities.

The designation has also drawn pushback from within the broader AI industry. Hundreds of employees from OpenAI and Google have urged the Department of Defense to withdraw the designation and called on Congress to push back on what they described as a potentially inappropriate use of authority against a U.S. technology company.

In a public statement, the employees also urged their companies to stand together in refusing government demands to use AI systems for domestic mass surveillance or “autonomously killing people without human oversight.”

Anthropic has not publicly responded to the designation, and negotiations between the company and Pentagon officials are continuing.

Q&A: OpenAI–Anthropic Pentagon AI Contract Dispute

Q: What triggered the dispute between AI companies and the Pentagon?
A: Negotiations between the Pentagon and Anthropic collapsed after the government requested contract language allowing AI systems to be used for “any lawful purpose.” Anthropic sought explicit restrictions preventing domestic surveillance and autonomous weapons use.

Q: Why did OpenAI sign a Pentagon AI contract?
A: After Anthropic declined expanded military contract terms, OpenAI reached its own agreement with the Pentagon to deploy AI systems in classified environments with technical safeguards and contractual limits.

Q: What changes is OpenAI making to its agreement?
A: OpenAI is adding contract language clarifying that its AI systems cannot be used for domestic surveillance of U.S. persons and will not be provided to intelligence agencies such as the National Security Agency (NSA) without a separate agreement.

Q: What is Anthropic doing now?
A: Anthropic has reopened negotiations with Pentagon officials in an attempt to reach a revised agreement that preserves its safeguards while allowing the U.S. military to continue using its AI technology.

Q: Why did the tech industry intervene?
A: Technology industry groups warned that labeling Anthropic a national-security “supply chain risk” would be an unprecedented use of procurement rules against a U.S. technology company and could disrupt the broader AI ecosystem.

Q: Why did the Pentagon label Anthropic a “supply chain risk”?
A: The U.S. Department of Defense designated Anthropic a supply-chain risk after negotiations broke down over how the U.S. military should be allowed to use advanced AI systems. Anthropic had refused contract language that could allow its AI models to be used for mass domestic surveillance or fully autonomous weapons without human oversight. The designation requires Pentagon contractors to certify that they are not using Anthropic’s AI models, though the company’s Claude systems remain embedded in some existing military analysis tools.

What This Means: AI Governance and Military Technology

The dispute between OpenAI, Anthropic, and the Pentagon highlights how decisions about surveillance safeguards and autonomous weapons are becoming central governance questions as governments begin deploying frontier AI systems in national security environments.

Who should care:

Policymakers, defense leaders, AI developers, and technology companies building frontier models all have a stake in how governments integrate AI into national security systems.

For governments, the challenge is adopting powerful AI tools while protecting civil liberties and maintaining oversight. For AI companies, the stakes involve defining how their AI technology can be used without enabling surveillance abuses or autonomous weapons systems that exceed their safety principles.

The Pentagon’s decision to label Anthropic a supply-chain risk also demonstrates the leverage governments can exert through procurement rules when disagreements arise over how AI systems should be deployed.

The decisions made in these early AI defense contracts could influence whether AI systems are used to analyze large volumes of civilian data and how much control humans retain over high-stakes military decisions, including the potential use of autonomous weapons.

Because these agreements are among the first attempts to deploy frontier AI systems in classified environments, the terms negotiated today could set precedents for how future military AI systems are governed.

Why it matters now:

Governments around the world are rapidly exploring how advanced AI systems can support intelligence analysis, cybersecurity, logistics, and battlefield decision-making.

At the same time, the companies developing those AI systems are attempting to define safeguards that limit how their technology can be used to prevent misuse. The OpenAI–Anthropic dispute shows how difficult that balance can be — especially when contract language such as “any lawful purpose” leaves room for interpretations that AI developers consider risky.

Public backlash, industry pressure, and government policy are now colliding in real time as these rules governing military AI deployments are being negotiated.

The episode also reveals how quickly AI systems can become embedded in critical infrastructure. Anthropic’s Claude models are already integrated into military data-analysis platforms used in active operations, highlighting how difficult it can be for governments to disengage from a technology provider once those AI systems are operational.

What decision this affects:

The episode will likely influence how future AI-government partnerships define safeguards before deployment, rather than attempting to address those concerns after contracts are signed.

AI companies and governments will increasingly need to negotiate clear governance frameworks — covering surveillance limits, autonomous systems, oversight mechanisms, and technical safeguards — before advanced AI systems are integrated into national security environments.

The OpenAI–Anthropic dispute shows that the future of military AI will not be shaped by technology alone — it will be determined by how governments use their procurement power and how AI developers enforce the safeguards they believe should govern that use.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.
