Pentagon presses Anthropic for unrestricted Claude access, threatening to use legal leverage

As it races to deploy generative AI on classified systems, the Pentagon is also testing how much control any vendor can retain over a model once that model becomes operationally important. That tension has surfaced in a growing conflict with Anthropic over Claude, which third parties have already certified for use in sensitive government settings.

Defense Secretary Pete Hegseth gave Anthropic Chief Executive Dario Amodei a deadline of a few weeks to sign a document granting the military complete access to the model, according to people familiar with the meeting. Officials were also weighing whether to invoke the Defense Production Act, an escalatory tool typically associated with industrial mobilization.

At the center of the dispute is a practical question with large engineering implications: can frontier models be pushed into classified workflows while private-sector “usage policies” still technically constrain what the tools may be asked to do? The Pentagon’s position, as described by officials and people briefed on the negotiations, is that statutory law already governs government conduct, and that extra vendor guardrails add drag, delay, and ambiguity to programs meant to scale quickly across commands and classification levels.

That push is consistent with a broader internal agenda. Recent memo-driven reforms mandate data access and faster fielding of AI, including genai.mil, a popular in-house portal that has spread to more than 3 million Defense Department users. The department’s strategy also calls for rapid experimentation, monthly progress reports, and the removal of bureaucratic blockers to adoption, such as faster authorization processes and cross-domain access to data.

Anthropic, however, has sought written limitations. According to people privy to the talks, the company has asked the department to accept guardrails barring Claude from use in domestic mass surveillance and preventing the model from making final targeting decisions without a human in the loop. One senior Pentagon official, asked about the dispute, responded: “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders.”

The conflict is not only about policy but also about reliability in high-stakes environments. Large language models can produce fluent but incorrect text, a failure mode often called hallucination, which is harder to detect when the underlying sources are classified and largely unauditable. Defense technologists and researchers have cautioned that such errors can multiply quickly inside mission-planning or targeting pipelines, particularly when downstream systems use model outputs as inputs.
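A back-of-the-envelope sketch makes that compounding concern concrete (the per-stage accuracy and stage counts below are illustrative assumptions, not figures from the reporting): if each stage of a pipeline is independently correct with probability p, and every stage treats upstream output as ground truth, end-to-end reliability decays roughly as p raised to the number of stages.

```python
# Illustrative sketch of compounding pipeline error (hypothetical numbers).
# Assumption: each stage is independently correct with probability p and
# treats the previous stage's output as ground truth.

def pipeline_reliability(p: float, stages: int) -> float:
    """Probability that every stage in the chain is correct: p ** stages."""
    return p ** stages

if __name__ == "__main__":
    p = 0.95  # assumed per-stage accuracy, not a measured value
    for n in (1, 3, 5, 10):
        print(f"{n:>2} stages: {pipeline_reliability(p, n):.1%} end-to-end")
```

Under these assumptions, ten chained stages at 95% per-stage accuracy yield roughly 60% end-to-end reliability, which is why unaudited model output feeding downstream systems worries reviewers.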

Pentagon officials have framed the standoff as a procurement dispute. Hegseth told Amodei that when the government buys airliners, it does not let manufacturers dictate how the military uses the aircraft, and that the same should apply to Claude. At the same time, Pentagon officials have reportedly weighed alternatives: one senior official cited xAI’s Grok as ready to work within classified environments and said other AI companies were not far behind.

Officials have also floated another form of pressure: designating Anthropic a “supply chain risk,” a label that would likely sideline the company in government work. Anthropic has indicated that talks continue. “We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” a spokesperson said.

To engineers who build secure AI deployments, the controversy exposes an unresolved interface problem: modern models are expected to behave like flexible infrastructure, yet their providers treat them as controlled services. As more capability moves onto classified networks, the mechanics of access, oversight, and contractual control are becoming as material as model performance.
