After a lethal mission, the Pentagon and AI firms collide over who controls code

“The use of violence is the role of the military, and when you offer anything to the Defense Department, you are likely making it available to an agency that is going to harm people,” said former Air Force Secretary Frank Kendall.


That framing, blunt by Washington standards, captures the tension now running through the Pentagon's push into frontier AI: the government wants systems it can command at operational speed, while at least some of the companies that build those systems want a continuing voice in how they are used. The tension has been most visible at Anthropic, whose Claude model is already incorporated into a growing range of government workflows and is one of the few leading models approved for use in some classified environments.

Until recently, the pairing looked clean: fast-moving AI capability and a Pentagon determined to modernize. Then, in early January, a military operation raised alarms internally about what it means to provide so-called “general-purpose” intelligence to an institution devoted to coercion, and what responsibilities, if any, come with doing so. People who discussed the issue privately said Anthropic had weighed whether to proceed under terms that defense leaders have repeatedly sought to standardize across vendors: the right to use the technology “for all legal purposes.”

The Pentagon's posture is shaped by a larger re-architecture of how software, models, and data access are acquired and fielded. A January strategy package outlined an “AI-first” acceleration, with branded programs including Swarm Forge and GenAI.mil intended to shrink the gap between prototype and deployment and to institutionalize iterative experimentation. The same documents emphasized removing barriers, securing mutual permission to use capabilities, and guaranteeing access to data so that models can be trained and evaluated on the government's own holdings. The policy's center of gravity, as the strategy puts it, is any legal application of AI systems rather than customized limitations negotiated case by case.

Anthropic has said it supports U.S. national security with frontier AI and that Claude is used across the government “in accordance with our Usage Policy.” The company has also denied discussing specific operations with the Defense Department and said it has not raised concerns with industry partners beyond routine technical discussions. Still, officials have suggested that hesitancy itself can be a vulnerability, especially in a world where models increasingly sit upstream of other contractors, integrators, and data platforms.

That is the point where a procurement dispute becomes a systems-engineering problem. Modern AI deployments are less about delivering a single tool than about embedding a learning system into data streams, authorization policies, and downstream applications that change continuously. A model fielded to triage documents one week may influence targeting analysis the next, even if its developers never intended that use. The Pentagon's current strategy tries to manage this ambiguity with contract language and centralized management: move the department onto common terms, retain access to the latest versions of models, and build shared infrastructure so capabilities can be reused and scaled rapidly. Meanwhile, Congress has been writing more guardrails into defense policy, such as Section 1535's Artificial Intelligence Futures Steering Committee and the establishment of sandbox environments for experimentation and deployment, a recognition that adoption and risk mitigation now have to be co-engineered.

Top military officials have also described a need to operationalize AI not just as automation but as global decision support. Joint Chiefs Chairman Gen. Dan Caine has called for a “global risk algorithm” with numerous variables and unequal weights to help leaders “see and sense the risk” across theaters. The implication is simple: in a department that assesses readiness in minutes and seconds, AI will be alluring wherever humans are slow.
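At its simplest, a many-variable algorithm with unequal weights can be read as a weighted aggregation of normalized indicators. The sketch below is purely illustrative; the variable names, weights, and values are hypothetical and do not come from any Defense Department system.

```python
# Illustrative sketch only: a "risk algorithm" read as a weighted
# aggregation of theater indicators. All variables, weights, and
# values here are hypothetical, not from any DoD system.

def risk_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized indicators (each in [0, 1])."""
    total_weight = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in weights) / total_weight

# Hypothetical theater snapshot: unequal weights reflect that some
# signals are judged to matter more than others.
indicators = {"force_readiness": 0.7, "logistics_strain": 0.4, "adversary_activity": 0.9}
weights = {"force_readiness": 3.0, "logistics_strain": 1.0, "adversary_activity": 5.0}

print(round(risk_score(indicators, weights), 3))  # 0.778
```

The hard part in practice is not the arithmetic but the inputs: choosing the variables, normalizing them across theaters, and defending the weights, which is exactly where human judgment re-enters the loop.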

In public warnings, Anthropic's leadership has cautioned about autonomous weapons and mass surveillance, arguing that powerful systems without accountability can be turned against the societies that built them, even in democracies. The Pentagon's counterargument is that lawful authority is the limit and that speed is a strategic requirement. Between those stances sits an empirical question that procurement language can hardly resolve: once a model is in the stack, who actually controls what it does?
