Between January 2023 and December 2024, Chinese military purchasers issued 2,857 AI-related award announcements. That procurement footprint says more about the current competition over AI power than any terms-of-service dispute does: one system is formalizing acquisition at scale, while the other is still deliberating whether such work belongs inside its norms at all.

The friction is not new. The 2018 backlash over Project Maven demonstrated how workforce politics can collide with national-security imperatives. Thousands of Google workers circulated a letter warning that machine learning applied to interpret video could be used to sharpen the targeting of drone strikes, and urged the company to get out of "warfare technology." The episode was one of the first signs that commercial AI talent would not automatically bend to defense needs, even when the technical effort was framed as analysis rather than weapons.
What changes by 2026 is the government's posture. The American system is increasingly shaped by the view that state-of-the-art AI sits too close to critical infrastructure to be governed solely by the policies of private entities. Hence the push to standardize how models may be used, and by whom. Most explicitly, the Pentagon line has been an "all lawful purposes" standard for government use, accompanied by the caution that vendors may not self-define the operating rules once their systems have been deployed in defense spaces.
That requirement collides with a fact of life inside frontier model developers: usage constraints are not just branding, they are architecture. Guardrails are built into training choices, evaluation, monitoring, and the refuse/comply logic that shapes outputs. Where a department demands wider latitude, the negotiation is not merely about legal phrasing: it may require changes to how a model is tested, audited, and deployed, and to the edge cases where the system's reliability and accountability are hardest to establish.
A different concern, more acute internally, is not autonomous weapons but scope creep. According to one summary, the current "law of mass surveillance does not even consider AI," meaning automation can accelerate both the gathering and the processing of data before oversight mechanisms designed for an earlier era can adapt. That is partly an ethical debate, but it is an administrative one as well: which body holds responsibility when AI-generated inferences feed consequential decisions, and what paper trail exists when those inferences are wrong?
The Pentagon has not been idle on governance. The Chief Digital and Artificial Intelligence Office frames its mission as "enable," "speed," and "scale," a three-word statement that treats AI as an enterprise-adoption problem rather than a research project. Its readiness material emphasizes "assured and governed application" consistent with the DoD Responsible AI Principles, and asks operational questions rather than philosophical ones: accountability for outputs, stakeholder support across the lifecycle, and measurable alignment to a Responsible AI toolkit.
Nonetheless, structure has a way of determining results. A 2025 memo realigned the CDAO under USD(R&E), a step that some observers worried could add bureaucratic obstacles to deploying AI solutions. The history matters: the department has restructured AI leadership multiple times, from the JAIC to the CDAO, and each reorganization carried an implicit message about priority, authority, and who can compel change across the services.
Capacity, however, is where the argument shifts from values to leverage. The U.S. has been working to bring multiple commercial AI companies into classified settings, which reduces reliance on any single vendor and the bargaining power that comes with exclusivity. That is a technical and security gambit, but also a signal that the Pentagon is already planning for an ecosystem in which models are interchangeable commodities, plugged into government networks on the government's policy terms.
The Chinese model, as described in the CSET analysis above, places its emphasis elsewhere: habit formation. Of the 2,857 notices analyzed, 2,090 disclosed contract values totaling up to 535.5 million, and notice volume jumped 16% (from 1,039 in May–December 2023 to 1,249 in the same period of 2024), with growth in total values as well. The supplier mix mattered, too: nontraditional suppliers won awards 764 times, underscoring a pipeline built to pull civilian capability into defense demand without a public debate over permission.
The abiding lesson for engineering is procedural, not ideological. AI advantage is increasingly defined by the ability to convert governance into repeatable procurement, to prove models in mission-like contexts, and to make guardrails operational rather than letting them harden into veto points. The hard part is not building models; it is building the machinery that makes their use routine.

