It’s one thing to have computer intelligence beat people in a video-game dogfight. It is another to watch it execute a vertical climb at supersonic speed, close within 2,000 feet of a human-flown F-16, and hold its own in an actual aerial engagement. That is exactly what DARPA’s Air Combat Evolution (ACE) program revealed in April 2024: its X-62A VISTA, a highly modified F-16, had engaged a manned fighter above Edwards Air Force Base.

The X-62A’s AI isn’t just an autopilot. It is a machine-learning combat system, trained on massive databases of historical flight and engagement scenarios, that can make tactical choices in microseconds. As Lt. Col. Ryan Hefron explained, “The purpose of the test was to demonstrate we can safely test these AI agents in a safety-critical air combat environment.” Safety came first: a human pilot occupied the AI-flown cockpit, poised to take control, but never needed to during the 21 test flights leading up to the dogfights.
The program’s development curve follows the path of AI in other high-risk applications: beginning in simulation, proceeding through controlled real-world testing, and culminating in high-speed, high-aspect engagements. In five initial simulated dogfights against human pilots, the AI compiled a perfect record, unencumbered by the physical and psychological limits that constrain human pilots. By December 2022 it was flying live training sorties, gradually building to sophisticated offensive and defensive maneuvers. The September 2023 engagements were the first within-visual-range, human-versus-AI dogfights in real fighter aircraft, with both sides making full-out nose-to-nose runs at 1,200 miles per hour.
What distinguished ACE was not only the AI’s agility but its adaptability. Engineers pushed over 100,000 lines of flight-critical software changes throughout the program, sharpening decision-making rules and adding layers of safety, including flight envelope protection, aerial and ground collision avoidance, and compliance with combat training regulations. In doing so, as the Department of Defense noted, the team was able to “pioneer new methods to train and test AI agent compliance with safety requirements” before formal verification guidelines for AI autonomy existed.
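The layered-safety approach described above can be pictured as a veto filter sitting between the AI agent and the flight controls: any proposed maneuver that violates an envelope or collision constraint is rejected before it reaches the aircraft. The sketch below is purely illustrative; the limit values, the `Maneuver` fields, and the `safety_gate` function are hypothetical stand-ins, not the X-62A’s actual software.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative limits only; real X-62A safety bounds are not public.
MAX_G = 9.0                # flight envelope protection (hypothetical)
MIN_ALTITUDE_FT = 2000.0   # ground-collision floor (hypothetical)
MIN_SEPARATION_FT = 500.0  # aerial-collision bubble (hypothetical)

@dataclass
class Maneuver:
    g_load: float               # commanded g-force
    target_altitude_ft: float   # altitude the maneuver would reach
    closest_approach_ft: float  # predicted separation from other aircraft

def safety_gate(cmd: Maneuver) -> Optional[Maneuver]:
    """Veto any AI-proposed maneuver that breaks a safety constraint.

    Returns the command unchanged if every check passes, or None so an
    outer control loop can fall back to a neutral recovery maneuver.
    """
    if abs(cmd.g_load) > MAX_G:
        return None  # exceeds the flight envelope
    if cmd.target_altitude_ft < MIN_ALTITUDE_FT:
        return None  # would breach the ground-collision floor
    if cmd.closest_approach_ft < MIN_SEPARATION_FT:
        return None  # would violate aerial separation
    return cmd

# An aggressive but legal maneuver passes; a ground-shaving one is vetoed.
assert safety_gate(Maneuver(8.5, 9000.0, 1200.0)) is not None
assert safety_gate(Maneuver(8.5, 1500.0, 1200.0)) is None
```

The design choice worth noting is that the gate is independent of the AI agent: the tactical policy can be retrained freely while the deterministic safety layer stays fixed and separately verifiable.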
The technological advantage comes down to time. As former Air Force Secretary Frank Kendall explained, “The best pilot you’re ever going to find is going to take a few tenths of a second to do something. The AI is going to do it in a microsecond; it’s gonna be orders of magnitude better performance.” And those times matter. In air combat, where positional advantage can shift in under a second, the AI’s speed of perception and action is decisive.
The ACE program is not an isolated experiment. It is a cornerstone of a broader U.S. push toward Collaborative Combat Aircraft (CCA): autonomous or semi-autonomous platforms designed to operate alongside manned fighters. The Air Force envisions fielding hundreds of such systems, and the Navy is developing carrier-capable variants; Anduril, Northrop Grumman, Boeing, and General Atomics are already under contract for conceptual designs. These “loyal wingmen” will have to work seamlessly with human pilots, exchange sensor information, and execute coordinated tactics, all tasks being tested in ACE’s AI-versus-human dogfights.
Other countries are keen to deploy similar technology. Saab’s Centaur AI pilot has already been flown on Gripen E fighters, battling human pilots without the need for a bespoke testbed aircraft. Turkey’s KAAN fighter will feature an onboard AI library, while China’s Red Eye program has already proved AI superiority in its own tests. The overlap of these programs hints at an imminent battlespace where autonomous decision-making is not a sideline but a core pillar of airpower.
Beyond maneuvering, AI is also transforming the command level of air warfare. In August 2025, the Pentagon tested Starsage, tactical control software from Raft AI, which served as an airborne battle manager for F-16s, F/A-18s, and F-35s. Starsage shortened decision cycles from minutes to seconds, providing real-time “picture calls” of enemy formations and dynamically updating mission plans. As CEO Shubhi Mishra succinctly put it, “It’s just data, and then execution on the data.” Pairing such high-order tactical AI with ACE’s instantaneous dogfighting autonomy promises a fully networked, AI-enabled kill chain.
For the Air Force, though, the test is more than technical. As Lt. Col. Hefron put it, ACE is “really about building trust in responsible AI”: demonstrating to pilots, commanders, and policymakers that autonomous systems can be both deadly and safe, able to perform aggressive combat maneuvers while adhering to rules of engagement and ethical limits. General David Allvin has called AI integration into everyday operations a “mixed bag,” underscoring that institutional adoption will matter as much as software performance in determining whether AI pilots become the norm in the next generation of aerial warfare.

