The head of a company developing artificial intelligence for the US Air Force (USAF) says its algorithm has shown capacity to improvise and develop unexpected tactics during air combat exercises.
San Diego-based Shield AI has partnered with the Pentagon on the Air Combat Evolution (ACE) project, which is using a specially modified Lockheed Martin F-16 fighter – the X-62A VISTA – to push the limits of autonomous flight.
The ACE programme – a joint effort between the USAF and the Defense Advanced Research Projects Agency (DARPA) – made headlines recently when it pitted human fighter pilots against an AI-controlled jet for the first time.
While DARPA has declined to reveal many details about the man-against-machine dogfighting drills, the secretive agency has described the effort as a success, with Shield AI’s algorithm safely executing at least 21 sorties over 10 months. The demonstrations occurred between December 2022 and September 2023 over Edwards AFB in California.
Those flights featured “increasingly complex air combat scenarios”, according to DARPA, including offensive high-aspect nose-to-nose engagements that involved the dogfighting jets passing within 610m (2,000ft) of each other at speeds of 1,043kt (1,931km/h).
Officials with the USAF and DARPA say the programme is intended to establish the competence of AI pilots and build trust in autonomous systems, rather than to create a better dogfighter.
However, in an interview with FlightGlobal, Shield AI co-founder Brandon Tseng reveals that the company’s algorithm showed a strong aptitude for dogfighting – officially known as within-visual-range (WVR) combat.
“It will improvise and come up with novel tactics,” Tseng told FlightGlobal on 7 April at the annual Special Operations Forces Week conference in Tampa, Florida.
Shield AI’s artificial intelligence agent was trained using machine learning techniques, reinforced with input from flesh-and-blood USAF aviators. Its development involved repeatedly subjecting the AI system to simulated air combat scenarios, with the outcome reviewed and critiqued by fighter pilots.
The result was a flexible algorithm capable of completing multiple fighter aircraft mission sets, including combat escort and WVR engagement.
“Every single one of those missions can be learned,” says Tseng, who previously served in the US Navy as a SEAL commando.
“Learning” those skills requires the AI system to complete millions of simulated runs, including battling against other competing algorithms. Tseng describes that process as a gladiator arena, in which the best-performing AI agents continuously advance. That Darwinian approach apparently yielded a product that was overwhelmingly successful in a simulated environment, even against experienced USAF pilots.
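In machine-learning terms, Tseng's "gladiator arena" describes a form of evolutionary self-play: agents fight simulated duels, the winners advance, and the pool is refilled from the survivors. The toy sketch below illustrates only the selection mechanic – the scalar "skill" score, the mutation scheme and the population size are illustrative assumptions, not Shield AI's actual pipeline, where each agent would be a flight-control policy evaluated in a combat simulator.

```python
import random

def duel(agent_a, agent_b):
    # Toy stand-in for a simulated engagement: the stronger agent
    # wins with probability proportional to its share of total skill.
    total = agent_a["skill"] + agent_b["skill"]
    return agent_a if random.random() < agent_a["skill"] / total else agent_b

def gladiator_round(population, rng=random):
    # Pair agents off at random, keep each duel's winner, then refill
    # the population with slightly mutated copies of the survivors.
    rng.shuffle(population)
    survivors = [duel(a, b) for a, b in zip(population[::2], population[1::2])]
    offspring = [{"skill": max(0.01, s["skill"] + rng.gauss(0, 0.1))}
                 for s in survivors]
    return survivors + offspring

random.seed(0)
population = [{"skill": random.uniform(0.1, 1.0)} for _ in range(64)]
initial_mean = sum(a["skill"] for a in population) / len(population)

for generation in range(200):
    population = gladiator_round(population)

final_mean = sum(a["skill"] for a in population) / len(population)
print(f"mean skill: {initial_mean:.2f} -> {final_mean:.2f}")
```

Because winners are biased towards higher skill while mutation is zero-mean, average skill drifts upward over generations – the Darwinian dynamic Tseng describes.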
“Lots of pilots have gone up against it, and we win,” Tseng says. “We win 99.9% of the time.”
That figure refers to earlier engagements in a simulated environment. DARPA has declined to reveal specific outcomes of the physical drills between the AI and USAF pilots.
The Shield AI agent achieved that success rate by quickly employing tactics more dangerous or aggressive than human pilots would likely attempt. For instance, the technology developed a preference for using the F-16’s 20mm Vulcan cannon – typically a last resort for modern combat pilots.
Through repeated simulated close-range battles, the software determined that the tactic resulted in a 90% enemy-kill rate when executed within the first 10sec of a dogfight, Tseng says.
“It became so good at controlling the gun, and the aircraft, that it would always basically go for the kill… in a head-on engagement,” he adds. “It is a very dangerous manoeuvre to try and do that head-on.”
Close-range fighter tactics generally call for gaining the upper hand by coming in behind an enemy aircraft, where its vulnerable aft aspect is exposed.
“Human pilots try to get behind the aircraft like Top Gun,” Tseng says, referencing the popular film. “So this was a novel behaviour.”
Notably, such close-in fighter combat would be uncommon in the modern era, where pilots more typically engage enemy aircraft beyond visual range, relying on precision-guided missiles.
The results of the ACE project were so profound that the team behind it was a finalist for the 2023 Collier Trophy recognising excellence in aeronautics.
Air force secretary Frank Kendall called the ACE results a “transformational moment” for military aviation.
“The potential for autonomous air-to-air combat has been imaginable for decades, but the reality has remained a distant dream up until now,” Kendall said.
To demonstrate the technology’s potential, Kendall recently rode in the front seat of the X-62 VISTA during a flight controlled by Shield AI’s algorithm.
Kendall is among Washington officials leading a generational modernisation of the USA’s military forces, with autonomy at the centre of the push.
Bedevilled by recruiting challenges, limited resources and demand for air and sea power, the Pentagon is seeking to multiply its available forces with legions of new uncrewed platforms.
Kendall has made autonomy an integral part of the USAF’s future force design, particularly in the realm of air combat. He has repeatedly described the service’s forthcoming sixth-generation air superiority fighter as a “family of systems” rather than a single platform.
While the manned Next Generation Air Dominance fighter is to be the nucleus of that family, the highly classified aircraft is expected to team with numerous uncrewed assistants, which could directly engage enemy targets, jam communications and scramble radars.
Such autonomous jets will need agents like that developed by Shield AI. Tseng is bullish on that prospect, noting the company’s technology “was built from the beginning to work with every aircraft”.
Despite promising results, the challenges associated with deploying highly capable AI fighter pilots remain significant.
DARPA says that between 2000 and 2016 there were 27 mid-air collisions involving US F-16s and Boeing F/A-18s engaged in WVR combat training. Those incidents destroyed 30 aircraft, significantly damaged another 23 and killed 12 aviators.
Given such dangers, DARPA says establishing trust between pilots and the nascent AI technology will be foundational to the success of any autonomous flight programme.
Ultimately, though, the risks associated with human flight may accelerate adoption of AI-powered military aircraft. Autonomous jets promise to be far cheaper to build and operate than conventional fighters, which require complex safety features like ejection seats and pilots who must complete years of expensive training.
Shield AI has already signed a cooperative research and development agreement with Kratos, an industry leader in developing autonomous jets. The firms completed a live test flight with Kratos’s MQM-178 target drone less than 180 days after the deal was signed.
Comparing that to the two-and-a-half-year ramp for the X-62 ACE project with DARPA, Tseng predicts the pace of autonomous flight will only continue to accelerate.
“It’s not theoretical in any way,” he says.