DefenseNews

Air Force official’s musings on rogue drone targeting humans go viral

U.S. Air Force

WASHINGTON — The U.S. Air Force walked back comments reportedly made by a colonel regarding a simulation in which a drone outwitted its artificial intelligence training and killed its handler, after the claims went viral on social media.

Air Force spokesperson Ann Stefanek said in a June 2 statement no such testing took place, adding that the service member’s comments were likely “taken out of context and were meant to be anecdotal.”

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “This was a hypothetical thought experiment, not a simulation.”

The killer-drone-gone-rogue episode was initially attributed to Col. Tucker “Cinco” Hamilton, the chief of AI testing and operations, in a recap from the Royal Aeronautical Society’s FCAS23 Summit in May. The summary was later updated to include additional comments from Hamilton, who said he misspoke at the conference.

“We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” Hamilton was quoted as saying in the Royal Aeronautical Society’s update. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”

Hamilton’s assessment of the plausibility of rogue-drone scenarios, however theoretical, coincides with stark warnings in recent days by leading tech executives and engineers, who wrote in an open letter that the technology has the potential to wipe out humanity if left unchecked.

Hamilton is also commander of the 96th Operations Group at Eglin Air Force Base in Florida, which falls under the purview of the 96th Test Wing. Defense News on Thursday reached out to the test wing to speak to Hamilton, but was told he was unavailable for comment.

In the original post, the Royal Aeronautical Society said Hamilton described a simulation in which a drone fueled by AI was given a mission to find and destroy enemy air defenses. A human was supposed to give the drone its final authorization to strike or not, Hamilton reportedly said.

But the drone algorithms were told that destroying the surface-to-air missile site was its preferred option. So the AI decided that the human controller’s instructions not to strike were getting in the way of its mission, and then attacked the operator and the infrastructure used to relay instructions.

“It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton was quoted as saying. “We trained the system, ‘Hey don’t kill the operator, that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

The Defense Department has for years embraced AI as a breakthrough technology advantage for the U.S. military, investing billions of dollars and creating the the Chief Digital and Artificial Intelligence Office in late 2021, now led by Craig Martell.

The Pentagon is seen from Air Force One as it flies over Washington, March 2, 2022.

More than 685 AI-related projects are underway at the department, including several tied to major weapon systems, according to the Government Accountability Office, a federal auditor of agencies and programs. The Pentagon’s fiscal 2024 budget blueprint includes $1.8 billion for artificial intelligence.

The Air and Space forces are responsible for at least 80 AI endeavors, according to the GAO. Air Force Chief Information Officer Lauren Knausenberger has advocated for greater automation in order to remain dominant in a world where militaries make speedy decisions and increasingly employ advanced computing.

The service is ramping up efforts to field autonomous or semiautonomous drones, which it refers to as collaborative combat aircraft, to fly alongside F-35 jets and a future fighter it calls Next Generation Air Dominance.

The service envisions a fleet of those drone wingmen that would accompany crewed aircraft into combat and carry out a variety of missions. Some collaborative combat aircraft would conduct reconnaissance missions and gather intelligence, others could strike targets with their own missiles, and others could jam enemy signals or serve as decoys to lure enemy fire away from the fighters with human pilots inside.

The Air Force’s proposed budget for FY24 includes new spending to help it prepare for a future with drone wingmen, including a program called Project Venom to help the service experiment with its autonomous flying software in F-16 fighters.

Under Project Venom, which stands for Viper Experimentation and Next-gen Operations Model, the Air Force will load autonomous code into six F-16s. Human pilots will take off in those F-16s and fly them to the testing area, at which point the software will take over and conduct the flying experiments.

The Royal Aeronautical Society’s post on the summit said Hamilton “is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight.”

The Air Force plans to spend roughly $120 million on Project Venom over the next five years, including a nearly $50 million budget request for FY24 to kick off the program. The Air Force told Defense News in March it hadn’t decided which base and organization will host Project Venom, but the budget request asked for 118 staff positions to support the program at Eglin Air Force Base.

In early 2022, as public discussions about the Air Force’s plans for autonomous drone wingmen gathered steam, former Air Force Secretary Deborah Lee James told Defense News that the service must be cautious and consider ethical questions as it moves toward conducting warfare with autonomous systems.

James said that while the AI systems in such drones would be designed to learn and act on their own, such as taking evasive maneuvers if it were in danger, she doubted the Air Force would allow an autonomous system to shift from one target to another on its own if that would result in human deaths.


Author: Stephen Losey
Source: DefenseNews

Related posts
AI & RoboticsNews

Nvidia and DataStax just made generative AI smarter and leaner — here’s how

AI & RoboticsNews

OpenAI opens up its most powerful model, o1, to third-party developers

AI & RoboticsNews

UAE’s Falcon 3 challenges open-source leaders amid surging demand for small AI models

DefenseNews

Army, Navy conduct key hypersonic missile test

Sign up for our Newsletter and
stay informed!