
To Defeat Enemy Drone Swarms, Troops May Have to Take a Back Seat to Machines, General Says

Sgt. Nicholas Maxim, acting as a member of the opposing force, builds and launches a simulated drone swarm during Exercise Dynamic Front 18 at the Joint Multinational Simulation Center, Grafenwoehr, Germany, in March 2018. (U.S. Army/Staff Sgt. Kathleen V. Polanco, 7th Army Training Command)

The Army's top modernization official said Monday that the Pentagon may have to relax its rules on human control over artificially intelligent combat systems in order to defeat swarms of enemy drones that often move too fast for soldiers to track.

All branches of the U.S. military have expressed interest in using artificial intelligence, or AI, for faster target recognition; until now, however, the Defense Department has stressed that humans, not machines, will always make the decision to fire deadly weapons.

But as small unmanned aerial systems, or UAS, proliferate around the world, Army modernization officials are recognizing that swarms of fast-moving drones will be difficult to defeat without highly advanced technology.


“It just becomes very hard when you are talking about swarms of small drones — not impossible, but harder,” Gen. John Murray, head of Army Futures Command, told an audience Monday during a webinar at the Center for Strategic & International Studies.

Murray said Pentagon leaders may need to discuss how much human control over AI is required to keep such systems safe while still remaining effective against threats such as drone swarms.

“When you are defending against a drone swarm, a human may be required to make that first decision, but I am just not sure any human can keep up,” he said. “How much human involvement do you actually need when you are [making] nonlethal decisions from a human standpoint?”
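
Murray's distinction between a human-made first decision and machine-speed follow-on responses can be sketched in code. The class names, countermeasure options and approval callback below are purely hypothetical illustrations of a "human-on-the-loop" gate that treats lethal and nonlethal actions differently; they do not describe any actual Army system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Countermeasure(Enum):
    JAM = auto()        # nonlethal: RF jamming of a drone's control link
    KINETIC = auto()    # lethal: interceptor or gun engagement

@dataclass
class Track:
    track_id: int
    is_hostile: bool

class EngagementPolicy:
    """Hypothetical human-on-the-loop policy: a human authorizes the first
    engagement of a swarm; follow-on *nonlethal* responses may then be
    machine-initiated, while lethal responses still require approval."""

    def __init__(self):
        self.swarm_authorized = False

    def decide(self, track: Track, proposed: Countermeasure, human_approves) -> bool:
        if not track.is_hostile:
            return False
        # The first engagement decision always goes to a human.
        if not self.swarm_authorized:
            self.swarm_authorized = human_approves(track, proposed)
            return self.swarm_authorized
        # After initial authorization, nonlethal responses run at machine speed.
        if proposed is Countermeasure.JAM:
            return True
        # Lethal responses still require a human in the loop.
        return human_approves(track, proposed)
```

The only design point the sketch makes is the one Murray raises: where to draw the line between decisions a human must make and decisions a machine can make fast enough to keep up with a swarm.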

The Army is experimenting with AI for faster, more accurate target recognition. In the past, the service’s mechanized combat units would give potential new tank gunners a test using flashcards with pictures of armored vehicles used around the world.

“New gunners got tests with flashcards — pictures of various armored vehicles; you went through 15, 20, 25, 30 of these. If a soldier got 80 percent of them right, he was put in the gunner seat of a very lethal vehicle,” Murray said.
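
The AI alternative to that flashcard test is, in essence, an image classifier. As a minimal sketch only, the example below fine-tunes nothing and uses invented class labels; it simply shows the shape of a pretrained-network approach to vehicle recognition, not any system the Army has fielded.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical vehicle classes; a real system would use a curated, labeled dataset.
CLASSES = ["T-72", "BMP-2", "M1A2", "Bradley", "civilian truck"]

# Start from a pretrained backbone and replace the final layer so it predicts
# the classes above (the fine-tuning step itself is not shown here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def identify(image_path: str) -> str:
    """Return the most likely vehicle class for a single image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)
    return CLASSES[int(probs.argmax())]
```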

During an Army exercise called Project Convergence at Yuma Proving Ground, Arizona, in September, modernization officials tested AI-enhanced systems combined with satellites in low Earth orbit and other technologies to drastically reduce the time it takes to identify, track and destroy incoming aerial threats.

“The operators we trained out at Project Convergence were routinely getting 99 to 98 percent correct, so in many ways AI has the ability to make us safer,” Murray said. “If you think about a future battlefield, the one way I have described it in the past is it will be hyperactive. I think decisions will have to be made at such a pace that it is going to be incredibly difficult for a human decision maker to keep up with it.”
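
The point of such experiments is compressing the time between identification and engagement. As a loose illustration only, the sketch below times the stages of a toy "kill chain"; the stage names and sleep durations are invented placeholders, not a description of anything demonstrated at Yuma.

```python
import time
from typing import Callable, Dict

def timed_kill_chain(stages: Dict[str, Callable[[], None]]) -> Dict[str, float]:
    """Run each stage of a simplified identify -> track -> engage chain and
    report per-stage latency in seconds. Stage contents are placeholders; a
    real chain spans sensors, networks, decision aids and shooters."""
    timings = {}
    for name, stage in stages.items():
        start = time.perf_counter()
        stage()
        timings[name] = time.perf_counter() - start
    return timings

if __name__ == "__main__":
    # Stand-in stages: sleeps model processing time at each step.
    report = timed_kill_chain({
        "identify": lambda: time.sleep(0.05),
        "track":    lambda: time.sleep(0.02),
        "engage":   lambda: time.sleep(0.01),
    })
    print(report)
```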


Source: Military News

