
Humans must have override power over military AI

For years, U.S. defense officials and Washington think-tankers alike have debated whether the future of our military could — or should — look a little less human.

Already, the U.S. military has started to rely on technology that employs machine learning, artificial intelligence (AI), and big data — raising ethical questions along the way. While these technologies have countless beneficial applications, ranging from threat assessment to preparing troops for battle, they rightfully evoke concerns about a future in which Terminator-like machines take over.

But pitting man against machine misses the point. There’s room for both in our military future — as long as machines aid human decision-making, rather than replace it.

Military AI and machine learning are here

Machine learning technology, a type of AI that allows computer systems to process enormous data sets and “learn” to recognize patterns, has rapidly gained steam across many industries. These systems can comb through massive amounts of data from multiple sources and then make recommendations based on the patterns they detect — all in a matter of seconds.

That makes AI and machine learning useful tools for humans who need to make well-thought-out decisions at a moment’s notice. Consider doctors, who may only have a few minutes with each patient but must make potentially life-altering diagnoses. Some hospitals are using AI and machine learning to identify heart disease and lung cancer before a patient ever shows symptoms.

The same goes for military leaders. When making a strategic choice that could have a human cost, officials must be able to process all the available data as quickly as possible to make the most informed decision.

AI is helping them do so. Some systems, for example, can take two-dimensional surveillance images and create a detailed, three-dimensional model of a space. That helps officials chart a safe path forward for their troops through a previously unexplored area.

A human thumbs-up

Whether in an emergency room or on the battlefield, these machine learning applications have one thing in common: they do not have the power to make the final decision. In the end, it’s the doctor who makes the diagnosis — and the officer who gives the order.

Companies developing new military AI or machine learning technologies have a responsibility to help keep it that way. They can do so by outfitting their innovations with a few critical guardrails.

For one, humans must have the power to override AI at any point. Computer algorithms may be able to analyze piles of data and provide helpful recommendations for action. But machines can’t possibly grasp the complexity or novelty of the ever-changing factors influencing a strategic operation.

Only humans can think through the long-term consequences of military action. Therefore, humans must be able to decline a machine’s recommendations.

Companies developing military AI should give users options to make decisions manually, without technological support; in a semi-automated fashion; or in a fully automated manner, with the ability to override. The goal should be to develop AI that complements — rather than eliminates the need for — uniquely human decision-making capabilities that enable troops to respond effectively to unforeseen circumstances.
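As a rough sketch of what that might look like in practice, the Python snippet below gates every machine recommendation behind a human decision. All of the names here (DecisionMode, Recommendation, resolve_action) and the scenario are hypothetical, invented for illustration rather than drawn from any real system; the point is only that the operator’s choice always wins when an override is requested.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class DecisionMode(Enum):
    """The three operating modes described above."""
    MANUAL = auto()          # the operator decides with no algorithmic input
    SEMI_AUTOMATED = auto()  # the system recommends; the operator chooses
    AUTOMATED = auto()       # the system acts unless the operator overrides


@dataclass
class Recommendation:
    action: str
    confidence: float
    rationale: str  # a human-readable note on why the action was suggested


def resolve_action(mode: DecisionMode,
                   recommendation: Optional[Recommendation],
                   operator_decision: Optional[str],
                   operator_override: bool) -> str:
    """Return the action to carry out, always deferring to the human operator."""
    if mode is DecisionMode.MANUAL or operator_override:
        # Manual mode and the override path both ignore the machine entirely.
        if operator_decision is None:
            raise ValueError("A human decision is required here.")
        return operator_decision
    if mode is DecisionMode.SEMI_AUTOMATED:
        # The recommendation is advisory; only the operator's choice executes.
        return operator_decision if operator_decision else "hold: awaiting operator"
    # Fully automated mode still exposes the override path handled above.
    if recommendation is None:
        raise ValueError("Automated mode requires a recommendation.")
    return recommendation.action


# Example: the operator overrides an automated recommendation.
rec = Recommendation(action="reroute convoy via route B",
                     confidence=0.82,
                     rationale="Route A matches patterns seen before recent obstructions.")
print(resolve_action(DecisionMode.AUTOMATED, rec,
                     operator_decision="hold position", operator_override=True))
# -> hold position
```

A design along these lines keeps the fully automated path available for speed while guaranteeing that a human can step in at any point.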

A peaceful transfer of power

Military machine learning systems also need a clear chain of command. Most U.S. military technologies are developed by private firms that contract with the U.S. government. When the military receives those machines, it’s often forced to rely on the private firm for ongoing support and upkeep of the system.

That shouldn’t be the case. Companies in the defense industry should build AI systems and computer algorithms so that military officials can one day assume full control over the technology. That will ensure a smooth transition between the contractor and the military — and that the AI is directed toward clear goals, free of any conflicts of interest.

To keep military costs low, AI systems and machine learning programs should be adaptable, upgradeable and easy to install across multiple applications. This will enable officials to move the technology from vehicle to vehicle and draw on its analysis at a moment’s notice. It will also allow military personnel to develop more institutional knowledge about the system, enabling them to better understand and respond to the AI’s recommendations.

In the same vein, companies can continue working on algorithms that can explain why they made a certain recommendation — just as any human can. That technology could help to develop a better sense of trust in and more efficient oversight of AI systems.
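To give a rough sense of what such transparency can look like, the toy sketch below trains a small decision tree with scikit-learn on invented data and then reports which inputs most influenced the model. The feature names and numbers are made up for illustration, and real explainability work goes well beyond these global importance scores, but even this level of visibility gives an operator something concrete to question.

```python
# A toy illustration of an "explainable" recommendation, assuming scikit-learn
# is available. The features and data are invented purely for this sketch.
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["sensor_confidence", "terrain_difficulty", "threat_reports"]

# Each row is a past situation; each label marks whether the route was
# ultimately judged safe (1) or not (0).
X = [
    [0.90, 0.2, 0],
    [0.80, 0.3, 1],
    [0.40, 0.7, 3],
    [0.30, 0.9, 2],
    [0.95, 0.1, 0],
    [0.20, 0.8, 4],
]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

situation = [[0.35, 0.85, 2]]
recommended_safe = bool(model.predict(situation)[0])

# Report which inputs most shaped the trained model, so an operator can
# sanity-check the recommendation instead of accepting it blindly.
print("Route judged safe:", recommended_safe)
for name, weight in zip(FEATURES, model.feature_importances_):
    print(f"  {name}: importance {weight:.2f}")
```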

Military decision-making doesn’t have to be a zero-sum game. With the right guardrails, AI systems and machine learning algorithms can help commanders make the most informed decisions possible — and stave off a machine-controlled future in the process.

Kristin Robertson is president of Space and C2 Systems at Raytheon Intelligence & Space.

