
The AI arms race has us on the road to Armageddon



It’s now a given that countries worldwide are battling for AI supremacy. To date, most of the public discussion surrounding this competition has focused on commercial gains flowing from the technology. But the AI arms race for military applications is racing ahead as well, and concerned scientists, academics, and AI industry leaders have been sounding the alarm.

Compared to existing military capabilities, AI-enabled technology can make decisions on the battlefield with mathematical speed and accuracy and never get tired. However, countries and organizations developing this tech are only just beginning to articulate ideas about how ethics will influence the wars of the near future. Clearly, the development of AI-enabled autonomous weapons systems will raise significant risks for instability and conflict escalation. However, calls to ban these weapons are unlikely to succeed.

In an era of rising military tensions and risk, leading militaries worldwide are moving ahead with AI-enabled weapons and decision support, seeking leading-edge battlefield and security applications. The military potential of these weapons is substantial, but ethical concerns are largely being brushed aside. Already they are in use to guard ships against small boat attacks, search for terrorists, stand sentry, and destroy adversary air defenses.

For now, the AI arms race is a cold war, mostly among the U.S., China, and Russia, but many worry it will become more than that. Driven by fear of other countries gaining the upper hand, the world's military powers have been leveraging AI for years — dating back at least to 1983 — to gain an advantage in the balance of power. This continues today. Famously, Russian President Vladimir Putin has said the nation that leads in AI will be the "ruler of the world."

How policy lines up behind military AI use

According to an article in Salon, diverse and ideologically distinct research organizations including the Center for a New American Security (CNAS), the Brookings Institution, and the Heritage Foundation have argued that America must ratchet up spending on AI research and development. A Foreign Affairs article argues that nations that fail to embrace leading technologies for the battlefield will lose their competitive advantage. Speaking about AI, former U.S. Defense Secretary Mark Esper said last year, "History informs us that those who are first to harness once-in-a-generation technologies often have a decisive advantage on the battlefield for years to come." Indeed, leading militaries are investing heavily in AI, motivated by a desire to secure operational advantages on the future battlefield.

Civilian oversight committees, as well as militaries, have adopted this view. Last fall, a U.S. bipartisan congressional report called on the Defense Department to get more serious about accelerating AI and autonomous capabilities. Created by Congress, the National Security Commission on AI (NSCAI) recently urged an increase in AI R&D funding over the next few years to ensure the U.S. is able to maintain its tactical edge over its adversaries and achieve “military AI readiness” by 2025.

In the future, warfare will pit "algorithm against algorithm," claims the new NSCAI report. Although militaries have continued to compete using weapon systems similar to those of the 1980s, the NSCAI report claims "the sources of battlefield advantage will shift from traditional factors like force size and levels of armaments to factors like superior data collection and assimilation, connectivity, computing power, algorithms, and system security." It is possible that new AI-enabled weapons could render conventional forces nearly obsolete, with rows of decaying Abrams tanks gathering dust in the desert in much the same way as mothballed World War II ships lie off the coast of San Francisco. Speaking to reporters recently, Robert O. Work, vice chair of the NSCAI, said of the international AI competition: "We have got … to take this competition seriously, and we need to win it."

The accelerating AI arms race

Work to incorporate AI into the military is already far advanced. For example, militaries in the U.S., Russia, China, South Korea, the United Kingdom, Australia, Israel, Brazil, and Iran are developing cybersecurity applications, combat simulations, drone swarms, and other autonomous weapons.

Caption: The Russian Uran-9 is an armed robot. Credit: Dmitriy Fomin via Wikimedia Commons. CC BY 2.0.

A recently completed “global information dominance exercise” by U.S. Northern Command pointed to the tremendous advantages the Defense Department can achieve by applying machine learning and artificial intelligence to all-domain information. The exercise integrated information from all domains including space, cyberspace, air, land, sea, and undersea, according to Air Force Gen. Glen D. VanHerck.

Gilman Louie, a commissioner on the NSCAI report, is quoted in a news article saying: “I think it’s a mistake to think of this as an arms race” — though he added, “We don’t want to be second.”

A dangerous pursuit

West Point has started training cadets to consider ethical issues that arise when humans cede some control over the battlefield to smart machines. Along with the ethical and political issues of an AI arms race come increased risks of triggering an accidental war. How might this happen? Any number of ways, from a misinterpreted drone strike to autonomous jet fighters acting on flawed algorithms.

AI systems are trained on data and reflect the quality of that data, along with any inherent biases and assumptions of those developing the algorithms. Gartner predicts that through 2023, up to 10% of AI training data will be poisoned by benign or malicious actors. That is significant, especially considering the security vulnerability of critical systems.

When it comes to bias, military applications of AI are presumably no different, except that the stakes are much higher than whether an applicant gets a good rate on car insurance. Writing in War on the Rocks, Rafael Loss and Joseph Johnson argue that military deterrence is an “extremely complex” problem — one that any AI hampered by a lack of good data will not likely be able to provide solutions for in the immediate future.

How about assumptions? In 1983, the world's superpowers drew near to accidental nuclear war, largely because the Soviet Union relied on software that made predictions based on false assumptions. Seemingly this could happen again, especially as AI increases the likelihood that humans will be taken out of decision-making. It is an open question whether the risks of such a mistake are higher or lower with greater use of AI, but Star Trek offered a vision in 1967 of how this could play out. In "A Taste of Armageddon," the risks of conflict had escalated to such a degree that war was outsourced to a computer simulation that decided who would perish.

Source: Star Trek, "A Taste of Armageddon."

There is no putting the genie back in the bottle. The AI arms race is well underway and leading militaries worldwide do not want to be in second place or worse. Where this will lead is subject to conjecture. Clearly, however, the wars of the future will be fought and determined by AI more than traditional “military might.” The ethical use of AI in these applications remains an open-ended issue. It was within the mandate of the NSCAI report to recommend restrictions on how the technology should be used, but this was unfortunately deferred to a later date.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.



Author: Gary Grossman, Edelman
Source: Venturebeat
