
AI Weekly: The Russia-Ukraine conflict is a test case for AI in warfare



As Russia’s invasion of Ukraine continues unabated, it’s becoming a test case for the role of technology in modern warfare. Destructive software — presumed to be the work of Russian intelligence — has compromised hundreds of computers at Ukrainian government agencies. On the other side, a loose group of hackers has targeted key Russian websites, appearing to bring down webpages for Russia’s largest stock exchange as well as the Russian Foreign Ministry.

AI, too, has been proposed — and is being used — as a way to help decisively turn the tide. As Fortune writes, Ukraine has been using autonomous Turkish-made TB2 drones to drop laser-guided bombs and direct artillery strikes. Russia’s Lancet drone, which the country reportedly used in Syria and could use in Ukraine, has similar capabilities, enabling it to navigate and crash into preselected targets.

AI hasn’t been confined strictly to the battlefield. Social media algorithms like TikTok’s have become a central part of the information war, surfacing clips of attacks for millions of people. These algorithms have proven to be a double-edged sword, amplifying misleading content like video game clips doctored to look like on-the-ground footage and bogus livestreams of invading forces.
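TikTok’s actual ranking system is proprietary, but the dynamic is easy to illustrate. In the toy sketch below (posts and scores are made up), the feed is ordered purely by predicted engagement, and nothing in the objective distinguishes authentic footage from a doctored clip:

```python
# Toy sketch of engagement-driven feed ranking: items are ordered purely
# by predicted engagement, so sensational or misleading clips that draw
# clicks can outrank accurate ones. Posts and scores are made up.
posts = [
    {"title": "Verified on-the-ground report",     "predicted_engagement": 0.04},
    {"title": "Doctored video-game 'war footage'", "predicted_engagement": 0.19},
    {"title": "Bogus invasion livestream",         "predicted_engagement": 0.12},
]

# Nothing in the ranking objective rewards accuracy; only engagement counts.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for rank, post in enumerate(feed, start=1):
    print(rank, post["title"])
```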

Meanwhile, Russian troll farms have used AI to generate human faces for fake, propagandist personas on Twitter, Facebook, Instagram, and Telegram. A campaign involving around 40 false accounts was recently identified by Meta, Facebook’s parent company, which said that the accounts mainly posted links to pro-Russia, anti-Ukraine content.

Some vendors have proposed other uses of the technology, like developing anomaly detection apps for cybersecurity and using natural language processing to identify disinformation. Snorkel AI, a data science platform, has made its services available for free to “support federal efforts” to “analyze signals and adversary communications, identify high-value information, and use it to guide diplomacy and decision-making,” among other use cases.
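To make the anomaly-detection idea concrete, here’s a minimal sketch using scikit-learn’s IsolationForest on synthetic per-connection traffic features. The features, numbers, and threshold are illustrative assumptions, not any vendor’s actual pipeline:

```python
# Minimal sketch: flagging anomalous network traffic with an unsupervised
# IsolationForest. The features and data here are synthetic placeholders,
# not any vendor's real pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical per-connection features: [bytes_sent, bytes_received, duration_s]
normal_traffic = rng.normal(
    loc=[5_000, 20_000, 30], scale=[1_000, 4_000, 10], size=(1_000, 3)
)

# Train only on traffic presumed benign; the model learns its shape.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new connections; -1 marks outliers worth an analyst's attention.
new_traffic = np.array([
    [5_200, 21_000, 28],   # looks ordinary
    [900_000, 150, 2],     # huge upload, tiny response: exfiltration-like
])
print(detector.predict(new_traffic))  # e.g. [ 1 -1 ]
```

The appeal of this family of techniques is that it needs no labeled attacks: the model only learns what ordinary traffic looks like and flags departures from it.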

Some in the AI community support the use of the technology to a degree, pointing, for example, to AI’s potential to strengthen cyber defense and enable denial-of-service attacks. But others decry its application in warfare, arguing that it sets a harmful, ethically problematic precedent.

“We urgently must identify the vulnerabilities of today’s machine learning … algorithms, which are now weaponized by cyberwarfare,” Lê Nguyên Hoang, an AI researcher who’s helping to build the open source video recommendation platform Tournesol, wrote on Twitter.

Kai-Fu Lee, it would seem, rightly predicted that AI would be the third revolution in warfare, after gunpowder and nuclear weapons. Autonomous weapons are one aspect, but AI also has the potential to scale data analysis, misinformation, and content curation beyond what was possible in earlier major conflicts.

As the Brookings Institution points out in a 2018 report, advances in AI are making synthetic media quick, cheap, and easy to produce. AI-generated audio and video disinformation — “deepfakes” — is already achievable with apps like Face2Face, which allows one person’s expressions to be mapped onto another person’s face in a video. Other tools can manipulate footage of any world leader, or even synthesize street scenes to appear in a different environment.
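Face2Face itself relies on dense 3D face reconstruction, which is far beyond a snippet. As a loose illustration of the underlying idea of aligning one face’s geometry to another, here’s a sketch that detects facial landmarks with MediaPipe and warps a source face into a target frame with OpenCV; the file names are placeholders, and this simple affine warp is not Face2Face’s actual method:

```python
# Toy illustration of landmark-based face alignment, a crude ancestor of
# expression-transfer systems like Face2Face (which actually relies on
# dense 3D face reconstruction, not this simple affine warp).
import cv2
import mediapipe as mp
import numpy as np

def face_landmarks(image_bgr):
    """Return an (N, 2) array of pixel-space facial landmark coordinates."""
    h, w = image_bgr.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
        result = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    # Assumes a face was found; MediaPipe returns the same 468 landmarks
    # in the same order for every face, so the two point sets correspond.
    pts = result.multi_face_landmarks[0].landmark
    return np.array([[p.x * w, p.y * h] for p in pts], dtype=np.float32)

# "source.jpg" / "target.jpg" are placeholder file names.
source = cv2.imread("source.jpg")
target = cv2.imread("target.jpg")

src_pts = face_landmarks(source)
dst_pts = face_landmarks(target)

# Estimate a rotation+scale+translation mapping between the two faces and
# warp the source face into the target frame's coordinate system.
matrix, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)
warped = cv2.warpAffine(source, matrix, (target.shape[1], target.shape[0]))
cv2.imwrite("warped.jpg", warped)
```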

Elsewhere, demonstrating AI’s analytics potential, geospatial data firm Spaceknow claims it was able to detect military activity in the Russian town of Yelnya beginning last December, including the movement of heavy equipment. The Pentagon’s Project Maven — to which Google controversially contributed expertise — taps machine learning to detect and classify objects of interest in drone footage.
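Project Maven’s models aren’t public. As a generic sketch of the detect-and-classify step such systems perform, here’s what off-the-shelf object detection looks like with torchvision’s pretrained Faster R-CNN. It recognizes everyday COCO categories rather than military equipment, and “frame.jpg” stands in for a drone or satellite frame; a real system would be trained on labeled overhead imagery:

```python
# Generic sketch of detect-and-classify on a single frame with a
# pretrained Faster R-CNN. COCO categories stand in for the military
# object classes a system like Project Maven would be trained on.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

# "frame.jpg" is a placeholder for a drone or satellite image frame.
frame = convert_image_dtype(read_image("frame.jpg"), torch.float)

with torch.no_grad():
    detections = model([frame])[0]  # boxes, labels, scores for one image

categories = weights.meta["categories"]
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score > 0.8:  # arbitrary confidence threshold
        print(f"{categories[label]}: {score:.2f} at {box.tolist()}")
```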

Reading the writing on the wall, the North Atlantic Treaty Organization (NATO) — which activated its Response Force for the first time last week as a defensive measure in response to Russia’s assault — launched an AI strategy and a $1 billion fund last October to develop new AI defense technologies. In a proposal, NATO emphasized the need for “collaboration and cooperation” among members on “any matters relating to AI for transatlantic defense and security,” including questions of human rights and humanitarian law.

AI tech in warfare, for better or worse, seems likely to become a fixture of conflicts beyond Ukraine. A critical mass of countries have thrown their weight behind it, including the U.S. — the Department of Defense (DoD) plans to invest $874 million this year in AI-related technologies as part of the Army’s $2.3 billion science and technology research budget.

The intractable challenge, then, will be ensuring — assuming it’s even possible — that AI is applied ethically in these circumstances. In an article for The Cove, the professional development platform for the Australian Army, Aaron Wright explores whether AI for war can, by definition, be ethical. He points out that the impact of weapons of war often isn’t fully understood until after the weapons themselves are deployed, noting that members of the Manhattan Project felt justified in their work to develop the nuclear bomb.

“Ultimately, whether one considers the use of AI in war to be ethical … [relies] upon your inherent optimism toward the field of AI,” he says. “You may take a utilitarian approach and consider all the lives saved by precise and calculated robot strikes without the loss of human soldiers, or take a virtue ethics approach and lament killer robots erasing humans from existence based upon how a sequence of numbers arranges inside their internal algorithms … As the use of AI on the battlefield cannot seemingly be prevented … careful and rigorous standards … are a required step to make AI for war ethical, but not a guarantee that it will be so.”

The DoD has taken a stab at this, offering guidelines for AI military technology contractors that recommend suppliers conduct harms modeling, address the effects of flawed data, establish plans for system auditing, and confirm that new data doesn’t degrade system performance.
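The DoD document doesn’t prescribe specific tooling. As a minimal sketch of that last recommendation, here’s an illustrative regression test that re-evaluates a model on a freshly labeled batch and fails loudly if accuracy slips past a tolerance; the baseline, tolerance, and model are hypothetical choices, not DoD-specified values:

```python
# Illustrative sketch of one guideline: verifying that newly collected
# data hasn't degraded model performance. The baseline and the 2-point
# tolerance are hypothetical choices, not DoD-specified values.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.91   # accuracy recorded at deployment time (assumed)
TOLERANCE = 0.02           # acceptable drop before auditors are alerted

def check_for_degradation(model, new_inputs, new_labels):
    """Re-evaluate on a fresh labeled batch; raise if performance slipped."""
    current = accuracy_score(new_labels, model.predict(new_inputs))
    if current < BASELINE_ACCURACY - TOLERANCE:
        raise RuntimeError(
            f"Model degraded: accuracy {current:.3f} vs. baseline "
            f"{BASELINE_ACCURACY:.3f}; trigger the audit plan."
        )
    return current
```

Whether contractors — and, perhaps more importantly, adversaries — adhere to these sorts of guidelines is another matter, however. So is whether the supposed advantage AI offers in warfare outweighs the consequences. Ukraine will yield some answers; we can only hope it does so with minimal casualties.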

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Senior Staff Writer
