
When AI is a tool and when it’s a weapon

The immense capabilities artificial intelligence is bringing to the world would have been inconceivable to past generations. But even as we marvel at the incredible power these new technologies afford, we’re faced with complex and urgent questions about the balance of benefit and harm.

When most people ponder whether AI is good or evil, what they’re essentially trying to grasp is whether AI is a tool or a weapon. Of course, it’s both — it can help reduce human toil, and it can also be used to create autonomous weapons. Either way, the ensuing debates touch on numerous unresolved questions and are critical to paving the way forward.

Hammers and guns

When contemplating AI’s dual capacities, hammers and guns are a useful analogy. Regardless of their intended purposes, both hammers and guns can be used as tools or weapons. Their design certainly has much to do with these distinctions — you can’t nail together the frame of a house with the butt of a gun, after all — but in large part what matters is who’s wielding the object, what they plan to do with it, to whom or for whom, and why.

In AI, the gun-and-hammer metaphor applies neatly to two categories: autonomous military weapons and robotic process automation (RPA).

The prospect of AI-powered military-grade weapons can be terrifying. Like all advancements in weapons technology, their primary purpose is to kill people more efficiently, ostensibly while minimizing casualties on “our” side. Over centuries, humans have become increasingly distanced — literally and figuratively — from the direct impact of their weapons. The progression from swords to bows and arrows, muskets, rifles, mortars, bombers, missiles, and now drones represents technological advances that have moved us further and further away from our adversaries.

But humans have always been the ones releasing the arrow, pulling the trigger, or pressing the button. The question now is whether to give a killing machine decision-making power over who lives and who dies. That's a new line to cross, and it underscores the need for human-in-the-loop AI design.

A small consolation: Among the people most concerned about the specter of autonomous weapons, and about the most ethical ways to address them, are members of the Department of Defense (DoD). At an event in Silicon Valley in April 2019, the Defense Innovation Board (DIB) solicited wisdom around ethics and autonomous weapons from a collection of technologists, academics, retired military members, and activists. Recently, the DIB provided guidance to the DoD about ethics principles as they relate to both combat and non-combat AI systems, something DIB board chair and former Google CEO Eric Schmidt asserts can help lead to a national AI policy.

In contrast to military applications of AI, RPA is solidly hammer-like in that it's obviously a tool. It automates mundane and time-consuming tasks, freeing human workers to be more efficient and spend more time on critical work. Its rapid growth and massive market opportunity are arguably due in part to the fact that instead of disrupting and killing off legacy industries, as technological innovations often do, it can actually give them new life, helping them stay competitive and relevant.
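
To make the "tool" framing concrete, here is a minimal sketch of the kind of mundane task RPA products take off a worker's plate: consolidating a pile of spreadsheet exports into a single summary. The file names, column names, and output file are hypothetical, and real RPA platforms operate across whole applications and workflows rather than a single script.

```python
# Minimal sketch of an RPA-style task: consolidate hypothetical invoice
# exports (invoice_*.csv with "vendor" and "amount" columns) into one summary.
import csv
import glob
from collections import defaultdict

totals = defaultdict(float)

# Gather every export dropped into the working directory.
for path in glob.glob("invoice_*.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["vendor"]] += float(row["amount"])

# Write the consolidated report a person would otherwise compile by hand.
with open("vendor_totals.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["vendor", "total_amount"])
    for vendor, amount in sorted(totals.items()):
        writer.writerow([vendor, f"{amount:.2f}"])
```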

For companies, and even for individual workers and teams, RPA can be empowering. But the downside is that automation often eliminates existing jobs. Depending on the specific context, one might argue that a company could weaponize automation to gut its workforce, leaving throngs of workers adrift, their hard-won skills and experience suddenly obsolete.

Indeed, there's concern that in this particular cycle of innovation, in which old jobs are eliminated and new ones created, too many people are at risk of being left behind while the rich get richer. The most vulnerable include those currently in lower-paying jobs and people of color. These concerns are articulated in reports published in the Proceedings of the National Academy of Sciences (PNAS) and by MIT, PwC Global, McKinsey, and the AI Now Institute.

These are challenges that have in part driven VentureBeat's BLUEPRINT events, which look at the tech industry between the coasts and the unique challenges therein. The issue of automation comes up frequently at these events. Many of the new jobs that emerge after automation require learning what amounts to a trade rather than earning a degree in computer science. Sometimes those jobs are in the same field: autonomous trucks could displace truck drivers, for example, but someone will still need to be on board handling logistics and communications, a job a former trucker could move into with a modest amount of new training. A broad reskilling and upskilling effort can help displaced workers move into better jobs than the ones they had before.

Back and forth goes the power differential. Automation is a tool, is a weapon, is a tool.

In the murky middle

Between the extremes of worker-aiding automation and killer drones lies almost all the other AI technology, and the middle is murky, to say the least. That’s where debate about AI becomes most difficult — but also comes into greater focus.

More than any other AI technology, facial recognition has shown clearly how an AI tool can be perverted into a weapon.

It is true that there are many beneficial uses of facial recognition technology. It can be used to diagnose genetic disorders and to help screen for potential human trafficking. Law enforcement can use it to quickly track down and apprehend a terrorist or, as with the Detroit Police Department’s video surveillance program, easily locate suspects. There are perfectly neutral uses, too, like using it to augment a rich online shopping experience.

But even some of those applications of AI have a troubling ethical downside. In Detroit, even though the police chief seems as principled, transparent, and aware of potential abuses as one can be, the system is ripe for abuse. The New York Police Department has already abused its facial recognition technology to nab a suspect, for example. Even if such a system is never abused, though, and police only use it lawfully, citizens may still perceive it as a weapon. That perception alone can erode trust and cause people to live in fear.

The use of facial recognition in policing and sentencing, but also across other fields like hiring, is deeply problematic. We know, for instance, that facial recognition technology is often less accurate when applied to women and people of color, owing to models built on poor data sets. Training data may also contain biases that AI systems only magnify.
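
One way to see how skewed data shows up in practice is to break a model's error rate out by demographic group rather than reporting a single headline number. The sketch below assumes a hypothetical list of evaluation records, each holding a group label, the ground-truth match, and the model's prediction; the figures are invented for illustration and are not drawn from any real system.

```python
# Hedged sketch: per-group error rates for a hypothetical face matcher.
# A single aggregate accuracy figure would hide the gap between groups.
from collections import defaultdict

# (group, ground_truth_match, predicted_match) -- hypothetical evaluation data.
records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, True),
    ("group_a", False, False), ("group_b", True, False), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, False),
]

errors = defaultdict(int)
counts = defaultdict(int)

for group, truth, predicted in records:
    counts[group] += 1
    if truth != predicted:
        errors[group] += 1

for group in sorted(counts):
    rate = errors[group] / counts[group]
    print(f"{group}: error rate {rate:.0%} over {counts[group]} samples")
```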

And there are reasonable moral objections to the very existence of facial recognition software, objections that have led multiple U.S. cities to ban the technology. Senator Bernie Sanders (I-VT), a candidate for the Democratic presidential nomination, has called for a ban on police use of facial recognition software.

Of graver concern are the deeply alarming abuses of facial recognition by governments, such as the persecution of Uighur Muslims in China, which was enabled in part by Microsoft research and for which the company was publicly criticized. But Microsoft has also refused to sell facial recognition technology to law enforcement in California and to U.S. Immigration and Customs Enforcement (ICE). Amazon, by contrast, is on record as saying it will sell its Rekognition facial recognition technology to any government department (which would potentially include ICE) as long as that department is following the law.

All of the above calls into question the responsibility tech companies bear for selling facial recognition technology to governments.

In a session at Build 2019, Tim O'Brien, general manager of AI Programs at Microsoft, gave a slide presentation about how the company views AI ethics. It was mostly reassuring, because it showed Microsoft has been thinking hard about these issues and has drawn up principled internal guidelines. But during the Q&A session, O'Brien was asked to discuss issues around responsibility and regulation as they pertain to a company like Microsoft.

“There are four schools of thought,” he said. To paraphrase what he laid out, a company can take one of the following approaches:

  1. We’re a platform provider, and we bear no responsibility (for what buyers do with the technology we sell them)
  2. We’re going to self-regulate our business processes and do the right things
  3. We’re going to do the right things, but the government needs to get involved, in partnership with us, to build a regulatory framework
  4. This technology should be eradicated

Microsoft, he said, subscribes to the third approach. “Depending on who you talk to, either in the public sector, customers, human rights activists — depending on … the interested parties you talk to, they’ll have a different point of view,” he said. “But we just keep pushing that rock uphill to try to educate policymakers on the importance.”

On the one hand, that’s a pragmatic and responsible stance for a company to take. But on the other hand, does that mean Microsoft won’t even entertain the possibility that it shouldn’t create a technology just because it can? Because if so, the company is removing itself from a crucial debate about whether some technologies should exist at all. Technology companies need to not just participate in regulating the technologies they create; they need to consider whether some technologies should ever find their way out of the R&D lab in the first place.

Is the journey the destination?

Because AI technologies can feel so huge, powerful, and untamable, the challenges they introduce can feel intractable. But they aren’t. A pessimistic view is that we’re doomed to be locked in an arms race with inevitably severe geopolitical ramifications. But we aren’t.

Structurally speaking, humanity has always faced these kinds of challenges, and the playbook is the same as it ever was. Bad actors have always and will always find ways to weaponize tools. It’s incumbent on everyone to push back and rebalance. For today’s AI challenges, perhaps the best place to start is with Oren Etzioni’s Hippocratic Oath for AI practitioners — an industry take on the medical profession’s “do no harm” commitment. The Oath includes a pledge to put human concerns over technological ones, to avoid playing god (especially when it comes to AI capabilities that can kill), to respect data privacy, and to prevent harm whenever possible.

The recent book Tools and Weapons: The Promise and the Peril of the Digital Age, by Microsoft president Brad Smith and his senior director of external relations and communications, Carol Ann Browne, revisits watershed moments in the company's history and describes the problems and solutions around them. Smith's perspective is unique because his approach was less about the technologies themselves and more about the legal, ethical, moral, and practical concerns.

One anecdote in Tools and Weapons that stands out is when Microsoft ran into international legal issues with its data centers in Ireland. Essentially, the problem was about national sovereignty and data. What happens when one country wants to compel a tech company to turn over data that is stored in another country? “In some respects, it’s not a new issue,” wrote Smith. “For centuries, governments around the world agreed that a government’s power, including its search warrants, stopped at its border.” Smith wrote about the emergence of “mutual legal assistance treaties” (MLATs) that allowed for extradition and access to information across national borders — a way for countries to respect one another’s sovereignty while handling matters of significance, like criminal justice. But the people who crafted MLATs long ago could not have had any concept of cloud computing.

With data stored in the cloud — in data centers located in, say, Ireland on servers owned by Microsoft — the concept of international borders and access to that data was blown apart and left wide open for dangerous abuses. Smith wrote about how a law enforcement agency would try to bypass an MLAT by serving a tech company that was located in its jurisdiction, demanding data that was stored across an ocean in another country. Suddenly, tech companies were caught between sovereign governments.

Numerous laws that addressed related but not precisely applicable aspects of the problem, like wiretap laws, were simply insufficient to address this new technological advance. Lawmakers eventually created the Clarifying Lawful Overseas Use of Data (CLOUD) Act, which, Smith writes, “balanced the international reach for search warrants that the DoJ wanted with a recognition that tech companies could go to court to challenge warrants when there was a conflict of laws.” But it took years of work; two major lawsuits involving Microsoft; officials from at least four governments on three different continents; and all three branches of the U.S. government, including the U.S. Supreme Court and two presidents, to get it done.

Cloud computing is a strong example of what was a seemingly intractable new problem. Companies like Microsoft, alongside the governments of multiple nations, had to grapple with rethinking how cloud computing affected international borders, law enforcement jurisdictions, and property ownership and privacy rights of private citizens.

Though the CLOUD Act saga was particularly complex and protracted, the fundamental challenge of addressing new problems created by technological advances comes up multiple times throughout Tools and Weapons, around cybersecurity, the internet, social media, surveillance, opportunity gaps caused (and potentially solved) by technologies like broadband, and AI. In Smith’s retelling, the process of finding solutions was always similar. It required all stakeholders to act in good faith. They had to listen to concerns and objections, and they had to work together, often with rival companies — and in many cases, multiple international governments — to craft new laws and regulations. Sometimes the solutions required further technological innovations.

Unsurprisingly, Microsoft comes off favorably in Smith's recollections in Tools and Weapons, and the text doesn't provide a perfect playbook by any means. But it does serve as a reminder that the tech world has dealt with the same essential types of problems time and time again, and that people have worked hard and thoughtfully to find solutions.

Dealing with AI and its promises and problems requires moral outrage, political will, responsible design, careful regulation, and a means of holding the powerful accountable. Those in power — in AI, primarily the biggest tech companies in the world — need to act in good faith, be willing to listen, understand how and when tools can feel like weapons to people, and have the moral fortitude to do the right thing.


Author: Seth Colaner
Source: VentureBeat
