
AI Weekly: Calls for facial recognition moratorium highlight need for protection from surveillance tech

The debate over whether to ban or place moratoriums on the use of facial recognition began last year in cities like San Francisco, and it reignited this week when Alphabet and Google CEO Sundar Pichai said he’s open to a moratorium on facial recognition.

“It [facial recognition regulation] can be immediate, but maybe there’s a waiting period before we really think about how it’s being used,” Pichai said. “It’s up to governments to chart the course [for the use of such technology].” He spoke in favor of a five-year moratorium the European Commission is considering in Brussels. Later in the week, at the World Economic Forum in Davos, he spoke in favor of AI regulation while emphasizing that AI needs to be used in order to improve.

Microsoft, on the other hand, doesn’t appear to like the idea of a delay in the use of facial recognition. Microsoft chief legal officer and president Brad Smith pointed to positive use cases like finding missing children, and seemed to contend that bans and moratoriums can stand in the way of progress. “There is only one way at the end of the day to make technology better, and that is to use it,” he said.

Microsoft rejected the idea of a moratorium in its home state of Washington last year, too. Unlike Google, Microsoft and Amazon are actively selling facial recognition services to businesses and governments. As those two tech giants move toward legitimizing the sale and proliferation of facial recognition software in Washington and around the world, about a dozen states are considering facial recognition regulation.

Facial recognition is a burgeoning industry, home to some of the best-funded AI startups in the world, and the AI industry as a whole is seeing huge growth. CB Insights and the National Venture Capital Association found that AI startups raised record amounts in 2019 in the U.S. and worldwide, and analysis in the 2019 AI Index shows that more than $70 billion was invested in artificial intelligence businesses last year, with facial recognition ranking high among areas of investment. Businesses are exploring use cases that range from patient or employee verification to analysis of job candidates in video interviews, but another call for a moratorium this week points to the technology’s use in surveillance.

The United Nations called for a moratorium on private surveillance technology worldwide following an investigation into the May 2018 hacking of Amazon CEO Jeff Bezos’ iPhone X. The investigation concluded that hackers gained access to the entirety of his phone’s data after Saudi Arabian crown prince Mohammed bin Salman sent him a malicious MP4 video via WhatsApp. The hack was allegedly part of an effort to influence the Washington Post’s coverage of Saudi Arabia. Five months later, Post columnist Jamal Khashoggi was killed by Saudi operatives.

PBS NewsHour’s Nick Schifrin asked UN special rapporteur on extrajudicial executions Agnès Callamard whether it’s too late for a moratorium, whether the spread of surveillance technology has already crossed the Rubicon. Callamard replied that the world has no choice but to try to control these technologies. “We have to rein it in, in the same way that we have tried and sometimes succeeded in reining in some of the weapons thought to be unlawful or illegal,” she said.

“We have here an example of the richest man on Earth with unlimited resources, and yet it took him several months to realize his phone was hacked, and it took three months by top-notch experts to uncover the source of the hacking. So this technology is a danger to all. It’s a danger to national security and democratic processes in the United States.”

Her point was underscored in an op-ed earlier this week asserting that a focus on bans or moratoriums on facial recognition misses the larger point: digital data brokerage firms operate with virtually no regulation today, and real change requires more comprehensive privacy regulation. Pressing pause can buy time, but it doesn’t resolve the root problem.

It’s ironic that Jeff Bezos, a man whose company sells facial recognition to governments in secret, is the chief victim of an alleged surveillance crime, committed by the head of a government, that is so severe it prompted the United Nations to say we must take immediate action.

To Callamard’s point, technology that threatens or challenges the principles underpinning democratic societies shouldn’t be allowed to spread without scrutiny.

Scholars like Shoshana Zuboff and Ruha Benjamin say surveillance capitalism is designed to disregard a person’s agency. If private companies can rush to create markets and make surveillance feel inevitable, the idea goes, people might just give up and forfeit their power. Passionate people studying the proliferation of facial recognition, like those who have testified before Congress in the past year, also say it’s often used by people in power to track, monitor, or extract value from those without power.

In response to government analysis documenting bias and to fears that oppressive surveillance can be used to control people, lawmakers in about a dozen U.S. states are currently considering some form of facial recognition regulation. A Congressional committee may soon propose legislation regulating the use of facial recognition software by government, law enforcement, or the private sector. Restrictions suggested in hearings last week include prohibiting use at protests or political rallies, so as not to stifle First Amendment rights, and requiring law enforcement to disclose when the technology was used to arrest or convict a suspect.

If federal policy like the kind being considered becomes law, facial recognition regulation could protect individual rights without the need for a moratorium. Standards for how the technology can be used, tests like the kind NIST performs and Microsoft endorses that allow third parties to verify vendors’ performance results, and limits on AI’s use in hiring practices and public places need to be defined and enforced.

In debates about how to regulate facial recognition, you almost always find the claim that regulation will stifle innovation. That’s a straw man argument: curbing the use of the technology in public settings does not necessarily stifle its development, since use in lab or limited settings can still lead to improvements.

People and their elected representatives can choose to delay a technology, but that’s not to say there won’t be consequences: What would happen if a market for facial recognition were allowed to grow only in China or other more permissive parts of the world?

But the question also needs to be asked: How could business and society change for the worse if no limits are placed on the use of facial recognition?

Some facial recognition systems have shown progress in performance, but analysis by the National Institute of Standards and Technology (NIST) shows inequity persists. At what point is a technology known to work best on middle-aged white men, and worse for virtually everyone else in society, considered a civil rights issue?

Microsoft may oppose a moratorium today, but back in summer 2018 it was the first of the tech giants to call for facial recognition software regulation.

A foundation like the kind Microsoft supported then and now — built on consent, third-party testing, and human review — seems like part of what’s needed to avoid a “race to the bottom.”

Recent statements by Smith and Pichai about the need to use technology for it to get better seem to suggest that society has to adapt to technology instead of the other way around. But much remains unanswered or unregulated about how facial recognition or surveillance technology can be used in society today, and the extraction of people’s data can fuel highly profitable predictive machines. The idea of a moratorium must remain on the table: people need to be shielded from surveillance and the potential infringement of their rights, and a pause would allow time to establish regulation, standards, and tests to measure results.

Action is necessary even if it seems too late to stop the spread of private surveillance, because things are moving fast in facial recognition right now, both for lawmakers anxious to take action and for businesses interested in deploying or selling the technology.

Last week, we learned that nearly a dozen states across the country want to regulate facial recognition, and learned more about efforts underway in Congress and the European Union. In the past day or two alone, we learned that the NYPD allegedly used the deeply invasive Clearview app, and that police in London are rolling out real-time facial recognition to track suspected criminals across closed-circuit cameras. We also heard both Microsoft CEO Satya Nadella and Alphabet’s Sundar Pichai speak in favor of global frameworks for facial recognition and AI, respectively.

Action is also needed soon, because there’s a lot of pressure around AI progress these days.

A Brookings Institution fellow went so far as to say the nation that leads in AI by 2030 will rule the world for the rest of the century. China plans to be that world leader by 2030, and tech experts advising Congress and the Pentagon call AI supremacy essential to the U.S. economy and military.

But if in the pursuit of innovation we lose the ability to shield citizens from private surveillance, we’ll lose a lot more than ground in the so-called AI arms race.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer


Author: Khari Johnson
Source: VentureBeat
