
If you’re worried about the end of privacy, don’t waste your outrage on Clearview AI

It’s easy to feel outrage at Clearview AI for building a facial recognition system trained on 3 billion images scraped without permission from sites like Google, Facebook, and LinkedIn, but the company should be only one of the targets of your ire. Pervasive surveillance capitalism is designed to make you feel helpless, but shaping AI regulation is part of citizenship in the 21st century, and you’ve got a lot of options.

On Tuesday, Senator Ed Markey (D-MA) sent a letter to Clearview AI demanding answers about its recent data breach, the billions of photos it scraped from the web without permission, and the sale of facial recognition to governments with poor human rights records, such as Saudi Arabia. That would be scandalous news for most companies, but not Clearview AI. For context, here’s what the past week looked like for the company:

News emerged Monday that Clearview AI is reportedly working on a security camera and augmented reality glasses equipped with facial recognition.

Following a data breach reported last Wednesday, we learned that Clearview AI’s client list includes more than 2,900 organizations, including governments and businesses from around the world. In all, it comprises businesses from 27 countries, including Walmart, Macy’s, and Best Buy, and hundreds of law enforcement agencies, from the FBI to ICE, Interpol, and the Department of Justice. Tech giants like Google and Facebook sent Clearview AI cease-and-desist letters last Tuesday.

Back in January, the New York Times’ Kashmir Hill, who first brought Clearview AI to people’s attention, reported that the company was working with more than 600 law enforcement agencies and a handful of private companies. But reporting last week brought the Clearview AI client list into sharper focus, along with the number of searches each client had run; in total, clients had made about 500,000 searches.

A breakdown of an APK version of the Clearview app, which Gizmodo found on a public AWS server the same day, suggests the company may add a voice search option in the future.

Clearview AI CEO Hoan Ton-That previously told multiple news outlets the company focuses on law enforcement clients in North America, but an internal document obtained by BuzzFeed News shows government, law enforcement, and business clients around the world.

Everything we’ve learned about Clearview in the past week gives credence to the New York Times’ claim in January that the company might end privacy, and to VentureBeat news editor Emil Protalinski’s assessment that Clearview is on a “short slippery slope.”

If what Clearview AI did and continues to do makes you angry, then you’re probably with the majority of people who lack understanding of data privacy law and feel they have little to no control over how businesses and governments collect or use their personal data.

If you believe privacy is a right that deserves protection in an increasingly digital and AI-driven world, don’t aim your anger at the Peter Thiel-backed company itself. The way it operates may be insensitive or even horrifying, but save your questions for the businesses and governments working with Clearview AI. People deserve answers to the kinds of questions Senator Markey asks about the extent of the data breach and Clearview’s business practices, but people should also question policy that enables Clearview to exist.

Because Clearview AI doesn’t matter as much as the public’s response to how those in positions of power choose to use Clearview’s technology.

What AI regulation looks like

Clearview AI is not the only company inciting fear and outrage. In the past week or so, everyone from Elon Musk to Pope Francis has called for AI regulation.

In addition to the Clearview AI story, we also learned more recently about NEC, a company that started research into facial recognition in 1989. One of the largest private providers of facial recognition in the world, NEC has more than 1,000 clients in 70 countries, including Delta, Carnival Cruise Line, and public safety officials in 20 U.S. states.

The EU is considering a pan-European facial recognition network, while cities like London, which has the most CCTV cameras of any city outside China, are launching live facial recognition technology that makes it possible to track an individual across a web of closed-circuit cameras.

In a very different set of developments, last Thursday we learned more about how U.S. Immigration and Customs Enforcement (ICE) uses facial recognition software. The Washington Post reported that ICE has been searching a database of immigrant driver’s licenses without obtaining a warrant. This practice may terrorize immigrants and their families, put more people at risk by increasing the number of unlicensed drivers on the road, and deter immigrants from reporting crimes.

In the past month or so, the White House and European Union have attempted to define what AI regulation should look like. Meanwhile, lawmakers in about a dozen states are currently considering facial recognition regulation, Georgetown Law’s Center on Privacy & Technology said earlier this year.

But defining AI regulation isn’t something tech giants or machine learning practitioners should work out on their own. It’s up to ordinary people to recognize that, as Microsoft CTO Kevin Scott said, understanding AI is part of citizenship in the 21st century, and there are many ways to influence change.

Ways to respond

Clearview AI and tech giants with unprecedented power and resources — like Amazon and Microsoft — want to establish a market for the sale of facial recognition software to governments.

These companies are trading in a surveillance capitalism market with the potential to suppress fundamental rights and exacerbate over-policing and discrimination. This is all the more concerning after NIST’s December 2019 study of nearly 200 facial recognition algorithms found widespread bias, including a higher likelihood of misidentifying Asian American and African American people.

That’s a lot to take in, and outrage is understandable, but it’s important to not give in to despair. Experts like Shoshana Zuboff and Ruha Benjamin argue that making people feel helpless is the point of surveillance capitalism.

We’re living on the verge of a COVID-19 pandemic, we just saw the largest stock market drop since 2008, and climate change remains an existential threat. But we still have a lot of options when it comes to shaping AI regulation:

If you live in California, under the new California Consumer Privacy Act (CCPA), you can send an email to privacy-requests@clearview.ai to request a copy of the data the company is collecting about you and ask it to stop. Vice reporter Anna Merlan and colleague Joseph Cox sent such a request to Clearview AI. About a month after supplying the company with a photo for a search, Merlan last week received a cache of about a dozen photos of herself that had been published online between 2004 and 2019. Clearview told her the images were scraped from websites, not social media, and agreed to ensure those images no longer appear in Clearview AI search results.

Is the New York Times right? Is Clearview AI going to make it impossible to walk down the street in anonymity? Is it the end of privacy? That’s up to you.


Author: Khari Johnson.
Source: VentureBeat
