
AI Weekly: A biometric surveillance state is not inevitable, says AI Now Institute

In a new report called “Regulating Biometrics: Global Approaches and Urgent Questions,” the AI Now Institute says that there’s a growing sense among regulation advocates that a biometric surveillance state is not inevitable.

The release of AI Now’s report couldn’t be more timely. As the pandemic drags on into the fall, businesses, government agencies, and schools are desperate for solutions that ensure safety. From tracking body temperatures at points of entry to issuing health wearables to employing surveillance drones and facial recognition systems, there’s never been a greater impetus for balancing the collection of biometric data with rights and freedoms. Meanwhile, there’s a growing number of companies selling what seem to be rather benign products and services that involve biometrics, but that could nonetheless become problematic or even abusive.

The trick of surveillance capitalism is that it’s designed to feel inevitable to anyone who would deign to push back. That’s an easy illusion to pull off right now, at a time when the reach of COVID-19 continues unabated. People are scared and will reach for a solution to an overwhelming problem, even if it means acquiescing to a different one.

When it comes to biometric data collection and surveillance, there’s tension and often a lack of clarity around what’s ethical, what’s safe, what’s legal — and what laws and regulations are still needed. The AI Now report methodically lays out all of those challenges, explains why they’re important, and advocates for solutions. Then it gives shape and substance to them through eight case studies that examine biometric surveillance in schools, police use of facial recognition technologies in the U.S. and U.K., national efforts to centralize biometric information in Australia and India, and more.

There’s a certain responsibility incumbent on everyone — not just politicians, entrepreneurs, and technologists, but all citizens — to acquire a working understanding of the sweep of issues around biometrics, AI technologies, and surveillance. This report serves as a reference for the novel questions that continue to arise. It would be an injustice to the 111-page document and its authors to summarize the whole of it in a few hundred words, but several broad themes stand out.

The laws and regulations about biometrics as they pertain to data, rights, and surveillance are lagging behind the development and implementation of the various AI technologies that monetize them or use them for government tracking. This is why companies like Clearview AI proliferate — what they do is offensive to many, and may be unethical, but with some exceptions it’s not illegal.

Even the very definition of what biometric data is remains unsettled. There’s a big push to pause these systems while we create new laws and reform or update others — or ban the systems entirely because some things shouldn’t exist and are perpetually dangerous even with guardrails.

There are practical considerations that can shape how average citizens, private companies, and governments understand the data-powered systems that involve biometrics. For example, the concept of proportionality requires that “any infringement of privacy or data-protection rights be necessary and strike the appropriate balance between the means used and the intended objective,” the report says, and that a “right to privacy is balanced against a competing right or public interest.”

In other words, the proportionality principle raises the question of whether a given situation warrants the collection of biometric data at all. Another layer of scrutiny is purpose limitation, which guards against “function creep” — making sure data use doesn’t extend beyond the original intent.

One example the report gives is the use of facial recognition in Swedish schools to track student attendance. The Swedish Data Protection Authority eventually banned the practice on the grounds that facial recognition was too invasive for the task — it was disproportionate. And surely there were concerns about function creep: such a system captures rich data on a lot of children and teachers. What else might that data be used for, and by whom?

This is where rhetoric around safety and security becomes powerful. In the Swedish school example, it’s easy to see how that use of facial recognition doesn’t hold up to proportionality. But when the rhetoric is about safety and security, it’s harder to push back. If the purpose of the system is not taking attendance, but rather scanning for weapons or looking for people who aren’t supposed to be on campus, that’s a very different conversation.

The same holds true of the need to get people back to work safely and to keep returning students and faculty on college campuses safe from the spread of COVID-19. People are amenable to more invasive and extensive biometric surveillance if it means maintaining their livelihood with less danger of becoming a pandemic statistic.

It’s tempting to default to the simplistic position that more surveillance means more safety, but under scrutiny and in real-life situations, that logic falls apart. First of all: More safety for whom? If refugees at a border have to submit a full suite of biometric data, or civil rights advocates are subjected to facial recognition while exercising their right to protest, is that keeping anyone safe? And even if there is some need for safety in those situations, the downsides can be dangerous and damaging, creating a chilling effect. People fleeing for their lives may balk at those conditions of asylum. Protesters may be afraid to exercise their right to protest, which hurts democracy itself. Or schoolkids could suffer under the constant psychological burden of being reminded that their school is a place full of potential danger, which hampers mental well-being and the ability to learn.

A related problem is that regulation may happen only after these systems have been deployed, as the report illustrates with the case of India’s controversial Aadhaar biometric identity project. The report describes it as “a centralized database that would store biometric information (fingerprints, iris scans, and photographs) for every individual resident in India, indexed alongside their demographic information and a unique twelve-digit ‘Aadhaar’ number.” The program ran for years without proper legal guardrails. In the end, instead of using new regulations to roll back the system’s flaws or dangers, lawmakers essentially fashioned the law to fit what had already been done, thereby encoding the old problems into law.

And then there’s the issue of efficacy, or how well a given measure works and whether it’s helpful at all. You could fill entire tomes with research on AI bias and examples of how, when, and where those biases cause technological failures and result in abuse of the people upon whom the tools are used. Even when models are benchmarked, the report notes, those scores may not reflect how well those models perform in real-world applications. Fixing bias problems in AI, at multiple levels of data processing, product design, and deployment, is one of the most important and urgent challenges the field faces today.

One of the measures that can mitigate the errors AI produces is keeping a human in the loop. In the case of biometric scanning like facial recognition, such systems are meant to provide leads: officers run images against a database, and humans then chase down the results. But these systems often suffer from automation bias, which is when people rely too much on the machine and overestimate its credibility. That defeats the purpose of having a human in the loop in the first place and can lead to horrors like false arrests, or worse.
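To make that design concrete, here is a minimal, hypothetical sketch of a lead-generation step that keeps the final judgment with a human reviewer. The Candidate type, the leads_for_review function, and the 0.90 threshold are illustrative assumptions for this example, not any real vendor’s API:

```python
# Illustrative sketch only — hypothetical names, not a real facial recognition system.
from dataclasses import dataclass


@dataclass
class Candidate:
    record_id: str      # identifier of a record in the database
    similarity: float   # match score returned by a hypothetical face-matching model


def leads_for_review(candidates, threshold=0.90, max_leads=5):
    """Return a short list of candidates as leads for a human analyst to verify.

    A score above the threshold is a reason to look closer, never an
    identification by itself — treating it as one is automation bias.
    """
    ranked = sorted(candidates, key=lambda c: c.similarity, reverse=True)
    return [c for c in ranked if c.similarity >= threshold][:max_leads]


# Example: the analyst, not the score, makes the final call.
for lead in leads_for_review([Candidate("A-102", 0.94), Candidate("B-871", 0.88)]):
    print(f"Flag record {lead.record_id} (similarity {lead.similarity:.2f}) for manual review")
```

The threshold and the cap exist only to surface a short list for verification; the moment a reviewer treats a high similarity score as an identification in itself, automation bias has already set in.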

There’s a moral aspect to considering efficacy, too. For example, many AI companies purport to be able to determine a person’s emotions or mental state by using computer vision to examine their gait or their face. Some believe the very question these tools claim to answer is itself immoral, or that it is simply impossible to answer accurately. Taken to the extreme, this results in absurd research that’s essentially AI phrenology.

And finally, none of the above matters without accountability and transparency. When private companies can collect data without anyone knowing or consenting, when contracts are signed in secret, when proprietary concerns take precedence over demands for auditing, when laws and regulations between states and countries are inconsistent, and when impact assessments are optional, these crucial issues and questions go unanswered. And that’s not acceptable.

The pandemic has served to show the cracks in our various governmental and social systems and has also accelerated both the simmering problems therein and the urgency of solving them. As we go back to work and school, the biometrics issue is front and center. We’re being asked to trust biometric surveillance systems, the people who made them, and the people who are profiting from them, all without sufficient answers or regulations in place. It’s a dangerous tradeoff. But you can at least understand the issues at hand, thanks to the AI Now Institute’s latest report.


Author: Seth Colaner
Source: VentureBeat
