
AI Weekly: The argument in favor of regulation in 2021

2020 was an eventful year, not least because of a pandemic that shows little sign of abating. The AI research community experienced its own tribulations, capped off by Google’s firing of ethicist Timnit Gebru and a spat over AI ethics and “cancel culture” involving retired University of Washington professor Pedro Domingos. Facebook chief AI scientist Yann LeCun quit (and rejoined) Twitter following an acrimonious debate about the origin of bias in AI models. And companies like Clearview AI, Palantir, and license-plate-scanning Rekor expanded their reach to curry favor — and business — with law enforcement agencies. All the while, AI systems failed to avoid disadvantaging (and in some cases actively disadvantaged) certain groups, whether they were charged with moderating content, predicting U.K. students’ grades, or cropping images in Twitter timelines.

With 2020 in the rearview mirror and New Year’s resolutions on our minds, I believe the AI community would do well to consider the proposal Zachary Lipton, an assistant professor at Carnegie Mellon University, put forth earlier this year. He advocated for a one-year, industry-wide moratorium on studies to encourage “thinking” as opposed to “sprinting/hustling/spamming” toward deadlines. “Greater rigor in exposition, science, and theory are essential for both scientific progress and fostering a productive discourse with the broader public,” Lipton cowrote in a meta-analysis with the University of California, Berkeley’s Jacob Steinhardt. “Moreover, as practitioners apply [machine learning] in critical domains such as health, law, and autonomous driving, a calibrated awareness of the abilities and limits of [machine learning] systems will help us to deploy [machine learning] responsibly.”

Lipton and Steinhardt’s advice went unheeded by researchers at the Massachusetts Institute of Technology, the California Institute of Technology, and Amazon Web Services, who in a paper published in July proposed a method for measuring the bias of facial analysis algorithms that one critic described as “high-tech blackface.” Another study this year, coauthored by scientists affiliated with Harvard and Autodesk, sought to create a “racially balanced” database capturing a subset of LGBTQ people but conceived of gender in a way that’s not only contradictory but dangerous, according to University of Washington AI researcher Os Keyes. More alarming was the announcement in August of a study on Indiana parolees that seeks to use AI to predict recidivism, despite evidence that recidivism prediction algorithms reinforce racial and gender biases.

In conversations with VentureBeat’s Khari Johnson last year, Nvidia machine learning research director Anima Anandkumar, Facebook’s Soumith Chintala (who created the AI framework PyTorch), and IBM Research director Dario Gil predicted that finding ways for AI to better reflect the kind of society people want to build would become a front-and-center issue in 2020. They also expected the AI community to tackle head-on issues of representation, fairness, and data integrity, and to ensure that the datasets used to train models account for different groups of people.

This hasn’t come to pass. With researchers criticizing Google over its opaque (and censorial) research practices, firms commercializing models whose training contributes to carbon emissions, and problematic language systems making their way into production, 2020 was in many ways a year of regression rather than progression for the AI community. But at the regulatory level, there’s hope for righting the ship.

In the wake of the Black Lives Matter movement, an increasing number of cities and states have expressed concerns about facial recognition technology and its applications. Oakland and San Francisco in California, along with Somerville, Massachusetts, are among the metros where law enforcement is prohibited from using facial recognition. In Illinois, companies must get consent before collecting biometric information of any kind, including facial images. New York recently passed a moratorium on the use of biometric identification in schools until 2022, and lawmakers in Massachusetts have advanced a suspension of government use of any biometric surveillance system within the commonwealth. More recently, Portland, Maine, approved a ballot initiative banning the use of facial recognition by police and city agencies.

In a related development, the European Commission earlier this year proposed the Digital Services Act, which, if passed, would oblige companies to reveal information about how their algorithms work. Platforms with over 45 million users in the European Union would have to offer at least one content recommendation option “not based on profiling,” and fines for failing to comply with the rules could reach up to 6% of a company’s annual revenue.

These examples aren’t meant to suggest that the whole of the AI research community disregards ethics and thus requires reining in. For instance, this year will mark the fourth annual Association for Computing Machinery Conference on Fairness, Accountability, and Transparency, which will feature, among other works, Gebru’s research on the impacts of large language models. Rather, recent history has shown that AI productization, guided by regulation such as moratoriums and transparency legislation, fosters more equitable applications of AI than might otherwise have been pursued. Case in point: facing pressure from lawmakers, Amazon, IBM, and Microsoft agreed to halt or end sales of facial recognition technology to police.

The interests of shareholders — and even academia — will often be at odds with the welfare of the disenfranchised. But the rise of legal remedies to curb abuses and misuses of AI shows a growing weariness with the status quo. In 2021, it’s not unreasonable to expect the trend to continue and the AI community to be forced to (or to preemptively) fall in line. For all the failures 2020 held, with any luck it laid the groundwork for shifts in thinking about AI and its effects.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform
  • networking features, and more



Author: Kyle Wiggers
Source: VentureBeat

