AI ethics champion Margaret Mitchell on self-regulation and ‘foresight’

Ethics and artificial intelligence have become increasingly intertwined due to the pervasiveness of AI. But researchers, creators, corporations, and governments still face major challenges if they hope to address some of the more pressing concerns around AI’s impact on society.

Much of this comes down to foresight — being able to adequately predict what problems a new AI product, feature, or technology could create down the line, rather than focusing purely on short-term benefits.

“If you do believe in foresight, then it should become part of what you do before you make the product,” AI researcher and former Googler Margaret Mitchell said during a fireside chat at VentureBeat’s Transform 2021 event today. “I think right now, AI ethics is at a stage where it’s seen as the last thing you do, like a policing force or a block to launch. But if you’re taking it seriously, then it needs to be hand in hand with development as a tech-positive thing to do.”

Google fired Margaret Mitchell from her role as Ethical AI lead in February, roughly two months after firing her co-lead Timnit Gebru. Accusations of research censorship and “retaliatory firings” abounded in the weeks that followed. While Mitchell said she was initially devastated about losing her job, she soon realized there was significant demand for her skills as an AI ethics researcher.

“From the time Timnit was fired until I was fired was general devastation,” Mitchell said. “And then upon being fired, it was not better. But it became clear to me that there was a very real interest in AI ethics. It made me realize that there were regulators who really needed help with the technical details of AI, that I could for the first time actually help and work with, that there were tons of companies that really wanted to start operationalizing details of AI ethics and bias and fairness and didn’t really know how to do it. It became a bit of an eye-opener, that there are a lot of opportunities right now.”

Self-regulation

Google, which releases all manner of AI-powered tools and features — from facial recognition for photo organization to smart replies for YouTube comments — has had to address growing societal concerns around AI. Although it has been embroiled in ethics controversies of late, in 2018 the company unveiled seven principles to guide its approach to AI development. And with more proposed AI regulations emerging to address the perceived threats posed by intelligent machines, it makes sense for big companies to proactively embed ethics into their AI product development ethos before external forces intervene. Just yesterday, the U.S. House Judiciary Committee held a hearing on facial recognition technology that included a look at the proposed Facial Recognition and Biometric Technology Moratorium Act, which seeks to ban government use of biometric technology in law enforcement.

The question centers on government restrictions versus corporate self-regulation.

“I came to a point in my career at Google where I realized that as we moved closer to dealing with regulation externally, we were really well-positioned to do self-regulation internally and really meet external regulation with nitty-gritty details of what it actually meant to do these higher-level goals that regulation put forward,” Mitchell explained. “And that’s in the benefit of the company because you don’t want regulation to be disconnected from technology in a way that [rules] stymie innovation or they end up creating the opposite of what they’re trying to get at. So it really seemed like a unique opportunity to work within the company, figuring out the basics of what it meant to do something like self-regulation.”

An AI ethics practitioner might find it easier to influence product design if they are deeply embedded inside the company. But there are clear tensions at play if — for example — an employee’s recommendations are seen as a threat to the company’s bottom line.

“I came into Google really wanting to work on hard problems, and this is definitely a hard problem, in part because it can push against the idea of making profit, for example,” Mitchell said. “And so that creates a natural tension, [but] at the same time it’s possible to do really meaningful research on AI ethics when you can be there in the company, understanding the ins and outs of how products are created. If you ever want to create some sort of auditing procedure, then really understanding — from end to end — how machine learning systems are built is really important.”

But as Mitchell and Gebru’s respective dismissals demonstrate, individuals working to reduce bias and help AI system designers embed ethics into their creations often face an uphill battle.

“I think [the firing] points to a lot about diversity and inclusion actually,” Mitchell said. “I try and tell people that if you want to include, don’t exclude. And here you have a very prime example of exclusion, and a certain amount of immaturity I would say, that speaks to a culture that really isn’t embracing the ideas of people who are not as similar to everyone else. I think it’s always an issue that one is concerned about if you have marginalized characteristics, if you’ve experienced the experiences of women in tech. But I think that it really came to bite me when I was fired, just how much of an outsider I was treated as, and I don’t think it would have been like that if I was part of the game in the same way that a lot of my male colleagues were.”

Idealistic

Mitchell argues that many tech companies and technologies have been built for an idealized future, or the idea that being able to do something would be “very, very cool.” But this thinking, she said, is usually devoid of the social context of how people, governments, or other companies actually use or misuse the technology. Thus, companies tend to hire people not so much for their personal experiences or views on how technology might impact the world in 10 years as for their fit with short-term business goals.

“It tends to be a very sort of pie in the sky positive view that runs into a kind of myopia about the realities of how things evolve over time,” Mitchell said.

This sentiment was echoed in a recent study that found few major research papers properly addressed the ways AI could negatively impact the world. The findings, which were published by researchers from Stanford University; the University of California, Berkeley; the University of Washington; and University College Dublin & Lero, showed dominant values were “operationalized in ways that centralize power, disproportionally benefiting corporations while neglecting society’s least advantaged,” as VentureBeat wrote at the time.

Mitchell added that hiring a more diverse AI research workforce can help counter this. “I definitely found that people with marginalized characteristics — so people who had experienced discrimination — had a deeper understanding of the kinds of things that could happen to people negatively and the way the world works in a way that was a bit less rosy,” she said. “And that really helps to inform the longer-term view of what would happen over time.”

Communicate

One of the challenges of working at any big company is two-way communication: broadcasting orders down the chain is easy enough, but how do you facilitate the upward feedback that is integral to ethical AI research?

“A lot of companies are very hierarchical, and technology is no exception, where communication can flow top-down, but [it’s harder] communicating bottom-up to help the company understand that if they release this [new feature] they’re going to get in trouble for that,” Mitchell said. “The lack of two-way communication, I think largely due to the hierarchical structure, can really hinder moving tech forward in a way that is well-informed by foresight and social issues.”

Author: Paul Sawers
Source: VentureBeat
