DeepMind scientist calls for ethical AI as Google faces ongoing backlash

Raia Hadsell, a research scientist at Google DeepMind, believes “responsible AI is a job for all.” That was her thesis during a talk today at the virtual Lesbians Who Tech Pride Summit, where she dove into the issues currently plaguing the field and the actions she feels are required to ensure AI is ethically developed and deployed.

“AI is going to change our world in the years to come. But because it is such a powerful technology, we have to be aware of the inherent risks that will come with those benefits, especially those that can lead to bias, harm, or widening social inequity,” she said. “I hope we can come together as a community to build AI responsibly.”

AI approaches are algorithmic and general, which means they’re inherently multi-use. On one side there are promises of curing diseases and unlocking a golden future, and on the other, unethical approaches and dangerous use cases that are already causing harm. With a lot on the line, how to approach these technologies is rarely clear.

Hadsell emphasized that while regulators, lawyers, ethicists, and philosophers play a critical role, she’s particularly interested in what researchers and scientists can actively do to build responsible AI. She also detailed some of the resistance she’s met within the research community and the changes she’s helped bring to life thus far.

Data, algorithms, and applications

The issues plaguing AI are well-known, but Hadsell gave an overview of their roots in data, algorithms, and applications.

Data, for one, is the lifeblood of modern AI, which is mostly based on machine learning. Hadsell said the ability to use these datasets built on millions or billions of human data points is “truly a feat of engineering,” but one with pitfalls. Societal bias and inequity are often encoded in data and then exacerbated by the AI models trained on it. There are also issues of privacy and consent, which she said “have too often been compromised by the irresponsible enthusiasm of a young PhD student.”

Hadsell also brought up the issue of deepfakes, noting that the same algorithm used to create them is also used for weather prediction. “A lot of the AI research community works on fundamental research, and that can appear to be a world apart from an actual real-world deployment of that research,” said Hadsell, whose own research currently focuses on solving the fundamental challenges of robotics and other control systems.

Changing the culture

During the event, Hadsell recalled talking to a colleague who had written up a paper about their new algorithm. When asked to discuss the possible future impacts of the research, the colleague replied that they “can’t speculate about the future” because they’re a scientist, not an ethicist.

“Now wait a minute, your paper claims that your algorithm could cure cancer, mitigate climate change, and usher in a new age of peace and prosperity. Maybe I’m exaggerating a bit, but I think that that proves you can speculate about the future,” Hadsell said.

This interaction wasn’t a one-off. Hadsell said many researchers just don’t want to discuss negative impacts, and she didn’t mince words, adding that they “tend to reject responsibility and accountability for the broader impacts of AI on society.” The solution, she believes, is to change the research culture to ensure checks and balances.

A reckoning at NeurIPS

NeurIPS is the largest and most prestigious AI conference in the world, yet despite exponential growth in the number of attendees and papers submitted over the past decade, no ethical guidelines were provided to authors prior to 2020. What’s more, papers were evaluated strictly on technical merit, without consideration of ethical questions.

So when Hadsell was invited to be one of four program chairs tasked with designing the review process for the 10,000 papers expected last year, she initiated two changes. One was recruiting a pool of ethical advisors to give informed feedback on papers deemed controversial. The other was requiring every author to submit a broader impact statement with their work, discussing the potential positive and negative future impacts, as well as any possible mitigations.

This idea of an impact statement isn’t new — it’s actually a common requirement in other scientific fields like medicine and biology — but this change didn’t go over well with everyone. Hadsell said she “didn’t make a lot of friends” and there were some tears, but later some authors reached out to say it was a valuable experience and even inspired new directions for research. She added there’s also been an uptick in conferences requiring such statements.

“Adding the broader impact statement to a few thousand papers is not quite enough to change the culture towards responsible AI. It’s only a start,” Hadsell said. She also noted that there’s a danger these reviews will become “tick-box formalities” rather than an honest examination of the risks and benefits of each new technological innovation. “So we need to keep the integrity and build onwards, from broader impact statements to responsible AI.”

Walking the walk

Before Hadsell’s talk even began, there was an elephant in the room. Google, which has owned the prestigious DeepMind lab since 2014, doesn’t have the best track record with ethical AI. The issue has been especially front and center since December when Google fired Timnit Gebru, one of the best-known AI researchers and co-lead of its AI ethics team, in what thousands of the company’s employees called a “retaliatory firing.” Gebru says she was fired over email after refusing to rescind research about the risks of deploying large language models. Margaret Mitchell, the other co-lead on the ethics team, was fired as well.

Attendees dropped questions on the topic into the chat as soon as Hadsell’s talk began. “How can you build a culture of accountability and responsibility if voices speaking on the topics of AI ethics and [the] negative impact of Google’s research into AI algorithms (like Timnit Gebru) are rejected?” asked one attendee. Another acknowledged that Hadsell works in a different part of the company, but still asked for her thoughts on the firing.

Hadsell said she didn’t have any additional information or insights other than what’s already been made public. She added, “What I will say is that at DeepMind, we are, you know, really concerned with making sure the voices we have in the community and internally, and the publications that we write and put out, express our diversity and all of the different voices at DeepMind. I believe it’s important for everyone to have the chance to speak about the ethics of AI and about risks, regardless of Google’s algorithm.”

Author: Sage Lazzaro
Source: VentureBeat
