Google’s new AI ethics lead calls for more ‘diplomatic’ conversation

Following months of internal conflict and opposition from Congress and thousands of Google employees, Google today announced that it will reorganize its AI ethics operations and place them in the hands of VP Marian Croak, who will lead a new center of expertise on responsible AI research and engineering.

The blog post and six-minute video interview with Croak that Google released today to announce the news make no mention of former Ethical AI team co-lead Timnit Gebru, whom Google fired abruptly in late 2020, or of Ethical AI lead Margaret “Meg” Mitchell, who a Google spokesperson told VentureBeat was placed under internal investigation last month.

The release also makes no mention of steps to address the need to “rebuild trust” that members of Google’s Ethical AI team have called for. Multiple members of the team said they learned of the change in leadership from a Bloomberg report published late Wednesday evening.

“Marian is a highly accomplished trailblazing scientist that I had admired and even confided in. It’s incredibly hurtful to see her legitimizing what Jeff Dean and his subordinates have done to me and my team,” Gebru told VentureBeat.

In the video, Croak discusses self-driving cars and techniques for diagnosing diseases as potential areas of focus, but makes no mention of large language models. A recent piece of AI research citing a cross-section of experts concluded that companies like Google and OpenAI have only a matter of months to set standards for addressing the negative societal impacts of large language models.

In December, Gebru was fired after she sent an email to colleagues advising them to no longer participate in diversity data collection efforts. A paper she was working on at the time criticized large language models, like the kind Google is known for producing, for harming marginalized communities and for tricking people into believing that models trained on massive corpora of text represent genuine progress in language understanding.

In the weeks following her firing, members of the Ethical AI team also called for the reinstatement of Gebru in her previous role. More than 2,000 Googlers and thousands of other supporters signed a letter in support of Gebru and in opposition to what the letter calls “unprecedented research censorship.” Members of Congress who have proposed legislation to regulate algorithms also raised a number of questions about the Gebru episode in a letter to Google CEO Sundar Pichai. Earlier this month, news emerged that two software engineers resigned in protest over Google’s treatment of Black women like Gebru and former recruiter April Curley.

In today’s video and blog post about the change, Croak said that people need to understand that the fields of responsible AI and ethics are new, and she called for a more conciliatory tone in conversations about the ways AI can harm people. Google created its AI ethics principles in 2018, shortly after thousands of employees opposed the company’s participation in the U.S. military’s Project Maven.

“So there’s a lot of dissension, there’s a lot of conflict in terms of trying to standardize a normative definition of these principles and whose definition of fairness and safety are we going to use, and so there’s quite a lot of conflict right now in the field, and it can be polarizing at times, and what I’d like to do is just have people have a conversation in a more diplomatic way perhaps so we can truly advance this field,” Croak said.

Croak said the new center will work internally to assess AI systems that are being deployed or designed, then “partner with our colleagues and PAs [product areas] and mitigate potential harms.”

The Gebru episode led some AI researchers to pledge that they would not review papers from Google Research until changes were made. Shortly after Google fired Gebru, Reuters reported that the company had asked its researchers to strike a positive tone when addressing issues referred to internally as sensitive topics.

Croak’s appointment marks the latest controversial development at the top of the AI ethics ranks at Google Research and DeepMind, which Google acquired in 2014. Last month, a Wall Street Journal report found that DeepMind cofounder Mustafa Suleyman was removed from management duties in 2019, before leaving the company, due to his bullying of coworkers. Suleyman also served as head of ethics at DeepMind, where he discussed issues like climate change and health care. Months later, Google hired Suleyman for an advisory role on matters of policy and regulation.

How Google conducts itself when it comes to using AI responsibly and defending against forms of algorithmic oppression is immensely important, not only because AI adoption is growing in business and society but also because Google is a world leader in producing published AI research. A study published last fall found that Big Tech companies fund AI ethics research in a way that’s analogous to how Big Tobacco companies funded health research decades ago.

VentureBeat has reached out to Google with questions about steps to reform internal practices, issues raised by Google employees, and a number of other matters. This story will be updated if we hear back.

Author: Khari Johnson
Source: VentureBeat
