AI & RoboticsNews

Google AI ethics co-lead Timnit Gebru says she was fired over an email

Timnit Gebru, one of the best-known AI researchers today and co-lead of an AI ethics team at Google, no longer works at the company. She was featured in Google promotional material as recently as May. According to Gebru, she was fired Wednesday for sending an email to “non-management employees that is inconsistent with the expectations of a Google manager.” She said Google AI employees who report to her were emailed and told that the company had accepted her resignation, though she says she never offered to resign. VentureBeat reached out to Gebru and Google AI chief Jeff Dean for comment. This story will be updated if we hear back.

According to Casey Newton’s Platformer newsletter, which reportedly obtained a copy, Gebru sent the email in question to the Google Brain Women and Allies listserv. In it, Gebru expresses frustration with the lack of progress in hiring women at Google and the lack of accountability for failure to make that progress. She also said she was told not to publish a piece of research, and advised employees to stop filling out diversity paperwork because it didn’t matter. No mention is made of resignation.

“There is no way more documents or more conversations will achieve anything. We just had a Black research all hands with such an emotional show of exasperation. Do you know what happened since? Silencing in the most fundamental way possible,” the email reads.

When asked by VentureBeat for comment, a Google spokesperson provided a link to the Platformer article with a copy of an email sent Thursday by Google AI chief Jeff Dean to company research staff. In it, Dean said a research paper written by Gebru and other researchers was submitted for publication at a conference before completing a review process and addressing feedback. In response, Dean said he received an email from Gebru.

“Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google,” he said. “Given Timnit’s role as a respected researcher and a manager in our Ethical AI team, I feel badly that Timnit has gotten to a place where she feels this way about the work we’re doing. I also feel badly that hundreds of you received an email just this week from Timnit telling you to stop work on critical DEI programs. Please don’t. I understand the frustration about the pace of progress, but we have important work ahead and we need to keep at it.”

Numerous researchers — including Gebru — advocate for diverse hiring practices as a way to address algorithmic bias and what AI Now Institute director Kate Crawford refers to as “AI’s white guy problem.” A 2016 AI Now Institute report found that 10% of AI researchers at Google were women. According to Google’s 2020 diversity report, roughly 1 in 3 Google employees are women, whereas 3.7% are African American, 5.9% are Latinx, and 0.8% are Native American.

Gebru criticized diversity efforts at Google in May, following NBC News reporting that the company reduced diversity initiatives to avoid backlash from conservatives. After that news broke, a congressional subcommittee questioned the company’s lack of progress toward diversity goals and asked whether employees working in AI receive any bias training. A Google spokesperson told VentureBeat that any suggestion that Google reduced its diversity efforts was “categorically false.” The spokesperson did not reply when asked to respond to the questions posed by the congressional subcommittee. Roughly a month later, following the death of George Floyd and the historic protests that ensued, Google joined other Big Tech companies in making new commitments to diverse hiring practices.

Gebru left Google the same day that the National Labor Relations Board (NLRB) filed a complaint accusing the company of spying on and illegally firing two employees involved in labor organizing. In a tweet published days before leaving Google, Gebru questioned whether whistleblower protections exist for members of the AI ethics community.

Tawana Petty is a data justice and privacy advocate in Detroit who this week was named national organizing director of Data for Black Lives. This morning she gave a talk about the legacy of surveillance of Black communities with technology like facial recognition, and about the toxic impact of white supremacy on people’s lives. She dedicated her keynote talk at the 100 Brilliant Women in AI Ethics conference to Gebru.

“She was terminated for what we all aspire to do and be,” Petty said.

Mia Shah-Dand, who organized the conference and previously worked at Google, called Gebru’s dismissal a reflection of toxic culture in tech and a sign that women, particularly Black women, need support.

Gebru is known for some of the most influential work in algorithmic fairness research and for combating the potential of algorithmic bias to automate oppression. She is a cofounder of the Fairness, Accountability, and Transparency (FAccT) conference and of Black in AI, a group that hosts community gatherings and mentors young people of African descent. Black in AI holds its annual workshop Monday at NeurIPS, the largest AI research conference in the world.

Before coming to Google, Gebru joined Algorithmic Justice League founder Joy Buolamwini to create the Gender Shades project, which assessed the performance of facial recognition systems from major vendors like IBM and Microsoft. A peer-reviewed paper that grew out of Buolamwini’s 2017 MIT thesis concluded that facial recognition tends to work best on white men and worst on women with dark skin tones. That research, and subsequent work by Buolamwini and Deborah Raji in 2019, has been highly influential both among lawmakers deciding how to regulate the technology and in shaping public attitudes about the threat posed by algorithmic bias.

While working at Microsoft Research, she was lead author of “Datasheets for Datasets,” a paper that recommends shipping datasets with a standard set of contextual information so data scientists can evaluate that data before using it to train an AI model. “Datasheets for Datasets” later served as motivation for the creation of model cards.

As a Google employee, Gebru joined Margaret Mitchell, Raji, and others in writing a 2019 paper about model cards, a framework for reporting a model’s benchmark performance so machine learning practitioners can evaluate it before putting the model to use. Google Cloud began providing model cards for some of its AI last year, and this summer the company introduced the Model Card Toolkit for developers to make their own model cards.

This summer, Gebru and her former colleague Emily Denton led a tutorial about fairness and ethics in computer vision at the Computer Vision and Pattern Recognition (CVPR) conference that organizers called “required viewing for us all.” Shortly after that, she got into a public spat about AI bias with Facebook director of AI research Yann LeCun, a winner of the 2018 Turing Award for his work on deep learning, which turned out to be a teachable moment for him.

Members of the AI community and others have referred to Gebru as someone actively trying to save the world. Gebru was included in Good Night Stories for Rebel Girls: 100 Immigrant Women Who Changed the World, a book released in October.

Updated 9:05 a.m. with thoughts from Tawana Petty and Mia Shah-Dand, 11:29 a.m. to include a link to and summarization of an email by Timnit Gebru, and at 1:45 p.m. with a link to and quote from an email by Google AI chief Jeff Dean.


Author: Khari Johnson
Source: Venturebeat
