
Researchers are starting to refuse to review Google AI papers

Computer scientists in AI are beginning to refuse to review Google AI research until Google changes its stance on former AI ethics co-lead Timnit Gebru. Reviewers who select research for publication at academic conferences work on a voluntary basis to support the scientific community and are typically chosen based on their experience and expertise. According to an analysis of papers published last year and updated this summer, Google is the largest contributor in the world to AI research conferences. Gebru said she was fired last week after sending an email expressing frustration over a lack of progress on diversity at Google and over interference in the process of publishing a paper for a research conference.

Until @Google changes their position about @timnitGebru, I’m listing https://t.co/yPZz7JTbtq as a conflicted domain for paper reviews. If you aren’t going to follow academic norms, I’m not going to peer-review your org’s publications (which we all do for free).

RT if you agree

— Isaac Tamblyn (@itamblyn) December 6, 2020

Google AI chief Jeff Dean claimed that Gebru offered her resignation, while Gebru and the other Google AI ethics co-lead, Meg Mitchell, said she never offered to resign. Gebru is a vocal champion of Black people and women in machine learning, and her work on facial recognition, computer vision, and ethics has made her one of the best known AI researchers in the world today.

Numerous researchers from Google’s Ethical AI team and Google Brain, as well as others outraged at how Google treated Gebru, have been active on social media, using hashtags like #IStandWithTimnit and #BelieveBlackWomen.

I also refuse to review papers from @GoogleAI until it fixes travesty of justice meted upon @timnitGebru #BoycottGoogle @JeffDean #IStandWithTimnit https://t.co/CIldkYfL4f

— Prof. Anima Anandkumar (@AnimaAnandkumar) December 7, 2020

The research paper at the center of the disagreement is titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” The paper details the negative consequences of large language models for marginalized communities and closely examines risks associated with their use, like environmental racism and the potential to perpetuate a wide variety of biases based on gender, race, and other characteristics.

“A methodology that relies on datasets too large to document is therefore inherently risky. While documentation allows for potential accountability, similar to how we can hold authors accountable for their produced text, undocumented training data perpetuates harm without recourse. If the training data is considered too large to document, one cannot try to understand its characteristics in order to mitigate some of these documented issues or even unknown ones,” reads a draft copy of the paper shared with VentureBeat.

Coauthors of that paper include Google AI’s Mitchell and linguist Emily Bender, whose work critical of hype surrounding large language models won an award earlier this year.

Refusals by leaders in the machine learning community to review Google AI research come at the start of Neural Information Processing Systems (NeurIPS), the largest AI research conference in the world, where a paper about GPT-3, the largest language model to date, won a best paper award.

A day after Google pushed out Gebru, Google Walkout for Real Change, the group that organized a 2018 global walkout to demand reform and protest Google’s involvement in projects like the Pentagon’s Project Maven, issued a statement decrying “unprecedented research censorship” and demanding that Google undertake an assessment in full public view and strengthen commitments made in Google’s research philosophy. Thus far, that letter has been signed by 1,600 Google employees and more than 2,600 supporters in academia, civil society, and industry.

In an email to Google researchers last week that was leaked to Platformer, Dean criticized the Gebru paper for failing to consider relevant research on topics like bias mitigation efforts and said it was submitted on short notice.

A Google Walkout for Real Change letter published this morning disputes numerous statements made by Dean. Googlers said the paper was initially approved in September by a subject matter expert as well as Gebru’s manager, Samy Bengio, and was assessed by nearly 30 colleagues before going through the Google review process called PubApprove. But last month, the authors were told by managers to retract the paper or remove their names from it.

“No written feedback was provided from leadership, the authors were not given an opportunity to communicate about the verbalized concerns to anyone involved, and the authors were not provided with an opportunity to revise the paper in light of the feedback, despite the camera-ready deadline being over a month away,” the letter reads.

Critiques were later read to Gebru from a confidential document. An email Gebru sent to colleagues on December 1 expressed frustration with the review process and a lack of progress in Google diversity initiatives.

“Have you ever heard of someone getting ‘feedback’ on a paper through a privileged and confidential document to HR? Does that sound like a standard procedure to you or does it just happen to people like me who are constantly dehumanized?” Gebru wrote in an email to the Google Brain Women and Allies internal listserv.

Google’s treatment of Gebru and the pending paper has raised major questions about corporate influence on academic research. NeurIPS general chair Hugo Larochelle leads the Google Brain team in Montreal. In comments about the incident involving Gebru today, Larochelle commended her for cofounding Black in AI and for her past work, but did not address a question about what implications the incident has for corporate influence over academic research.

Many AI researchers maintain relationships with academic institutions, but multiple studies have found an ongoing drain of intellectual talent from positions in academia to jobs at Big Tech companies, major businesses, or startups. In October, the State of AI report from Air Street Capital found that Google hired more tenured or tenure-track professors from U.S. universities than any other company between 2004 and 2018.


Author: Khari Johnson
Source: VentureBeat

