
AI ethics research conference suspends Google sponsorship



The ACM Conference for Fairness, Accountability, and Transparency (FAccT) has decided to suspend its sponsorship relationship with Google, conference sponsorship co-chair and Boise State University assistant professor Michael Ekstrand confirmed today. The organizers of the AI ethics research conference came to this decision a little over a week after Google fired Ethical AI lead Margaret Mitchell and three months after the firing of Ethical AI co-lead Timnit Gebru. Google has subsequently reorganized about 100 engineers across 10 teams, including placing Ethical AI under the leadership of Google VP Marian Croak.

“FAccT is guided by a Strategic Plan, and the conference by-laws charge the Sponsorship Chairs, in collaboration with the Executive Committee, with developing a sponsorship portfolio that aligns with that plan,” Ekstrand told VentureBeat in an email. “The Executive Committee made the decision that having Google as a sponsor for the 2021 conference would not be in the best interests of the community and impede the Strategic Plan. We will be revising the sponsorship policy for next year’s conference.”

The decision followed days of questions about whether FAccT would continue its relationship with Google following the company’s treatment of Ethical AI team leaders. The news first emerged Friday, when FAccT program committee member Suresh Venkatasubramanian tweeted that the organization would pause its relationship with Google.

Putting Google sponsorship on hold doesn’t mean the end of sponsorship from Big Tech companies, or even from Google itself. DeepMind, another FAccT sponsor, which was embroiled in an AI ethics controversy of its own in January, is also a Google company. Since its founding in 2018, FAccT has sought funding from Big Tech sponsors like Google and Microsoft, along with the Ford Foundation and the MacArthur Foundation. An analysis released last year, comparing Big Tech funding of AI ethics research to Big Tobacco’s history of funding health research, found that nearly 60% of researchers at four prominent universities have taken money from major tech companies.

After Gebru was fired, Googlers protested what they called an act of “unprecedented research censorship.” Last week, Reuters reported on a separate instance of alleged interference in AI research at the company, with a research paper coauthor citing “deeply insidious” edits by the Google legal team.

According to the FAccT website, Gebru, who cofounded the organization, continues to serve in a group advising on data and algorithm evaluation and as a program committee chair. Mitchell is a program co-chair of the conference and a FAccT program committee member. Gebru was fired from her role at Google in December 2020, following disputes over issues like the lack of diversity in tech companies and a paper she coauthored titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In addition to arguing that pretrained language models may disproportionately harm marginalized communities, the work questions whether progress can really be measured by performance on benchmark tests. The paper also raises concerns about large language models’ potential for misuse or automation bias.

“If a large language model, endowed with hundreds of billions of parameters and trained on a very large dataset, can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads.

Gebru is listed as one of two primary authors of the paper, which was accepted this week for publication at FAccT. Her lead coauthor is University of Washington linguist Emily Bender, whose writing about potential shortcomings of large language models and the need for deeper criticism received an award last summer from the Association for Computational Linguistics.

A copy of the paper VentureBeat obtained last year from a source familiar with the matter lists Mitchell as a coauthor, along with Google researchers Mark Diaz and Ben Hutchinson, a trio with backgrounds in language analysis and models. Mitchell may be best known today for her work in ethics, but she is most highly cited as a computer vision and NLP researcher and wrote a 2008 master’s thesis on text generation at the University of Washington. Hutchinson worked with coauthors from Google’s Ethical AI team on a paper that found NLP models exhibit bias against people with disabilities in sentiment analysis and toxicity prediction. Diaz has examined age-related bias in text.

Bender and Gebru are listed as primary coauthors in various versions of the paper. A version of the paper made available ahead of the conference by the University of Washington also lists “Shmargaret Shmitchell” as an author.

Fallout from the firing of Gebru, a prominent algorithmic oppression researcher and one of the few Black women working as AI researchers at Google, led to public opposition from thousands of Googlers and accusations of racism and retaliation. The incident also sparked questions from members of Congress with a documented interest in regulating algorithms, and it led researchers to question the ethics of receiving ethics research funding from Google. Experts in AI, ethics, and law told VentureBeat that a range of policy changes could follow from Gebru’s dismissal, including support for stronger whistleblower laws. Shortly after being fired, Gebru spoke about unionization as a means of protection for AI researchers, and Mitchell was a member of the Alphabet Workers Union formed in January 2021.

Last month, OpenAI and Stanford University researchers, working with other experts, warned that creators of large language models like Google and OpenAI have only a matter of months to set standards for their ethical use before replications begin to circulate.

Other papers published at FAccT this year include analysis of common obstacles to data-sharing practices in African nations, a review of an algorithm impact assessment made by Data & Society’s AI on the Ground team, and research that examines how government repression and censorship impact text data regularly used for training NLP models.

In other recent AI research conference activity, organizers of NeurIPS, the most popular annual machine learning conference, told VentureBeat the organization plans to revise its sponsorship policy following questions about NeurIPS sponsor Huawei reportedly building a Uighur Muslim detection system for Chinese authorities.



Author: Khari Johnson
Source: VentureBeat

