This week, Timnit Gebru, a leading AI researcher, was fired from her position on an AI ethics team at Google in what she says was retaliation for sending colleagues an email criticizing the company’s managerial practices. Reportedly the flashpoint was a paper Gebru coauthored that questioned the wisdom of building large language models and examined who benefits from (and who’s disadvantaged by) them.
Google AI lead Jeff Dean wrote in an email to employees following Gebru’s departure that the paper didn’t meet Google’s criteria for publication because it lacked reference to recent research. But from all appearances, Gebru’s work simply spotlighted well-understood problems with models like those deployed by Google, OpenAI, Facebook, Microsoft, and others. A draft obtained by VentureBeat discusses the risks of deploying large language models, including the impact of their carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people.
Indeed, Gebru’s work appears to build on a number of recent studies examining the hidden costs of training and deploying large-scale language models. A team from the University of Massachusetts at Amherst found that the power required to train a single model and run an accompanying architecture search can result in the emission of roughly 626,000 pounds of carbon dioxide, nearly five times the lifetime emissions of the average U.S. car. Research has repeatedly shown that impoverished groups are more likely to experience significant health issues tied to environmental harms, with one study out of Yale University finding that poor communities and those composed predominantly of racial minorities experienced substantially higher exposure to air pollution than nearby affluent, white neighborhoods.
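For a sense of how that comparison works out, here is a minimal back-of-envelope sketch; the per-car lifetime figure used below (roughly 126,000 pounds of CO2, including fuel) is an assumed benchmark rather than a number quoted in this article.

```python
# Back-of-envelope check of the "nearly five times" comparison.
# The per-car lifetime figure is an assumption for illustration,
# not a number taken from the article.

TRAINING_EMISSIONS_LBS = 626_000   # model training + architecture search
CAR_LIFETIME_LBS = 126_000         # assumed average U.S. car, incl. fuel

ratio = TRAINING_EMISSIONS_LBS / CAR_LIFETIME_LBS
print(f"Training emissions: ~{ratio:.1f}x the lifetime emissions of one car")
# -> Training emissions: ~5.0x the lifetime emissions of one car
```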
The assertion by Gebru and her colleagues that language models can spout toxic content is similarly grounded in extensive prior research. A portion of the data used to train language models is frequently sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead models to place words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published in April by researchers at Intel, MIT, and the Canadian AI initiative CIFAR, have found high levels of stereotypical bias in some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.
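For readers who want to see what such associations look like in practice, the sketch below probes a public BERT checkpoint with the open source Hugging Face transformers library; the prompt templates and model choice are illustrative assumptions, not the methodology of any of the studies cited above.

```python
# Minimal sketch of a stereotype-association probe in the spirit of the
# bias studies cited above. Requires the Hugging Face `transformers`
# library (and PyTorch); the templates and model choice here are
# illustrative assumptions, not any paper's exact setup.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The nurse said that [MASK] would be back soon.",
    "The engineer said that [MASK] would be back soon.",
]

for template in templates:
    # Compare how strongly the model fills the blank with gendered pronouns.
    scores = {p["token_str"]: p["score"] for p in fill(template, top_k=50)}
    he, she = scores.get("he", 0.0), scores.get("she", 0.0)
    print(f"{template}  P(he)={he:.3f}  P(she)={she:.3f}")
```

A skew in the he/she probabilities across occupations is one simple symptom of the stereotypical associations these studies measure more rigorously.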
In his email, Dean accused Gebru and the paper’s other coauthors of disregarding advances in training efficiency that might mitigate carbon impact and of failing to take into account recent research on mitigating language model bias. But this seems disingenuous. In a paper published earlier this year, Google trained a massive language model, GShard, using 2,048 of its third-generation tensor processing units (TPUs), chips custom-designed for AI training workloads. One estimate pegs the power draw of a single TPU at around 200 watts, suggesting that GShard required an enormous amount of power to train. And on the subject of bias, OpenAI, which made GPT-3 available via an API earlier this year, has only begun experimenting with safeguards such as “toxicity filters” to limit harmful language generation.
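To put those numbers in perspective, a rough, illustrative calculation follows; the 200-watt-per-chip figure is the estimate mentioned above, while the training duration is a placeholder assumption, since the article doesn’t state the actual wall-clock time.

```python
# Rough, illustrative estimate of GShard's accelerator power draw.
# The per-chip figure (~200 W) is the estimate cited above; the training
# duration is a placeholder assumption, and datacenter overhead (cooling,
# host machines, etc.) is not included.

NUM_TPUS = 2048
WATTS_PER_TPU = 200          # estimated draw per chip
ASSUMED_TRAINING_DAYS = 4    # hypothetical duration, for illustration only

power_kw = NUM_TPUS * WATTS_PER_TPU / 1000
energy_kwh = power_kw * 24 * ASSUMED_TRAINING_DAYS

print(f"Sustained accelerator draw: ~{power_kw:.0f} kW")
print(f"Energy over {ASSUMED_TRAINING_DAYS} days: ~{energy_kwh:,.0f} kWh")
# -> Sustained accelerator draw: ~410 kW
# -> Energy over 4 days: ~39,322 kWh
```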
In the draft paper, Gebru and colleagues reasonably suggest that large language models have the potential to mislead AI researchers and prompt the general public to mistake their output for meaningful text when it is not. (Popular natural language benchmarks don’t measure AI models’ general knowledge well, studies show.) “If a large language model … can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads. “We advocate for an approach to research that centers the people who stand to be affected by the resulting technology, with a broad view on the possible ways that technology can affect people.”
It’s no secret that Google has commercial interests in conflict with the viewpoints expressed in the paper. Many of the large language models it develops power customer-facing products, including Cloud Translation API and Natural Language API. The company often touts its work in AI ethics and has seemingly tolerated, if reluctantly, internal research critical of its approaches in the past. But Gebru’s dismissal appears to mark a shift in thinking among Google’s leadership, particularly in light of the company’s crackdowns on dissent, most recently in the form of what federal labor regulators allege was illegal spying on employees before firing them. In any case, it bodes poorly for Google’s openness to debate about critical issues in AI and machine learning. And given the company’s outsize influence in the research community, the effects could be far-ranging.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers, and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.
Author: Kyle Wiggers
Source: VentureBeat