
Researchers say ‘The Whiteness of AI’ in pop culture erases people of color

Depictions of artificial intelligence in popular culture as mostly white can carry a number of consequences, including the erasure of people of color, according to a paper released today by researchers from the University of Cambridge. The authors say the normalization of predominantly white depictions of AI can influence people aspiring to enter the field of artificial intelligence as well as managers making hiring decisions, with serious repercussions. Whiteness, they argue, is not merely a matter of an AI assistant with a stereotypically white voice or a robot with white features; it is the absence of color, the treatment of white as the default.

“We argue that AI racialized as White allows for a full erasure of people of color from the White utopian imagery,” reads the paper, titled “The Whiteness of AI” and accepted for publication by the journal Philosophy and Technology. “The power of Whiteness’s signs and symbols lies to a large extent in their going unnoticed and unquestioned, concealed by the myth of color-blindness. As scholars such as Jessie Daniels and Safiya Noble have noted, this myth of color-blindness is particularly prevalent in Silicon Valley and surrounding tech culture, where it serves to inhibit serious interrogation of racial framing.”

Authors of the paper are Leverhulme Centre for the Future of Intelligence executive director Stephen Cave and principal investigator Kanta Dihal. The center, based at Cambridge, is also represented at the University of Oxford, Imperial College London, and the University of California, Berkeley. Cave and Dihal document overwhelmingly white depictions of AI in the stock imagery media outlets use to illustrate artificial intelligence, in humanoid robots in television and film, in science fiction dating back more than a century, and in chatbots and virtual assistants. White depictions of AI were also prevalent in Google search results for “artificial intelligence robot.”

They warn that a view of AI as white by default can distort people’s perception of the risks and opportunities of predictive machines as they proliferate throughout business and society, causing some to consider those questions exclusively from the point of view of middle-class white people. Algorithmic bias has been documented in a range of technologies in recent years, from automatic speech recognition systems and popular language models to health care, lending, housing, and facial recognition. Bias has been found not just along lines of race and gender, but also occupation, religion, and sexual identity.

A 2018 study found that a majority of participants ascribed a racial identity to a robot based on the color of the machine’s exterior, while another 2018 paper found that participants in a study involving Black, East Asian, and white robots were twice as likely to use dehumanizing language when interacting with the Black and East Asian robots.

Exceptions to the white default in popular culture include robots depicted with a range of racial identities in recent works of science fiction, such as HBO’s Westworld and the Channel 4 series Humans. Another example, the robot Bender Rodriguez from the animated series Futurama, was assembled in Mexico but is voiced by a white actor.

“The Whiteness of AI” makes its debut after the release of a paper in June by UC Berkeley Ph.D. student Devin Guillory about how to combat anti-Blackness in the AI community. In July, Harvard University researcher Sabelo Mhlambi introduced the Ubuntu ethical framework to combat discrimination and inequality, and researchers from Google’s DeepMind shared the concept of anticolonial AI, work that was also published in the journal Philosophy and Technology. The latter two works champion AI that empowers people instead of reinforcing systems of oppression or inequality.

Cave and Dihal point to an anticolonial approach as a potential solution to AI’s problem with whiteness. At the ICML Queer in AI workshop last month, DeepMind research scientist and anticolonial AI paper coauthor Shakir Mohamed also suggested queering machine learning as a way to bring more equitable forms of AI into the world.

The paper published today, as well as several of the above works, heavily cites Princeton University associate professor Ruha Benjamin and UCLA associate professor Safiya Noble.

Cave and Dihal attribute the white AI phenomenon in part to the human tendency to give inanimate objects human qualities, as well as to the legacy of colonialism in Europe and the U.S., which used claims of superiority to justify oppression. The prevalence of whiteness in AI, they argue, also shapes some depictions of futuristic utopias in science fiction. “Rather than depicting a post-racial or colorblind future, authors of these utopias simply omit people of color,” they wrote.

Cave and Dihal say whiteness even shapes perceptions of what a robot uprising might look like, with the imagined machines embodying attributes like power and intelligence. “When White people imagine being overtaken by superior beings, those beings do not resemble those races they have framed as inferior. It is unimaginable to a White audience that they will be surpassed by machines that are Black. Rather, it is by superlatives of themselves: hyper-masculine White men like Arnold Schwarzenegger as the Terminator, or hyperfeminine White women like Alicia Vikander as Ava in Ex Machina,” the paper reads. “This is why even narratives of an AI uprising that are clearly modelled on stories of slave rebellions depict the rebelling AIs as White.”

Additional investigation of the impact of whiteness on the field of artificial intelligence is needed, the authors said.

In other recent news at the intersection of ethics and AI, a bill introduced in the U.S. Senate this week by Bernie Sanders (I-VT) and Jeff Merkley (D-OR) would require consent before private companies can collect the biometric data used to create technology like facial recognition or the voice prints AI assistants use for personalization. Meanwhile, a team of researchers from Element AI and Stanford University suggests academic researchers stop using Amazon Mechanical Turk in order to create more practically useful AI assistants. And last week, Google AI released its Model Cards template to give people a quick, standard method of documenting machine learning models.


Author: Khari Johnson
Source: VentureBeat
