AI & Robotics News

Trump faces executive order lawsuit as critical race theory fuels AI research

Today, civil rights groups — including the NAACP Legal Defense Fund — filed a lawsuit against the Trump administration on the grounds that a Trump executive order violates free speech rights and will “undermine efforts to foster diversity and inclusion in the workplace.” The lawsuit follows opposition to the executive order from a range of groups, including the U.S. Chamber of Commerce, as well as a federal agency’s recent intervention in a Microsoft diversity initiative launched amid calls for racial justice.

The executive order was part of the White House’s concerted attack on diversity training and critical race theory. Trump has called critical race theory “toxic propaganda” that will “destroy our country.” He also claimed that diversity training is designed to divide Americans and said students should instead receive a “patriotic education.” In early September, a Department of Labor memo directed federal agencies to cancel contracts with vendors that cover critical race theory or “white privilege” in their work, calling the intellectual movement “un-American” and “anti-American propaganda.”

If you watched the first presidential debate between U.S. President Donald Trump and challenger Joe Biden, you never heard the words “artificial intelligence” or “tech,” but the related subject of critical race theory did come up. During the debate, Trump reiterated his previous position, calling racial sensitivity training “racist” and claiming it teaches people to hate the United States. When given an opportunity to denounce racist views, he instead told the white supremacist group Proud Boys to “stand by.” Biden responded by calling Trump a racist and asserting that racial sensitivity training can make a big difference in fighting systemic racism.

The executive order, which Trump signed a week before the debate, threatened to cut federal funding to agencies and grant recipients that fail to comply. This led to confusion within federal agencies, and the University of Iowa temporarily paused diversity events. University of Michigan president Mark S. Schlissel objected to the order, arguing that diversity training is intended to bring people together. He called the executive order an attempt to prevent people from “confronting blind spots” and said his university remains committed to dismantling structural oppression.

Though Trump has recently sought to banish critical race theory from federal agencies, some AI researchers are advocating for the lens as a way to evaluate the fairness of AI models.

History, politics, and critical race theory

Following these arguments requires a clear understanding of critical race theory, an intellectual movement and framework that invites scholars to consider the impacts of race, racism, and power. The term came into being in the late 1970s and 1980s as writers like NYU School of Law professor Derrick Bell sought to understand why the civil rights movement had stalled and worked to address what activists and scholars saw as the rollback of progress. According to the book Critical Race Theory: An Introduction, written by Richard Delgado and Jean Stefancic, critical race theory draws lessons from the civil rights, Black Power, and Chicano movements, as well as the work of individuals like Frederick Douglass, Sojourner Truth, Cesar Chavez, and Martin Luther King, Jr.

Critical race theory provided a sociological framework that first touched law and education but grew to encompass other fields, like public health and ethnic studies. It includes the premise that racism has become normalized in the U.S. and is therefore more difficult to address. Critical race theory asserts that racial categories are a social construction and considers the problem of white privilege and the importance of intersectionality. Made popular by Kimberlé Crenshaw, intersectionality proposes that a person’s identity includes overlapping concepts of race, class, gender, religion, and sexual identity.

California has a history of leading the way in ethnic studies education. The first College of Ethnic Studies was created in California in 1969, following the longest student-led protests in U.S. history. Students of color have called ethnic studies vitally important to their education, and a 2016 Stanford University study found that ethnic studies classes improved attendance and grades for students at risk of dropping out of high school. These findings are particularly important since the average child born in America today is not white.

Governor Gavin Newsom signed a bill last summer that made ethnic studies a California State University undergraduate degree requirement, making California the first state in the nation to do so. But in early October, citing an “insufficiently balanced” model curriculum, Newsom vetoed a bill that would have required high school students to take at least one semester of ethnic studies in order to obtain a diploma. Bill author Assemblymember Jose Medina, also a Democrat, called the veto “a failure to push back against the racial rhetoric and bullying of Donald Trump.”

Google AI dives into sociology

President Trump may be on a campaign to suppress critical race theory, but the idea is taking hold at Google. One of the largest employers of AI research talent, Google is incorporating critical race theory into its tech development and fairness analysis processes, Google AI ethics co-lead Meg Mitchell told VentureBeat in a meeting with journalists last week.

“This was a bit of an intervention with our first social scientists. So our team has been able to get three social scientists, which are the first research scientists, ethnographers, people who have a lot of knowledge about gender and identity at Google looking at critical race theory to literally make this part of our development process,” she said.

This effort includes a research paper titled “Towards a Critical Race Methodology in Algorithmic Fairness,” which was published in December 2019 by four members of Google Research. Mitchell said the paper is some of Google’s first work on critical race theory.

The paper, which was presented earlier this year at the Fairness, Accountability and Transparency (FAccT) conference, urges the AI ethics research community to look to critical race theory when evaluating fairness research. The problem, coauthors of the paper argue, is that many modern algorithmic fairness frameworks lack historical and social context and use racial categorization in nondescript or decontextualized ways.

“While we acknowledge the importance of measuring race for the purposes of understanding patterns of differential performance or differentially adverse impact of algorithmic systems, in this work, we emphasize that data collection and annotation efforts must be grounded in the social and historical contexts of racial classification and racial category formation,” the paper reads. “To oversimplify is to do violence, or even more, to reinscribe violence on communities that already experience structural violence.”

Measuring fairness vs. seeking justice

While tracking race can be important to verify the absence of discrimination, simply deciding that algorithms should ignore race does not solve the issue, lead author and Google senior research scientist Alex Hanna told VentureBeat in an interview. Leaders in the field like Dr. Safiya Noble warn that attempts to remove race from the equation can actually perpetuate existing social hierarchies built on inequity.

“What I find so valuable about critical race theory is that it puts at the center of algorithmic fairness race in a way that algorithmic fairness often obviates it or ignores it,” Hanna said. “One of the things that I worry about most in this field is that there’s a conversation that gets had about fairness as a kind of metric that can be solved, rather than an invitation to an inquiry about justice and human flourishing and well-being and the destruction of white supremacist structures. And so that’s sort of the biggest thing I think we lose when we don’t adopt a critical race theory lens.”

Generally speaking, Hanna believes research at the intersection of race and technology is among the most important work to come out of the algorithmic fairness community. A notable example is the Gender Shades project, created by Google AI co-lead Timnit Gebru, Algorithmic Justice League founder Joy Buolamwini, and Deb Raji. Their landmark research found that facial recognition technology performs poorly on women with dark skin. Gender Shades has shaped perceptions of algorithmic fairness in Congress, as well as in cities that have implemented facial recognition bans, like San Francisco and Portland, Oregon.

Race, like fairness, is itself a contested concept, and it follows that a multidimensional approach is useful.

“We encourage algorithmic fairness researchers to explore how different racial dimensions and their attendant measurements might reveal different patterns of unfairness in sociotechnical systems,” the Google paper reads. “It is critical to expand the scope of analysis beyond the algorithmic frame and interrogate how patterns of racial oppression might be embedded in the data and model and might interact with the resulting system.”

Since the algorithmic fairness community first emerged, researcher Arvind Narayanan has identified 21 different ways to measure fairness. There’s statistical bias, group fairness, individual fairness, and a range of binary classification fairness metrics, but choosing a particular metric or prioritizing one metric over another is far from simple.
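To make the tension between metrics concrete, here is a minimal sketch, using hypothetical predictions and labels (not data from any study discussed here), of two common group-fairness measures: demographic parity, which compares positive-prediction rates across groups, and equal opportunity, which compares true-positive rates.

```python
def positive_rate(preds):
    """Share of instances receiving a positive prediction."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Share of truly positive instances that were predicted positive."""
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

# Hypothetical classifier outputs and ground truth for two groups.
preds_a, labels_a = [1, 1, 0, 1], [1, 0, 0, 1]
preds_b, labels_b = [0, 1, 0, 0], [1, 1, 0, 0]

# Demographic parity gap: difference in raw positive-prediction rates.
dp_gap = abs(positive_rate(preds_a) - positive_rate(preds_b))

# Equal opportunity gap: difference in true-positive rates.
eo_gap = abs(true_positive_rate(preds_a, labels_a)
             - true_positive_rate(preds_b, labels_b))
```

The two gaps answer different questions about the same model, and known impossibility results show they generally cannot all be driven to zero at once; choosing which one to optimize is a normative decision, not a purely technical one.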

Hanna agrees that metrics come with inherent tradeoffs and believes these measures appeal to computer scientists’ desire to quantify problems. Instead, she believes people should ask what justice and remediation look like and consider how to address the harms of an aggrieved population.

Reimagining what’s possible with critical race theory

Perhaps the most influential work in AI that directly incorporates critical race theory is Race After Technology, a book by African American Studies associate professor Dr. Ruha Benjamin. The book considers the concept of a “New Jim Code” and warns that algorithms are automating bias and engineers must guard against the use of design practices that amplify racial hierarchies. Her call to reimagine technology stems from a critical race theory tenet that encourages multi-narrative storytelling.

While delivering a speech at the International Conference on Learning Representations (ICLR) earlier this year, Benjamin urged deep learning practitioners to consider social and historical context or risk becoming like IBM workers who played a role in enabling the Holocaust.

The Google Research paper argues that algorithmic fairness frameworks must begin from the perspectives of oppressed groups. In doing so, it joins a long line of recent works in algorithmic bias research, as more businesses and governments explore ways to put AI principles into practice.

In June, Microsoft Research conducted an analysis of the existing body of NLP bias research and implored the algorithmic fairness community to consider social hierarchies like racism when evaluating language models. This summer, drawing on Benjamin’s work, University of Oxford researchers introduced a paper titled “The Whiteness of AI,” in which they applied critical race theory to depictions of AI in science fiction and pop culture and concluded that these works tend to erase people of color.

In July, a trio of researchers from Google’s DeepMind presented a paper exploring ways to create decolonial AI and prevent the spread of algorithmic oppression. And earlier this month, a paper by Abeba Birhane and Olivia Guest found that decolonization of computer science requires seeing things from the perspective of Black women and other women of color. Doing so, the coauthors argue, can lead to fewer cases of machine learning research that is rooted in pseudoscience like eugenics or physiognomy, which infers characteristics from a person’s physical appearance. 

Final thoughts

Critical race theory belongs to a U.S. tradition of critical examination that has informed abolitionist and feminist movements. In Critical Race Theory: An Introduction, Delgado and Stefancic posit that critical race theory sprang from critical legal studies and radical feminism. The book draws a line from historical figures like W.E.B. DuBois and Ida B. Wells to more recent social movements. It also cites spinoff works from Latinx and queer critical scholars.

It seems a similar line can be drawn to protests of historic size in recent years, including Black Lives Matter protests and the #MeToo movement. Beyond AI and tech policy, it’s virtually impossible to consider the major issues of our time — COVID-19 deaths, continuing economic fallout, inequality in education, and the fate of essential workers during the pandemic — without viewing them through the prism of race.

When it comes to AI regulation and the impact of policy on people’s lives, ignoring historical and social context has been used to justify savage behavior and deny justice to those calling for an end to systemic inequality.

Critical race theory cofounder and New York University law professor Derrick Bell called racism a form of control for both Black and white people in the United States and said that “telling the truth as you see it is empowering.” He believed that calling the U.S. a white supremacist country is acknowledging a fact critical to healing, much as an alcoholic must admit they have a problem before beginning down the road to recovery.

Critical race theory acknowledges the existence of racism and power dynamics that are older than the United States and continue to shape American history. It’s also in line with what Bryan Stevenson, a man Desmond Tutu calls “America’s Mandela,” refers to when he encourages an honest accounting of our past and critical self-examination as a way to reconcile past transgressions.

From antitrust law reform to facial recognition regulation and other thorny tech policy topics facing the next leaders in Washington, D.C., the ways that AI can harm people will be front and center. And whether we call that framework restorative justice, critical race theory, or any other name, attempts to address the negative impact algorithms can have on human lives will be incomplete without critical review grounded in social and historical context. Rather than being a tool of division, critical examination is essential to building what the preamble to the U.S. Constitution calls “a more perfect union.”



Author: Khari Johnson
Source: Venturebeat
