AI & RoboticsNews

AI Weekly: The Biden administration, algorithmic bias, and restoring the soul of America

This week, Joe Biden made his case to the American people about how to restore the soul of America and talked about “what we owe our forebears, one another, and generations to follow.”

Biden chose to run for president in reaction to the white supremacist rally in Charlottesville, Virginia, and on Wednesday he became the first president to mention white supremacy in a presidential inauguration speech, calling for the defeat of domestic terrorism and of a social hierarchy older than the United States itself. In his speech, Biden stressed virtues like tolerance, humility, empathy, and, above all, unity.

There was a prayerful moment of silence for the 400,000 Americans lost to COVID-19 and a message that defeating the forces that divide us is what’s required to make the U.S. a “leading force for good in the world.” It was, as the New York Times’ The Daily podcast observed, not unlike a church sermon.

But the Biden administration enters office with one of the longest to-do lists in U.S. history. From COVID-19 and economic recovery to inequality wrought by America’s original sin of racism, and in the wake of a vacuum of executive leadership and a coup attempt, the administration must address multiple crises at once. Artificial intelligence is part of that agenda in several major ways, set to play out in the coming weeks and over the next four years.

Applying civil rights to tech policy

Democratic control of both houses of Congress means new legislation could be on the way to address a range of tech policy issues. And early signs suggest the Biden administration plans to enforce existing law and regulation very differently than the Trump administration did, particularly on issues like algorithmic bias, according to FTC Commissioner Rebecca Slaughter.

“I think there’s a lot of unity on the Democratic side and a lot of consensus about the direction that we need to go,” Slaughter said while speaking as part of a Protocol panel conversation about tech and the first 100 days of the Biden administration. On Thursday, Biden appointed Slaughter to be the acting chair of the Federal Trade Commission (FTC). “For me, algorithmic bias is an economic justice issue. We see disparate outcomes coming out of algorithmic decision-making that disproportionately affect and harm Black and brown communities and affect their ability to participate equally in society. That’s something we need to address.”

Speaking as a commissioner, she said one of her priorities is centering enforcement on anti-racist practices and confronting unfair market practices that disproportionately impact people of color. This will include treating antitrust enforcement and unfair market practices as racial justice issues.

Brookings Institution senior fellow and Center for Technology Innovation director Nicol Turner Lee also spoke during the panel conversation. Without attention to issues like algorithmic bias and data privacy, she warned, “we actually run the risk of going backwards.” The question, Lee said, is what kind of policy and enforcement support the Biden administration will put toward that aim.

“There’s no reason that you couldn’t start in this administration applying every existing civil rights statute to tech. Period. When you design a credit analysis tool that relies on algorithms, make sure it’s compliant with the Fair Credit Reporting Act. Going to design a housing tool? Make sure it complies with the Fair Housing Act. To me that’s a simple start that actually had some traction in Congress,” Lee said.

Earlier this month, Biden appointed civil rights attorneys Vanita Gupta and Kristen Clarke as associate attorney general and assistant attorney general for civil rights respectively. Both have a history of challenging algorithmic bias at companies like Facebook, Google, and Twitter. In testimony and letters to Congress in recent years, Gupta has stressed that machine learning “must protect civil rights, prevent discrimination, and advance equal opportunity.”

Finally, last week Biden said he planned to elevate his science advisor, Office of Science and Technology Policy (OSTP) director Dr. Eric Lander, to a cabinet-level position. Dr. Alondra Nelson will serve as OSTP deputy director for science and society. AI, she said in a ceremony with President Biden and Vice President Harris, is technology that can “reveal and reflect even more about the complex and sometimes dangerous social architecture that lies beneath the scientific progress that we pursue.”

“When we provide inputs to the algorithm; when we program the device; when we design, test, and research; we are making human choices, choices that bring our social world to bear in a new and powerful way,” she said.

In the first hours of his administration, President Biden signed an executive order on advancing racial equity that instructs the OSTP to participate in a newly formed working group tasked with disaggregating government data. The initiative is based in part on concerns that an inability to analyze such data impedes efforts to advance equity.

Confronting white supremacy in AI

The Biden administration comes into office amid signs of slow progress toward addressing the risks associated with deploying AI, and amid recent events that seem to signal the collapse of AI ethics at Google. According to a 2020 McKinsey survey, business leaders are addressing 10 major risks associated with artificial intelligence at a glacial rate, comparable to the lack of progress on diverse hiring in tech.

Understanding the role of white supremacy in the insurrection seems essential to the future of democracy in the United States. But links to white supremacy have also been found in the AI industry, where a white default persists after a year of efforts to interrogate artificial whiteness and anti-Blackness in artificial intelligence.

Examples of AI’s place in Biden policy goals include addressing the ongoing spread of disinformation and hate speech for profit on Facebook and YouTube, as well as the debate now underway over facial recognition.

Another example comes from Clearview AI, a company built on billions of images scraped from the internet without permission. Clearview AI CEO Hoan Ton-That says the company’s tech is currently used by thousands of police departments and, according to Gothamist reporting this week, by more than 100 prosecutors’ offices in the United States.

In comments this week, Ton-That cited his own racial identity as a reason why he’s committed to “non-biased technology,” but Clearview AI has a history of ties to white supremacist groups and of seeking out government contracts.

Clearview AI usage reportedly went up following the insurrection two weeks ago. Policy analysts with a history of sponsoring legislation to regulate AI on human rights grounds warned VentureBeat earlier this month that using facial recognition to find white supremacists involved in the insurrection can lead to the proliferation of a technology that ultimately harms Black people.

Healing wounds and making history

In his inauguration speech Wednesday, Biden said “the U.S. will lead not only by the example of our power but by the power of our example.” Many major AI issues will have to be addressed during Biden’s time in office.

The Biden administration may oversee more use of complex AI models by the U.S. government. According to a study Stanford and New York University released roughly one year ago, only 15% of the AI used by federal agencies is considered highly sophisticated. The administration will also take part in upcoming talks about lethal autonomous weapons, a subject European politicians addressed this week. And the final recommendations of the National Security Commission on Artificial Intelligence, a group appointed by Congress whose commissioners include Big Tech executives, are due out later this year.

There’s also the need to, as one researcher put it, introduce legal intervention to provide redress and more definitively answer the question of who is held responsible when AI hurts people.

The ceremony in Washington this week, of course, was not just notable for keeping the tradition of a peaceful transfer of power intact. Kamala Harris was sworn in as the first woman vice president in U.S. history. Hours later, she swore in Jon Ossoff, the youngest senator in generations; Raphael Warnock, the first Black senator to represent Georgia; and Alex Padilla, the first Latino to represent California in the Senate.

It was a statement of commitment to a multiracial democracy where everyone is treated equally and a reestablishment of the rule of law two weeks after a white supremacist coup attempt. Part of keeping that promise — and, as Biden said, leading by example — will be addressing ways in which algorithmic decision making systems and machine learning can harm people.

That spirit is also reflected in how Biden decorated the Oval Office, with busts of icons like Cesar Chavez, Eleanor Roosevelt, Martin Luther King Jr., and Rosa Parks. He also brought in a moon rock collected by NASA and made presidential science advisor a cabinet-level position.

How the Biden administration chooses to treat the ways AI is used in society won’t just affect how businesses, governments, and law enforcement adopt the technology in the United States. It will also determine the moral clarity with which the U.S. can declare, for example, that China’s treatment of Muslim minority groups, which both the outgoing and incoming presidential administrations call a genocide, is wrong and must change. After all, the U.S. would have little ground to stand on in arguing against China using surveillance technology to accelerate the imprisonment of a minority group if it chooses to do the same.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

VentureBeat

Author: Khari Johnson
Source: VentureBeat
