AI Weekly: Facebook’s discriminatory ad targeting illustrates the dangers of biased algorithms

This summer has been littered with stories about algorithms gone awry. In one example, a recent study found evidence that Facebook’s ad platform may discriminate against certain demographic groups. The coauthors, a team from Carnegie Mellon University, say the biases exacerbate socioeconomic inequalities, an insight applicable to a broad swath of algorithmic decision-making.

Facebook, of course, is no stranger to controversy where biased, discriminatory, and prejudicial algorithmic decision-making is concerned. There’s evidence that objectionable content regularly slips through Facebook’s filters, and a recent NBC investigation revealed that on Instagram in the U.S. last year, Black users were about 50% more likely to have their accounts disabled by automated moderation systems than those whose activity indicated they were white. Civil rights groups claim that Facebook fails to enforce its hate speech policies, and a July civil rights audit of Facebook’s practices found the company failed to enforce its voter suppression policies against President Donald Trump.

In their audit of Facebook, the Carnegie Mellon researchers tapped the platform’s Ad Library API to get data about ad circulation among different users. Between October 2019 and May 2020, they collected 141,063 advertisements displayed in the U.S., which they ran through algorithms that classified the ads according to categories regulated by law or policy (for example, “housing,” “employment,” “credit,” and “political”). Post-classification, the researchers analyzed the ad distributions for the presence of bias, yielding a per-demographic statistical breakdown.
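For a sense of the mechanics, here is a minimal sketch, in Python, of the kind of pipeline the study describes: pulling ads from the Ad Library API (a real Graph API endpoint, though its publicly accessible delivery data is largely limited to political and issue ads) and tallying delivery shares by gender. The access token, search terms, and keyword classifier below are illustrative assumptions; the researchers used trained classifiers rather than keyword matching.

```python
# A minimal sketch (not the study's code) of auditing ad delivery via
# Facebook's Ad Library API. The ads_archive endpoint and the
# demographic_distribution field exist in the Graph API; the token,
# search terms, and keyword classifier are illustrative assumptions.
import requests
from collections import defaultdict

API_URL = "https://graph.facebook.com/v14.0/ads_archive"
PARAMS = {
    "access_token": "YOUR_TOKEN",  # placeholder
    "ad_reached_countries": "US",
    "search_terms": "apartment",   # illustrative query
    "fields": "ad_creative_bodies,demographic_distribution",
    "limit": 100,
}

# Toy keyword classifier standing in for the study's trained models.
CATEGORIES = {
    "housing": ("apartment", "rent", "mortgage"),
    "employment": ("hiring", "job", "career"),
    "credit": ("loan", "credit card", "insurance"),
}

def classify(text: str) -> str:
    text = text.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "other"

# Accumulate, per ad category, the share of delivery that went to
# each gender as reported by demographic_distribution.
delivery = defaultdict(lambda: defaultdict(float))
resp = requests.get(API_URL, params=PARAMS, timeout=30)
for ad in resp.json().get("data", []):
    body = " ".join(ad.get("ad_creative_bodies", []))
    category = classify(body)
    for cell in ad.get("demographic_distribution", []):
        delivery[category][cell["gender"]] += float(cell["percentage"])

print({cat: dict(shares) for cat, shares in delivery.items()})
```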

The research couldn’t be timelier given recent high-profile illustrations of AI’s proclivity to discriminate. As was spotlighted in the previous edition of AI Weekly, the UK’s Office of Qualifications and Examinations Regulation used — and then was forced to walk back — an algorithm to estimate school grades following the cancellation of A-levels, exams that have an outsize impact on which universities students attend. (Prime Minister Boris Johnson called it a “mutant algorithm.”) Drawing on data like the ranking of students within a school and a school’s historical performance, the model lowered 40% of results from teachers’ estimations and disproportionately benefited students at private schools.

Elsewhere, in early August, the British Home Office was challenged over its use of an algorithm designed to streamline visa applications. The Joint Council for the Welfare of Immigrants alleges that feeding past bias and discrimination into the system reinforced future bias and discrimination against applicants from certain countries. Meanwhile, in California, the city of Santa Cruz in June became the first in the U.S. to ban predictive policing systems over concerns the systems discriminate against people of color.

Facebook’s display ad algorithms are perhaps more innocuous, but they’re no less worthy of scrutiny considering the stereotypes and biases they might perpetuate. Moreover, if they allow the targeting of housing, employment, or credit opportunities by age and gender, they could be in violation of the U.S. Equal Credit Opportunity Act, the Civil Rights Act of 1964, and related equality statutes.

It wouldn’t be the first time. In March 2019, the U.S. Department of Housing and Urban Development filed suit against Facebook for allegedly “discriminating against people based upon who they are and where they live,” in violation of the Fair Housing Act. When questioned about the allegations during a Capitol Hill hearing last October, CEO Mark Zuckerberg said that “people shouldn’t be discriminated against on any of our services,” pointing to newly implemented restrictions on age, ZIP code, and gender ad targeting.

The results of the Carnegie Mellon study show evidence of discrimination on the part of Facebook, advertisers, or both against particular groups of users. As the coauthors point out, although Facebook limits the direct targeting options for housing, employment, or credit ads, it relies on advertisers to self-disclose if their ad falls into one of these categories, leaving the door open to exploitation.

Ads related to credit cards, loans, and insurance were disproportionately delivered to men (57.9% versus 42.1% for women), according to the researchers, despite the fact that more women than men use Facebook in the U.S. and that women on average have slightly stronger credit scores than men. Employment and housing ads were a different story: women saw approximately 64.8% of employment ads and 73.5% of housing ads, while men saw 35.2% and 26.5%, respectively.
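The figures above are delivery shares; whether a split like 57.9% to 42.1% is meaningfully skewed depends on the baseline population it is measured against. Here is a minimal sketch of one such comparison using a chi-square goodness-of-fit test; the impression count and the 54/46 female/male baseline are illustrative assumptions, not figures from the study.

```python
# A minimal sketch: test whether the reported credit-ad gender split
# (57.9% men vs. 42.1% women) deviates from an assumed baseline of
# US Facebook users (54% women / 46% men, an illustrative figure).
from scipy.stats import chisquare

n_impressions = 10_000  # assumed sample size, not from the study
observed = [0.579 * n_impressions, 0.421 * n_impressions]  # men, women
expected = [0.460 * n_impressions, 0.540 * n_impressions]  # men, women

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")
# A tiny p-value means this split is very unlikely if delivery simply
# mirrored the user base, i.e., the credit ads skewed toward men.
```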

Users who chose not to identify their gender or labeled themselves nonbinary/transgender were rarely, if ever, shown credit ads of any type, the researchers found. In fact, across every ad category, including employment and housing, they made up only around 1% of the users shown ads, perhaps because Facebook lumps nonbinary/transgender users into a nebulous “unknown” identity category.

Facebook ads also tended to discriminate along the age and education dimensions, the researchers say. More housing ads (35.9%) were shown to users aged 25 to 34 than to any other age group, with trends in the distribution indicating that the groups most likely to have graduated college and entered the labor market saw the ads more often.

The research allows for the possibility that Facebook is selective about the ads it includes in its API, and that the ads it omits correct for the distribution biases observed. Still, many previous studies have established that Facebook’s ad practices are at best problematic. (Facebook claims its written policies ban discrimination and that it uses automated controls, introduced as part of its 2019 settlement, to limit when and how advertisers target ads based on age, gender, and other attributes.) But the coauthors say their intention was to start a discussion about when disproportionate ad distribution is irrelevant and when it might be harmful.

“Algorithms predict the future behavior of individuals using imperfect data that they have from past behavior of other individuals who belong to the same sociocultural group,” the coauthors wrote. “Our findings indicated that digital platforms cannot simply, as they have done, tell advertisers not to use demographic targeting if their ads are for housing, employment or credit. Instead, advertising must [be] actively monitored. In addition, platform operators must implement mechanisms that actually prevent advertisers from violating norms and policies in the first place.”

Greater oversight might be the best remedy for systems susceptible to bias. Companies like Google, Amazon, IBM, and Microsoft; entrepreneurs like Sam Altman; and even the Vatican recognize this and have called for clearer rules around certain forms of AI, like facial recognition. Some governing bodies have begun to take steps in the right direction, like the EU, which earlier this year floated rules focused on transparency and oversight. But it’s clear from developments over the past months that much work remains to be done.

For years, some U.S. courts used recidivism-prediction algorithms shown to disproportionately label African American inmates as likely to reoffend. A Black man was arrested in Detroit for a crime he didn’t commit as the result of a facial recognition system. And for 70 years, American transportation planners used a flawed model that overestimated the amount of traffic roadways would actually see, resulting in potentially devastating disruptions to disenfranchised communities.

Facebook has had enough reported problems, internally and externally, around race to merit a harder, more skeptical look at its ad policies. But it’s far from the only guilty party. The list goes on, and the urgency to take active measures to fix these problems has never been greater.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer


Author: Kyle Wiggers
Source: VentureBeat
