
AI Weekly: Recognition of bias in AI continues to grow



This week, the Partnership on AI (PAI), a nonprofit committed to responsible AI use, released a paper addressing how technology — particularly AI — can accentuate various forms of bias. While most proposals to mitigate algorithmic discrimination require the collection of data on so-called sensitive attributes — which usually include things like race, gender, sexuality, and nationality — the coauthors of the PAI report argue that these efforts can actually cause harm to marginalized people and groups. Rather than trying to overcome historical patterns of discrimination and social inequity with more data and “clever algorithms,” they say, the value assumptions and trade-offs associated with the use of demographic data must be acknowledged.

“Harmful biases have been found in algorithmic decision-making systems in contexts such as health care, hiring, criminal justice, and education, prompting increasing social concern regarding the impact these systems are having on the wellbeing and livelihood of individuals and groups across society,” the coauthors of the report write. “Many current algorithmic fairness techniques [propose] access to data on a ‘sensitive attribute’ or ‘protected category’ (such as race, gender, or sexuality) in order to make performance comparisons and standardizations across groups. [But] these demographic-based algorithmic fairness techniques [remove] broader questions of governance and politics from the equation.”
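For readers unfamiliar with how those techniques operate in practice, here is a minimal, hypothetical sketch in Python (not drawn from the PAI paper) of a group-wise audit. It computes accuracy and selection rate per demographic group, plus the gap between groups, a check that is only possible when the dataset records a sensitive attribute for each person; that data requirement is precisely what the report scrutinizes.

from collections import defaultdict

def group_metrics(y_true, y_pred, groups):
    """Per-group accuracy and selection rate; requires a sensitive attribute per record."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "positive": 0})
    for truth, pred, group in zip(y_true, y_pred, groups):
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(pred == truth)
        s["positive"] += int(pred == 1)
    return {g: {"accuracy": s["correct"] / s["n"],
                "selection_rate": s["positive"] / s["n"]}
            for g, s in stats.items()}

# Hypothetical outputs from a screening model, with a made-up group label per person.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

per_group = group_metrics(y_true, y_pred, groups)
rates = [m["selection_rate"] for m in per_group.values()]
print(per_group)
print("selection-rate gap:", max(rates) - min(rates))  # crude demographic-parity check

In this toy example the selection-rate gap flags a disparity between the two invented groups; the report’s point is that collecting the group labels needed for such checks carries risks of its own.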

The PAI paper’s publication comes as organizations take a broader — and more critical — view of AI technologies, in light of wrongful arrests, racist recidivism scoring, sexist recruitment tools, and erroneous grades perpetuated by AI. Yesterday, AI ethicist Timnit Gebru, who was controversially ejected from Google over a study examining the impacts of large language models, launched the Distributed Artificial Intelligence Research (DAIR) institute, which aims to ask questions about the responsible use of AI and recruit researchers from parts of the world rarely represented in the tech industry. Last week, the United Nations’ Educational, Scientific, and Cultural Organization (UNESCO) approved a series of recommendations for AI ethics, including regular impact assessments and enforcement mechanisms to protect human rights. Meanwhile, New York University’s AI Now Institute, the Algorithmic Justice League, and Data for Black Lives are studying the impacts and applications of AI algorithms, as are Khipu, Black in AI, Data Science Africa, Masakhane, and Deep Learning Indaba.

Legislators, too, are taking a harder look at AI systems — and their potential to harm. The U.K.’s Centre for Data Ethics and Innovation (CDEI) recently recommended that public sector organizations using algorithms be mandated to publish information about how the algorithms are being applied, including the level of human oversight. The European Union has proposed regulations that would ban the use of biometric identification systems in public and prohibit AI in social credit scoring across the bloc’s 27 member states. Even China, which is engaged in several widespread, AI-powered surveillance initiatives, has tightened its oversight of the algorithms that companies use to drive their business.

Pitfalls in mitigating bias

PAI’s work cautions that efforts to mitigate bias in AI algorithms will inevitably encounter roadblocks, however, due to the nature of algorithmic decision-making. If a system optimizes for a poorly defined goal, it is likely to reproduce historical inequity — possibly under the guise of objectivity. Attempting to ignore societal differences across demographic groups can reinforce systems of oppression, because the way demographic data is coded in datasets has an enormous impact on how marginalized peoples are represented. And deciding how to classify demographic data is an ongoing challenge, as demographic categories continue to shift and change over time.

“Collecting sensitive data consensually requires clear, specific, and limited use as well as strong security and protection following collection. Current consent practices are not meeting this standard,” the PAI report coauthors wrote. “Demographic data collection efforts can reinforce oppressive norms and the delegitimization of disenfranchised groups … Attempts to be neutral or objective often have the effect of reinforcing the status quo.”

At a time when relatively few major research papers consider the negative impacts of AI, leading ethicists are calling on practitioners to pinpoint biases early in the development process. For example, a program at Stanford — the Ethics and Society Review (ESR) — requires AI researchers to evaluate their grant proposals for any negative impacts. NeurIPS, one of the largest machine learning conferences in the world, mandates that coauthors who submit papers state the “potential broader impact of their work” on society. And in a whitepaper published by the U.S. National Institute of Standards and Technology (NIST), the coauthors advocate for “cultural effective challenge,” a practice that seeks to create an environment where developers can question steps in engineering to help identify problems.

Requiring AI practitioners to defend their techniques can incentivize new ways of thinking and help change how organizations and industries approach AI, the NIST coauthors posit.

“An AI tool is often developed for one purpose, but then it gets used in other very different contexts. Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended,” NIST scientist Reva Schwartz, a coauthor of the NIST paper, wrote. “All these factors can allow bias to go undetected … [Because] we know that bias is prevalent throughout the AI lifecycle … [not] knowing where [a] model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital … step.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat


Author: Kyle Wiggers
Source: VentureBeat

