
AI Weekly: NIST proposes ways to identify and address AI bias



The National Institute of Standards and Technology (NIST), the U.S. agency responsible for developing technical metrics to promote “innovation and industrial competitiveness,” this week published a document outlining recommendations for mitigating the risk of bias in AI and soliciting public feedback. The paper, on which NIST is accepting comments until August, proposes an approach for identifying and managing “pernicious” biases that can damage public trust in AI.

As NIST scientist Reva Schwartz, who coauthored the paper, points out, AI is transformative in its ability to make sense of data more quickly than humans. But as AI pervades the world, it’s becoming clear that its predictions can be affected by algorithmic and data biases. Making matters worse, some AI systems are built to model complex concepts that can’t be directly measured by data in the first place. For example, hiring algorithms rely on proxies such as “area of residence” or “education level” for the concepts they attempt to capture, and some of those proxies are dangerously imprecise.
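
To make the proxy problem concrete, here is a minimal sketch in Python, with hypothetical column names such as “zip_code” and “protected_group” that do not come from the NIST paper, of how a team might measure whether a candidate proxy feature effectively encodes a protected attribute before letting it into a hiring model:

    # Hypothetical pre-modeling check: how strongly does a candidate proxy
    # feature (e.g., area of residence) encode a protected attribute?
    import numpy as np
    import pandas as pd
    from scipy.stats import chi2_contingency

    def proxy_strength(df: pd.DataFrame, proxy: str, protected: str) -> float:
        """Cramer's V between a categorical proxy and a protected attribute.
        Values near 1.0 mean the proxy nearly duplicates the attribute."""
        table = pd.crosstab(df[proxy], df[protected])
        chi2, _, _, _ = chi2_contingency(table)
        n = table.to_numpy().sum()
        r, k = table.shape
        return float(np.sqrt(chi2 / (n * (min(r, k) - 1))))

    # Example usage with a hypothetical applicant dataset:
    # applicants = pd.read_csv("applicants.csv")
    # print(proxy_strength(applicants, "zip_code", "protected_group"))

A value close to 1.0 suggests the feature is little more than a stand-in for the protected attribute and deserves scrutiny before it is used.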

The effects are often catastrophic. Biases in AI have yielded wrongful arrests, racist recidivism scores, sexist recruitment, erroneous high school grades, offensive and exclusionary language generators, and underperforming speech recognition systems, to name a few injustices. Unsurprisingly, trust in AI systems is eroding. According to a survey conducted by KPMG across five countries (the U.S., the U.K., Germany, Canada, and Australia), more than a third of the general public say they’re unwilling to trust AI systems in general.

Proposed framework

The NIST document lays out a framework to spot and address AI biases at different points in a system’s lifecycle, from conception, iteration, and debugging to release. It starts at the pre-design or ideation stage before moving on to design and development and, finally, deployment.

At the pre-design phase, there’s a lot of pressure to “get things right,” the NIST coauthors note, because many downstream processes hinge on decisions made here. Central to these decisions is who makes them and which people or teams hold the most power or control over them; a narrow group of decision-makers can bring limited points of view to bear, shape later stages and decisions, and lead to biased outcomes.

For example, it’s an obvious risk to build predictive models for scenarios already known to be discriminatory, like hiring. Yet developers often don’t address the possibility that expectations for AI are inflated. Indeed, development frequently proceeds from an assumption of technological solutionism, the perception that technology leads only to positive solutions.

The design and development phases present other, related sets of challenges. Here, data scientists are often singularly focused on performance and optimization, which can be sources of bias in their own right. For instance, modelers will almost always select the most accurate machine learning models. But not taking context into consideration can lead to biased results for certain populations, as can the use of aggregated data about groups to make predictions about individual behavior. This latter type of bias, known as an “ecological fallacy,” unintentionally weights certain factors such that societal inequities are exacerbated.
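
As a simple illustration of why optimizing on a single aggregate metric can hide this kind of bias, here is a sketch using scikit-learn with a hypothetical sensitive-attribute column (illustrative only, not NIST’s methodology); disaggregating accuracy by group often reveals disparities that the overall score conceals:

    # Sketch: report accuracy per subgroup rather than one aggregate figure.
    # "group" is a hypothetical sensitive attribute (e.g., a demographic label).
    import numpy as np
    from sklearn.metrics import accuracy_score

    def accuracy_by_group(y_true, y_pred, group):
        """Return overall accuracy plus accuracy for each subgroup."""
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        report = {"overall": accuracy_score(y_true, y_pred)}
        for g in np.unique(group):
            mask = group == g
            report[str(g)] = accuracy_score(y_true[mask], y_pred[mask])
        return report

    # A model with 90% overall accuracy can still score far lower on a
    # minority subgroup that makes up a small share of the test data:
    # print(accuracy_by_group(y_test, model.predict(X_test), group_test))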

The ecological fallacy is widespread in health care modeling, where much of the data used to train algorithms for diagnosing and treating diseases has been shown to perpetuate inequalities. Recently, a team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less likely to work well for racial groups from underrepresented countries. In another study, Stanford University researchers claimed that most of the U.S. data for studies involving medical uses of AI are sourced from New York, California, and Massachusetts.

When AI systems reach the deployment phase, where people start interacting with them, poor decisions made in the earlier phases begin to have an impact, typically unbeknownst to the people affected. For example, by failing to design systems that compensate for activity biases, teams may end up with models built on data from only the most active users. The NIST coauthors attribute the problem to the fact that the groups who build the algorithms are often unaware, sometimes willfully, of all the potentially problematic ways their work will be repurposed. Beyond this, there are individual differences in how people interpret AI models’ predictions, which could cause decisions to be “offloaded” to coarse, imprecise automated tools.
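
One common mitigation for activity bias, offered here as a general technique rather than one prescribed in the NIST document, is to reweight training records so that highly active users do not dominate what the model learns. The sketch below assumes a hypothetical “user_id” column and uses simple inverse-frequency weights:

    # Sketch: downweight rows contributed by highly active users so that every
    # user carries roughly equal total weight during training.
    import pandas as pd

    def inverse_activity_weights(df: pd.DataFrame, user_col: str = "user_id") -> pd.Series:
        """Weight each row by 1 / (number of rows contributed by that user)."""
        counts = df[user_col].map(df[user_col].value_counts())
        return 1.0 / counts

    # Example with any scikit-learn estimator that accepts sample_weight:
    # weights = inverse_activity_weights(interactions)
    # model.fit(X, y, sample_weight=weights)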

This is particularly evident in the language domain, where model behavior can’t be reduced to universal standards because “desirable” behavior differs by application and social context. A study by researchers at the University of California, Berkeley, and the University of Washington illustrates the point, showing that language models deployed in production might struggle to understand aspects of minority languages and dialects. This could force people to switch to “white-aligned English,” for instance, to ensure that the models work better for them, which could discourage minority speakers from engaging with the models at all.

Tackling bias in AI

What’s to be done about these pitfalls? The NIST coauthors recommend pinpointing biases early in the AI development process by maintaining “diversity” along the social lines where bias is a concern, including race, gender, and age. While they acknowledge that identifying impacts may take time and require the involvement of end users, practitioners, subject matter experts, and professionals from law and the social sciences, the coauthors say these stakeholders can bring valuable experience to bear on the challenge of considering all possible outcomes.

The suggestions align with a paper published last June by a group of researchers at Microsoft, who advocated for closer examination of the relationships between language, power, and prejudice in machine learning work. That paper concluded that the machine learning research field generally lacks clear descriptions of bias and fails to explain how, why, and to whom that bias is harmful.

“Technology or datasets that seem non-problematic to one group may be deemed disastrous by others. The manner in which different user groups can game certain applications or tools may also not be so obvious to the teams charged with bringing an AI-based technology to market,” the NIST paper reads. “These kinds of impacts can sometimes be identified in early testing stages, but are usually very specific to the contextual end-use and will change over time.”

Beyond this, the coauthors advocate for “cultural effective challenge,” a practice that seeks to create an environment where developers can question steps in engineering to help root out biases. Requiring AI practitioners to defend their techniques, the coauthors posit, can incentivize new ways of thinking and help organizations and industries change their approaches.

Many organizations fall short of the mark. After a 2019 research paper demonstrated that commercially available facial analysis tools perform far worse for women with dark skin, Amazon Web Services executives attempted to discredit study coauthors Joy Buolamwini and Deb Raji in multiple blog posts. More recently, Google fired leading AI researcher Timnit Gebru from her position on an AI ethics team in what she claims was retaliation for sending colleagues an email critical of the company’s managerial practices.

But others, particularly in academia, have taken preliminary steps. For instance, a new program at Stanford, the Ethics and Society Review (ESR), requires AI researchers to evaluate their proposals for potential negative impacts on society before the proposals are green-lighted for funding. Starting in 2020, Stanford ran the ESR across 41 proposals seeking Stanford HAI grant funding. The panel most commonly identified issues of harm to minority groups, inclusion of diverse stakeholders in the research plan, dual use, and representation in data. One research team examining the use of ambient AI for in-home care of elderly adults wrote an ESR statement that considered privacy ethics, outlining recommendations for face blurring, body masking, and other methods to ensure participants were protected.

Finally, at the deployment phase, the coauthors make the case that monitoring and auditing are key ways to manage bias risks. There’s a limit to what this can accomplish — for example, it’s not clear whether “detoxification” methods can thoroughly debias language models of a certain size. However, techniques like counterfactual fairness, which uses causal methods to produce “fair” algorithms, can perhaps begin to bridge gaps between lab and real-world environments.
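
A deliberately simple version of such monitoring, offered as an illustration rather than a substitute for causal approaches like counterfactual fairness, is to track the gap in positive-prediction rates across groups for each batch of live traffic and flag it when it drifts past a chosen threshold:

    # Illustrative deployment-time monitor: track the demographic-parity gap,
    # i.e., the largest difference in positive-prediction rates across groups.
    # The 0.1 threshold and the group labels are hypothetical.
    import numpy as np

    def demographic_parity_gap(y_pred, group) -> float:
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        rates = [float(y_pred[group == g].mean()) for g in np.unique(group)]
        return max(rates) - min(rates)

    def check_batch(y_pred, group, threshold: float = 0.1) -> None:
        gap = demographic_parity_gap(y_pred, group)
        if gap > threshold:
            print(f"WARNING: parity gap {gap:.2f} exceeds threshold {threshold:.2f}")

    # check_batch(model.predict(X_live), group_live)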

Comments on NIST’s proposed approach can be submitted by August 5, 2021, by downloading and completing a template form and sending it to NIST’s dedicated email account. The coauthors say that they’ll use the responses to help shape the agenda of virtual events NIST will hold in coming months, a part of the agency’s broader effort to support the development of trustworthy and responsible AI.

“Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear. We want to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause,” Schwartz said in a statement. “An AI tool is often developed for one purpose, but then it gets used in other very different contexts. Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended. All these factors can allow bias to go undetected … [Because] we know that bias is prevalent throughout the AI lifecycle … [not] knowing where your model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital next step.”


Author: Kyle Wiggers
Source: VentureBeat
