
Even experts are too quick to rely on AI explanations, study finds



As AI systems increasingly inform decision-making in health care, finance, law, and criminal justice, they need to provide justifications for their behavior that humans can understand. The field of “explainable AI” has gained momentum as regulators turn a critical eye toward black-box AI systems — and their creators. But how a person’s background can shape perceptions of AI explanations is a question that remains underexplored.

A new study coauthored by researchers at Cornell University, IBM, and the Georgia Institute of Technology aims to shed light on how a person’s background shapes the way they interpret AI explanations. Focusing on two groups — one with an AI background and one without — the researchers found that both tended to over-trust AI systems and to misinterpret explanations of how those systems arrived at their decisions.

“These insights have potential negative implications like susceptibility to harmful manipulation of user trust,” the researchers wrote. “By bringing conscious awareness to how and why AI backgrounds shape perceptions of potential creators and consumers in explainable AI, our work takes a formative step in advancing a pluralistic human-centered explainable AI discourse.”

Explainable AI

Although the AI community has yet to reach a consensus on the precise meanings of explainability and interpretability, explainable AI research shares the common goal of making AI systems’ predictions and behaviors easier for people to understand. For example, explanation generation methods, which leverage either a simplified version of the model to be explained or meta-knowledge about that model, aim to elucidate a model’s decisions by providing plain-English rationales that non-AI experts can understand.
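To make that concrete, here is a minimal sketch of one such approach, written in Python with scikit-learn: a shallow "surrogate" decision tree is fit to a black-box model's predictions, and its decision path for a single input is rendered as a plain-English rationale. The dataset, models, and wording here are illustrative assumptions, not the specific method used in the study.

```python
# A minimal sketch (not the study's method) of one explanation-generation idea:
# approximate a black-box model with a shallow surrogate tree, then turn the
# surrogate's decision path for a single input into a plain-English rationale.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# "Black-box" model whose decisions we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Simple surrogate trained to mimic the black-box model's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

def explain(sample):
    """Render the surrogate's decision path for one sample as plain English."""
    tree = surrogate.tree_
    node = 0
    reasons = []
    while tree.children_left[node] != -1:  # stop when we reach a leaf
        feat, thresh = tree.feature[node], tree.threshold[node]
        if sample[feat] <= thresh:
            reasons.append(f"{feature_names[feat]} is at most {thresh:.2f}")
            node = tree.children_left[node]
        else:
            reasons.append(f"{feature_names[feat]} is above {thresh:.2f}")
            node = tree.children_right[node]
    label = black_box.predict([sample])[0]
    return f"Predicted class {label} because " + " and ".join(reasons) + "."

print(explain(X[0]))
```

The surrogate's rules are only an approximation of the black-box model, which is exactly why, as the study explores, the way such rationales are worded and received matters as much as their technical fidelity.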

Building on prior research, the coauthors hypothesized that factors like cognitive load and general trust in AI could affect how users perceive AI explanations. For example, a 2020 study published in the Proceedings of the ACM on Human-Computer Interaction found that explanations can create a false sense of security and over-trust in AI. And in another paper, researchers found that data scientists and business analysts perceived an AI system’s accuracy score differently, with the analysts inaccurately treating the score as a measure of the system’s overall performance.

To test their theory, the Cornell, IBM, and Georgia Institute of Technology coauthors designed an experiment in which participants watched virtual robots carry out identical sequences of actions that differed only in the way the robots “thought out loud” about their actions. In the video game-like scenario, the robots had to navigate through a field of rolling boulders and a river of flowing lava, retrieving essential food supplies for trapped space explorers.

[Image: The video game-like environment the researchers created for their experiment.]

One of the robots explained the “why” behind its actions in plain English, providing a rationale. Another robot stated its actions without justification (e.g., “I will move right”), while a third only gave numerical values describing its current state.
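As a rough illustration (the exact phrasing and state encodings used in the study are not given in the article), the three conditions amount to three ways of verbalizing the same action:

```python
# A hypothetical sketch of the three explanation styles described above; the
# actual prompts and state values from the experiment are not reproduced here.
def rationale_style(action, reason):
    return f"I will {action} because {reason}."   # the "why": plain-English rationale

def action_only_style(action):
    return f"I will {action}."                    # action stated, no justification

def numeric_style(state_values):
    return ", ".join(f"{v:.2f}" for v in state_values)  # raw state, no language

print(rationale_style("move right", "the lava blocks the path ahead"))
print(action_only_style("move right"))
print(numeric_style([0.12, 3.40, -1.00]))
```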

Participants in the study — 96 college students enrolled in computer science and AI courses and 53 Amazon Mechanical Turk users — were asked to imagine themselves as the space explorers. Stranded on another planet, they had to remain inside a protective dome, with their only means of survival the food supplies at a remote depot that the robots had to retrieve.

The researchers found that participants in both groups tended to place “unwarranted” faith in numbers. For example, the AI group participants often ascribed more value to mathematical representations than was justified, while the non-AI group participants believed the numbers signaled intelligence — even if they couldn’t understand the meaning. In other words, even among the AI group, people associated the mere presence of statistics with logic, intelligence, and rationality.

“The AI group overly ascribed diagnostic value in [the robot’s] numbers even when their meaning was unclear,” the researchers concluded in the study. “Such perceptions point to how the modality of expression … impacts perceptions of explanations from AI agents, where we see projections of normative notions (e.g., objective versus subjective) in judging intelligence.”

Both groups preferred the robots that communicated with language, particularly the robot that gave rationales for its actions. But this more human-like communication style caused participants to attribute emotional intelligence to the robots, even in the absence of evidence that the robots were making the right decisions.

The takeaway is that the power of AI explanations lies as much in the eye of the beholder as in the mind of the designer. People’s explanatory intent and common heuristics matter just as much as the designer’s intended goal, according to the researchers. As a result, people may find explanatory value where designers never intended it.

“Contextually understanding the misalignment between designer goals and user intent is key to fostering effective human-AI collaboration, especially in explainable AI systems,” the coauthors wrote. “As people learn specific ways of doing, it also changes their own ways of knowing — in fact, as we argue in this paper, people’s AI background impacts their perception of what it means to explain something and how … The ‘ability’ in explain-ability depends on who is looking at it and emerges from the meaning-making process between humans and explanations.”

Importance of explanations

The results are salient in light of efforts by the European Commission’s High-level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building “trustworthy AI.” Explainability continues to present major hurdles for companies adopting AI. According to FICO, 65% of employees can’t explain how AI model decisions or predictions are made.

Absent carefully designed explainability tools, AI systems can inflict real-world harm. For example, a Stanford study suggests that clinicians sometimes misuse AI-powered medical devices during diagnosis, leading to outcomes that differ from what would be expected. And a more recent report from The Markup uncovered biases in U.S. mortgage-approval algorithms that led lenders to turn down applicants of color more often than white applicants.

The coauthors advocate a “sociotechnically informed” approach to AI explainability, one that incorporates socio-organizational context into the decision-making process. They also suggest investigating ways to keep the perceptual differences they observed from being exploited to manipulate user trust, along with educational efforts to ensure that experts take a more critical view of AI systems.

“Explainability of AI systems is crucial to instill appropriate user trust and facilitate recourse. Disparities in AI backgrounds have the potential to exacerbate the challenges arising from the differences between how designers imagine users will appropriate explanations versus how users actually interpret and use them,” the researchers wrote.



Author: Kyle Wiggers
Source: VentureBeat
