This week, the National Institute of Standards and Technology (NIST), a U.S. Department of Commerce agency that promotes measurement science, proposed a method for evaluating user trust in AI systems. The draft document, which is open for public comment until July 2021, aims to stimulate a discussion about transparency and accountability around AI.
The draft proposal comes after a key European Union (EU) lawmaker, Patrick Breyer, said that rules targeting Facebook, Google, and other large online platforms should include privacy rights as well as users’ right to anonymity. In April, the European Commission, the executive branch of the EU, announced regulations on the use of AI, including strict safeguards on recruitment, critical infrastructure, credit scoring, migration, and law enforcement algorithms. And on Tuesday, Amazon said it would extend until further notice a moratorium it imposed last year on police use of its facial recognition software.
Brian Stanton, a cognitive psychologist, coauthored the NIST publication with computer science researcher Ted Jensen. They base its premise largely on past research into trust, beginning with the role trust has played in human history and how it has shaped our thought processes.
“Many factors get incorporated into our decisions about trust,” Stanton said in a statement. “It’s how the user thinks and feels about the system and perceives the risks involved in using it.”
Stanton and Jensen gradually turn to the unique trust challenges associated with AI, which is rapidly taking on tasks that go beyond human capacity. They posit a list of nine factors that contribute to a person’s potential trust in an AI system: accuracy, reliability, resiliency, objectivity, security, explainability, safety, accountability, and privacy. A person may weigh these factors differently depending on the task and the risk involved in trusting an AI’s decision. For example, a music selection algorithm might not need to be particularly precise, but it’d be a different story for an AI that was only 90% accurate in making a cancer diagnosis.
In the draft, Stanton and Jensen argue that if an AI system has a high level of technical trustworthiness, and if the values of its trustworthiness characteristics are perceived to be good enough for the context of use (especially the risk inherent in that context), then the likelihood of user trust increases. It’s this trust, based on user perceptions, that will be necessary for any human-AI collaboration, Stanton says.
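One way to read that argument is as a simple thresholding rule: perceived trustworthiness aggregates the factor ratings, and trust becomes likely only when that perception clears a bar set by the risk of the task. The Python sketch below is purely illustrative; the factor names come from the NIST draft, but the weighted-sum math, example weights, ratings, and thresholds are assumptions of this sketch, not the draft’s actual model.

```python
# Minimal sketch of the kind of model the draft describes: a user's perceived
# trustworthiness score is a weighted combination of per-factor ratings, and
# trust is only likely when that score clears a threshold set by the risk of
# the task. The factor names come from the NIST draft; the weighted-sum math,
# example weights, ratings, and thresholds are illustrative assumptions.

FACTORS = [
    "accuracy", "reliability", "resiliency", "objectivity", "security",
    "explainability", "safety", "accountability", "privacy",
]

def perceived_trustworthiness(ratings: dict, weights: dict) -> float:
    """Weighted average of per-factor ratings, each rating in [0, 1]."""
    total_weight = sum(weights.get(f, 0.0) for f in FACTORS)
    score = sum(ratings.get(f, 0.0) * weights.get(f, 0.0) for f in FACTORS)
    return score / total_weight if total_weight else 0.0

def likely_to_trust(ratings: dict, weights: dict, risk_threshold: float) -> bool:
    """Trust becomes likely only when perceived trustworthiness clears the
    context-specific risk threshold (higher risk, higher bar)."""
    return perceived_trustworthiness(ratings, weights) >= risk_threshold

# A music recommender tolerates modest accuracy; a diagnostic aid does not.
ratings = {"accuracy": 0.90, "explainability": 0.60, "privacy": 0.80}
casual_weights = {"accuracy": 1.0, "privacy": 0.5}
clinical_weights = {"accuracy": 3.0, "explainability": 2.0, "safety": 2.0}

print(likely_to_trust(ratings, casual_weights, risk_threshold=0.50))    # True
print(likely_to_trust(ratings, clinical_weights, risk_threshold=0.95))  # False
```

The same hypothetical ratings clear the low bar of a casual recommendation task but fall short of the higher bar a clinical context would demand, mirroring the music-versus-diagnosis contrast above.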
“AI systems can be trained to ‘discover’ patterns in large amounts of data that are difficult for the human brain to comprehend. A system might continuously monitor a very large number of video feeds and, for example, spot a child falling into a harbor in one of them,” Stanton added. “No longer are we asking automation to do our work. We are asking it to do work that humans can’t do alone.”
Challenges ahead
The “black box” nature of AI remains a barrier to overcome, however, in light of research finding that people are naturally inclined to trust such systems even when they are designed to mislead. In 2019, Himabindu Lakkaraju, a computer scientist at the Harvard Business School, and University of Pennsylvania researcher Osbert Bastani built an AI system that generated deliberately misleading explanations. The experiment confirmed their hypothesis, showing how easily humans can be manipulated by opaque AI algorithms.
“We find that user trust can be manipulated by high-fidelity, misleading explanations. These misleading explanations exist since prohibited features (e.g., race or gender) can be reconstructed based on correlated features (e.g., zip code). Thus, adversarial actors can fool end users into trusting an untrustworthy black box [system] — e.g., one that employs prohibited attributes to make decisions,” the coauthors wrote.
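To make that proxy effect concrete, here is a minimal, self-contained sketch on synthetic data in which a prohibited group attribute strongly determines zip code, and the attribute is then recovered from zip code alone. The population size, the 90% correlation, and the majority-vote rule are assumptions of this illustration, not the authors’ method.

```python
# Sketch of the proxy effect the coauthors describe: a model that never sees
# a prohibited attribute can still act on it, because a correlated feature
# such as zip code stands in for it. The data is synthetic and the 90%
# correlation strength is an arbitrary choice for illustration.
import random
from collections import Counter

random.seed(0)

# Synthetic population: group membership (the prohibited attribute) strongly
# predicts which of two zip codes a person lives in.
people = []
for _ in range(10_000):
    group = random.randint(0, 1)
    in_group_zip = random.random() < 0.9
    zip_code = "10001" if in_group_zip == bool(group) else "10002"
    people.append((zip_code, group))

# "Reconstruct" the prohibited attribute with a trivial rule: predict the
# majority group within each zip code.
majority = {}
for z in ("10001", "10002"):
    counts = Counter(group for zip_code, group in people if zip_code == z)
    majority[z] = counts.most_common(1)[0][0]

recovered = sum(majority[z] == g for z, g in people) / len(people)
print(f"Prohibited attribute recovered from zip code alone: {recovered:.0%}")  # ~90%
```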
Even when trust in an AI system is justified, the outcome isn’t necessarily desirable. In an experiment at IBM Research, a team assessed how showing people an AI prediction accompanied by a confidence score affected their ability to predict a person’s annual income. The study found that the scores increased trust but didn’t improve decision-making, which may hinge on whether a person can bring enough of their own knowledge to bear to compensate for an AI system’s errors.
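For context on the setup, the sketch below shows the kind of decision aid such studies present to participants: a model’s prediction on an income task alongside a confidence score. The toy features, labels, model, and message wording are illustrative assumptions, not the IBM team’s actual data or interface.

```python
# Sketch of the kind of decision aid such studies show participants: an AI
# prediction for an income task plus a confidence score. The toy features,
# labels, model, and message wording are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for census-style features: [age, years of education, hours/week]
X = np.array([[25, 12, 40], [48, 16, 50], [33, 14, 45], [52, 18, 60],
              [19, 10, 20], [41, 16, 55], [29, 12, 35], [60, 20, 40]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = income above $50K

model = LogisticRegression(max_iter=1000).fit(X, y)

def decision_aid(features):
    """Return the message a participant might see: prediction plus confidence."""
    proba = model.predict_proba([features])[0]
    label = "above $50K" if proba[1] >= 0.5 else "at or below $50K"
    return f"AI prediction: income {label} (confidence {max(proba):.0%})"

print(decision_aid([45, 16, 50]))
```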
Stanton stresses that the ideas in the NIST publication are based on background research and would benefit from public scrutiny. The body of literature highlighting the dangers of misplaced trust in AI suggests that scrutiny is needed: everything from hiring and loan applications to the criminal justice system can be affected by biased but seemingly trustworthy algorithms. Solving AI’s “trust” problem will require addressing those harms head-on, as well as the systemic problems that stem from a lack of diversity in AI as a whole.
“We are proposing a model for AI user trust,” he said. “It is all based on others’ research and the fundamental principles of cognition. For that reason, we would like feedback about work the scientific community might pursue to provide experimental validation of these ideas.”
For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Kyle Wiggers
AI Staff Writer
VentureBeat