The quest for explainable AI

Artificial intelligence (AI) is highly effective at parsing enormous volumes of data and making decisions based on information that is beyond the limits of human comprehension. But it suffers from one serious flaw: it cannot explain how it arrives at the conclusions it presents, at least not in a way that most people can understand.

This “black box” characteristic is starting to throw some serious kinks in the applications that AI is empowering, particularly in medical, financial and other critical fields, where the “why” of any particular action is often more important than the “what.”

A peek under the hood

This is leading to a new field of study called explainable AI (XAI), which seeks to give AI algorithms enough transparency that users outside the realm of data scientists and programmers can double-check their AI’s logic and confirm it is operating within acceptable bounds of reasoning, bias and other factors.

As tech writer Scott Clark noted on CMSWire recently, explainable AI provides the insight into the decision-making process needed for users to understand why a model is behaving the way it is. In this way, organizations can identify flaws in their data models, which ultimately leads to enhanced predictive capabilities and deeper insight into what works and what doesn’t in AI-powered applications.
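
To make that concrete, a common entry point is feature attribution: measuring which inputs a model actually leaned on when making its predictions. The sketch below is a minimal illustration using scikit-learn’s permutation importance; the dataset, the model choice and the top-5 cutoff are placeholder assumptions, not a prescription.

```python
# A minimal sketch of one common explainability technique: permutation
# feature importance. The dataset, model and top-5 cutoff are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, drop in ranked[:5]:
    print(f"{name}: {drop:.3f}")
```

Rankings like these are only a first step toward explanation, but they give non-specialists something tangible to question when a model’s behavior looks off.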

The key element in XAI is trust. Without it, doubt will shadow every action or decision an AI model generates, and that raises the risk of deploying it into production environments where AI is supposed to bring true value to the enterprise.

According to the National Institute of Standards and Technology, explainable AI should be built around four principles:

  • Explanation – the ability to provide evidence, support or reasoning for each output;
  • Meaningfulness – the ability to convey explanations in ways that users can understand;
  • Accuracy – the ability to explain not just why a decision was made, but how it was made; and
  • Knowledge Limits – the ability to determine when its conclusions are not reliable because they fall beyond the limits of its design.

While these principles can be used to guide the development and training of intelligent algorithms, they are also intended to guide human understanding of what explainable means when applied to what is essentially a mathematical construct.
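
The knowledge-limits principle in particular lends itself to a simple illustration: a system can decline to answer rather than return a conclusion it has no business standing behind. The sketch below assumes a hypothetical scikit-learn-style classifier and an arbitrary 0.8 confidence threshold; neither comes from the NIST text.

```python
import numpy as np

# Illustrative sketch of the "knowledge limits" principle: abstain instead of
# answering when confidence is too low. The classifier interface and the 0.8
# threshold are assumptions for illustration only.
def predict_or_abstain(model, x, threshold=0.8):
    proba = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
    best = int(np.argmax(proba))
    if proba[best] < threshold:
        return None, float(proba[best])  # outside the model's reliable range
    return best, float(proba[best])

# Hypothetical usage with any scikit-learn-style classifier:
# label, confidence = predict_or_abstain(model, X_test.iloc[0])
```

A production system would ground the threshold in calibration or out-of-distribution checks, but the idea is the same: the model should know, and say, when it does not know.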

Buyer beware of explainable AI

The key problem with XAI currently, according to Fortune’s Jeremy Kahn, is that it has already become a marketing buzzword used to push platforms out the door rather than a true product designation developed under any reasonable set of standards.

By the time buyers realize that “explainable” may simply mean a raft of gibberish that may or may not have anything to do with the task at hand, the system has already been implemented, and switching is costly and time-consuming. Ongoing studies are also finding that many of the leading explainability techniques are too simplistic, unable to elucidate why a given dataset was deemed important or unimportant to the algorithm’s output.

This is partly why explainable AI is not enough, says Anthony Habayeb, CEO of AI governance developer Monitaur. What’s really needed is understandable AI. The difference lies in the broader context that understanding has over explanation. As any teacher knows, you can explain something to your students, but that doesn’t mean they will understand it, especially if they lack an earlier foundation of knowledge required for comprehension. For AI, this means users should have transparency not only into how the model is functioning now, but into how and why it was selected for this particular task, what data went into the model and why, what issues arose during development and training, and a host of other questions.
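
One way to capture that broader context is to record structured provenance alongside each model, noting why it was selected, what data went into it and what issues surfaced during development. The sketch below is a hypothetical, model-card-style record; every field name and value is an illustrative assumption rather than an established schema.

```python
from dataclasses import dataclass, field, asdict
import json

# A hedged sketch of a model provenance record ("model card"-style metadata).
# Every field name and value below is an illustrative assumption, not a standard.
@dataclass
class ModelRecord:
    name: str
    selection_rationale: str  # why this model type was chosen for the task
    training_data: str        # what data went into it, and why
    known_issues: list = field(default_factory=list)  # problems found in development

record = ModelRecord(
    name="credit-risk-rf-v3",
    selection_rationale="Tree ensemble chosen for tabular data and built-in feature importances.",
    training_data="2019-2021 loan applications, rebalanced to correct a regional skew.",
    known_issues=["Underperforms on applicants with thin credit files."],
)

print(json.dumps(asdict(record), indent=2))
```

However it is stored, the point is that the record travels with the model, so the “why” behind it remains answerable long after the original developers have moved on.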

At its core, explainability is a data management problem. Developing the tools and techniques to examine AI processes at a granular enough level to fully understand them, and to do so in a reasonable timeframe, will not be easy or cheap. And it will likely require an equal effort on the part of the knowledge workforce to engage AI in a way that allows it to understand the often disjointed, chaotic logic of the human brain.

After all, it takes two to form a dialogue.

Author: Arthur Cole
Source: Venturebeat
