
What explainable AI really means and what it means for your business (VB Live)

Presented by Dataiku


White box AI is getting heaps of attention, in part because it brings business value to customers and companies alike. Learn why businesses are moving away from black box systems to more explainable AI — and what it really means to be explainable — when you catch up on this VB Live event.

Access free on demand right here.


White box and black box AI are getting a lot of attention in the media now, especially in the wake of cases late last year that shed light on biases in decision-making algorithms used by finance and health care companies.

But what is black box AI? It’s far more complex a subject than it seems on the surface.

“At the highest level, black box AI is a set of algorithms that produce decisions without a clear ‘why’ behind them,” says Triveni Gandhi, data scientist at Dataiku. “It outputs some prediction, but it doesn’t tell you how it got there — it just says, trust me.”

But to an end user, everything seems like a black box, says Rumman Chowdhury, global lead for responsible AI at Accenture Applied Intelligence. That’s where discussions around white box, or explainable, AI get interesting.

“When I think about how to ‘unpack’ a black box, I think about how to use the output of my model in a way that’s understandable to the person at the end of it, who is not always a data scientist,” says Chowdhury. “How to take that output and make it something understandable to, say, a business leader, someone in the C-suite, or someone calling customer service to understand why their credit line was not approved by a certain credit card.”

Or it can mean lawyers, who need to understand a model’s output in a form that’s useful to them, so they can address potential bias and liability from a legal perspective.
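Neither speaker names a specific tool, but one common way to start “unpacking” a black box is model-agnostic feature attribution. The sketch below uses scikit-learn’s permutation importance on a hypothetical credit-approval model; the feature names and synthetic data are illustrative assumptions, not a real lender’s setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_len", "recent_inquiries"]
X = rng.normal(size=(1000, len(features)))
# Hypothetical approval rule with noise, just to have something to fit.
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)  # the "black box"

# Shuffle one feature at a time and measure how much accuracy drops:
# a model-agnostic signal of which inputs actually drive the decision.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"{features[i]}: importance drop {result.importances_mean[i]:.3f}")
```

Ranked importances like these are a starting point for the plain-language explanations Chowdhury describes: a customer service agent can point to the factors that weighed most heavily, even though the model itself stays complex.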

And if you can’t explain models to, and collaborate with, all the different personas in a business, or all the stakeholders involved, you’re just operating in silos, which only replicates the problems black box AI can cause, including real-world issues like bias. Facial recognition is ground zero for that debate, because we now know that many of these models perform less well on people who are not white cis males, for instance.

Historically, the public conversation about explainability stems from the European Union’s GDPR, says Chowdhury, which explicitly requires notions of explainability and transparency.

“Now, what they don’t do is tell us what that actually means,” she says. “Everybody is struggling with what truly is explainability.”

These are not new issues. The data privacy folks have been talking about things like informed consent for a long time, which is linked to the notion of explainability, Chowdhury adds. How can you create an “explanation” that’s understandable by your entire consumer base, which could be millions of people with different levels of education and background, who then have to give approval to a privacy agreement where they’re offering sensitive data? And where do governance and accountability come in?

What’s important, ultimately, is understanding the level of impact and risk that comes with a black box model, and therefore how pressing the need for explainability is.

“If it’s a model, for example, deciding if someone gets a loan, most likely you don’t want that to be a black box model,” she says. “If it’s something that’s recommending shoes on a website, maybe it’s okay if it’s a black box model.”

There will be cases where you need to use a black box model, where a neural net is the most effective model for the problem you’re trying to solve even though it’s not possible to fully explain its solution.

“Instead of trying to dumb down our models, govern the optimizations they’re using,” Gandhi says. “Is it optimizing for accuracy in a way that produces bias? Then we need to change the threshold, which can mitigate the problem without sacrificing the overall complexity of the model.”
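As a rough illustration of the threshold idea Gandhi describes, the sketch below post-processes a model’s scores with per-group cutoffs so approval rates match a target, leaving the model itself untouched. The group labels, score distributions, and 40% target rate are all illustrative assumptions.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Pick a per-group score cutoff so each group is approved at
    roughly the same target rate (one simple notion of fairness)."""
    thresholds = {}
    for g in np.unique(groups):
        g_scores = scores[groups == g]
        # The (1 - target_rate) quantile approves ~target_rate of the group.
        thresholds[g] = np.quantile(g_scores, 1 - target_rate)
    return thresholds

# Hypothetical model scores for two groups; group "b" scores lower on
# average, so a single global cutoff would approve it far less often.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.beta(5, 2, 500), rng.beta(2, 5, 500)])
groups = np.array(["a"] * 500 + ["b"] * 500)

cutoffs = group_thresholds(scores, groups, target_rate=0.4)
decisions = scores >= np.array([cutoffs[g] for g in groups])
for g in ("a", "b"):
    print(g, "approval rate:", round(decisions[groups == g].mean(), 2))
```

The point of the sketch is that the mitigation happens at the decision threshold, after training, so the underlying model keeps its full complexity.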

Black box AI is a powerful tool, and as a result it’s easy to use in ways that are suboptimal, says David Fagnan, director of applied science at Zillow Offers. And his team realized that customers didn’t trust black box machines making high-stakes home-buying decisions.

“In order to decide whether black box is always a bad thing, you first need to ask yourself, how high are the stakes?” he explains. “Where we landed was the development of a human plus machine combined system.”

Since then, the team has built several iterations of assistive tools, including gray box and even some white box tooling, to support this combined human-and-machine decision-making, because in reality, black box to white box is a spectrum.

“You want to think about, what is your definition of explainability?” he says. “Who are you trying to explain your model and your predictions to? Is it a consumer? Is it your own scientists, your own model developers, or is it non-technical stakeholders?”

With that definition, you can quantify how explainable a model is. You might then set a threshold and say: above this, we consider the model black box. At the most explainable end, very aligned with human decision-making and easy to interact with, is the white box category. Gray box is the in-between segment.
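As a toy illustration of that bucketing, here is what mapping a quantified explainability score onto the spectrum might look like; the 0-to-1 scale and the cutoff values are illustrative assumptions, not Zillow’s actual metric.

```python
def explainability_regime(score, white_cutoff=0.8, black_cutoff=0.3):
    """Map an explainability score in [0, 1] to a regime on the spectrum."""
    if score >= white_cutoff:
        return "white box"  # closely aligned with human decision-making
    if score <= black_cutoff:
        return "black box"  # opaque; decisions are hard to justify
    return "gray box"       # the in-between segment

# Hypothetical scores for three model families, purely for illustration.
for name, score in [("linear model", 0.9), ("boosted trees", 0.5),
                    ("deep net", 0.1)]:
    print(name, "->", explainability_regime(score))
```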

“But don’t fall into the trap of getting complacent by thinking we’ve solved a problem because we’ve quantified it,” warns Chowdhury. “The world changes. People change. Not everyone is doing everything with the best of intentions. There are malicious actors out there. Things can slip by if you’re not context-specific and responsive to the way the world changes.”

For a deeper dive into what explainable AI looks like, why it’s an essential metric for companies to consider, and valuable advice on moving toward more explainable AI, catch up now on this VB Live event!


Don’t miss out!

Access for free on demand here.


Key Takeaways:

  • How to make the data science process collaborative across the organization
  • How to establish trust from the data all the way through the model
  • How to move your business toward data democratization

Speakers: 

  • Triveni Gandhi, Data Scientist, Dataiku
  • David Fagnan, Director, Applied Science, Zillow Offers
  • Rumman Chowdhury, Global Lead for Responsible AI, Accenture Applied Intelligence
  • Seth Colaner, AI Editor, VentureBeat


Author: VB Staff.
Source: VentureBeat
