
Why responsible AI needs to disrupt your org from the bottom up

Presented by Dataiku


White-box AI is getting heaps of attention. But what does it mean in practice? And how can businesses start moving away from black-box systems toward more explainable AI? Join this VB Live event to learn why white-box AI brings business value, and why it’s a necessary evolution.

Register here for free.


Black-box AI has been getting attention in the media for producing undesirable, even unethical results. But the conversation is much more complex, says Rumman Chowdhury, managing director at Accenture AI. When technologists or data scientists talk about black-box algorithms, they’re referring to a class of algorithms for which we don’t always understand how an output is produced; in other words, non-understandable systems.

“Just because something is a black box algorithm, it doesn’t necessarily mean it’s irresponsible,” Chowdhury says. “There are all sorts of interesting models one can apply to make output explainable — for example, the human brain.”

This is why black-box AI systems actually have an important relationship with responsible AI, she explains. Responsible AI practices can be used to understand and unpack a black-box system, even when you can’t see directly inside the algorithm.
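
As a concrete illustration of that kind of unpacking, model-agnostic probes can measure which inputs drive a model’s behavior without opening the model up. Below is a minimal sketch assuming a scikit-learn workflow; the dataset, model, and parameters are illustrative stand-ins, not a method endorsed by Chowdhury or Accenture.

    # A hedged sketch: probing a black-box model with permutation importance.
    # Everything here (dataset, model choice, parameters) is illustrative.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque ensemble: accurate, but hard to reason about directly.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Permutation importance asks: how much does held-out accuracy drop when
    # one feature is shuffled? The model is treated purely as a black box.
    result = permutation_importance(
        model, X_test, y_test, n_repeats=10, random_state=0
    )
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f} "
              f"+/- {result.importances_std[i]:.3f}")

Because the probe only ever calls the model’s prediction function, the same technique applies to any black box, which is the point: responsibility practices can wrap a model even when you can’t see inside it.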

“When people on the receiving end of a model’s output talk about explainability, what they actually want to do is understand,” says Chowdhury. “Understanding is about explaining the output in a way that is helpful at a level that will be beneficial to the user.”

For instance, consider the Apple Card controversy, in which the algorithm, accused of gender bias, offered a woman a far lower credit limit than her husband. When the couple asked why, the customer service agent could only say that they didn’t know; the algorithm simply said so. So it’s not just about the data scientist understanding. It’s about a customer service representative being able to explain to a customer why the algorithm arrived at an output and what that output means for them, rather than a high-level “How do we unpack a neural network?” discussion, explains Chowdhury.

“Properly done, understandable AI, explained well, is about enabling people to make the right decisions and the best decisions for themselves,” she says.
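
One hedged way to put that into practice is to give customer-facing teams a readable stand-in for the model. The sketch below reuses model and the data splits from the previous example and fits a shallow surrogate decision tree to the black box’s own predictions; the depth limit and agreement check are illustrative choices, not a prescribed method.

    # A hedged sketch: distilling the black box into a small surrogate tree
    # trained on the model's own predictions, so its rules mimic the model.
    from sklearn.tree import DecisionTreeClassifier, export_text

    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, model.predict(X_train))

    # A short set of if/then rules an agent could walk a customer through.
    print(export_text(surrogate, feature_names=list(X_train.columns)))

    # Check how faithfully the surrogate tracks the original model.
    agreement = (surrogate.predict(X_test) == model.predict(X_test)).mean()
    print(f"surrogate agrees with the black box on {agreement:.0%} of test rows")

If the surrogate agrees with the black box on most held-out rows, its short rule list gives a representative something concrete to explain, which is much closer to the understanding Chowdhury describes than a lecture on neural networks.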

To realize the benefits of innovation and navigate potential negative consequences, the most important thing companies must do is establish cross-functional governance. Responsible thinking should be infused at every step of the process, from the ideation stage, when you’re first thinking about a project, all the way through development, deployment, and use.

“When we responsibly develop and implement AI, we’re thinking about not just what we deliver to our clients, but what we do for ourselves,” says Chowdhury. “We recognize there isn’t a one-size-fits-all approach.”

The biggest challenge of implementing responsible or ethical AI is usually that it seems like a very broad and daunting undertaking. From the start, there’s the worry about media attention. Then the more complicated questions arise: What does it actually mean to be responsible or ethical? Does it mean legal compliance, a change in company culture, or something else entirely?

When establishing ethical AI, it’s helpful to break it down into four pillars: technical, operational, organizational, and reputational.

Companies most often understand the technical component: How do I unpack the black box? What is the algorithm actually doing?

The operational pillar is perhaps the most essential; it governs the overall structure of your initiative. It’s about creating the right kind of company structure and processes.

That then bleeds into the third pillar, organizational, which covers how you hire the right people and how you create cross-functional governance. Finally, the reputational pillar requires being thoughtful and strategic about how you talk about your AI systems and how you earn customers’ trust so they are willing to share their information and engage with your AI.

“The need for explainable, responsible AI changes the field of data science in a very important way,” Chowdhury says. “In order to create models that are understandable and explainable, data scientists and client teams are going to have to engage very deeply. Customer-facing people have to be involved in the early stage of development. I think data science as a field is going to grow and evolve to needing people who specialize in algorithmic critique. I’m quite excited to see that happen.”

To learn more about how companies can create a culture of responsible, ethical AI, the organizational and technical challenges of unpacking a black box, and how to launch your own initiative, don’t miss this VB Live event.


Don’t miss out!

Register here for free.


Key takeaways:

  • How to make the data science process collaborative across the organization
  • How to establish trust from the data all the way through the model
  • How to move your business toward data democratization

Speakers: 

  • Rumman Chowdhury, Managing Director, Accenture AI
  • David Fagnan, Director, Applied Science, Zillow Offers
  • Triveni Gandhi, Data Scientist, Dataiku
  • Seth Colaner, AI Editor, VentureBeat


Author: VB Staff
Source: VentureBeat
