AI & RoboticsNews

3 keys to moving toward white-box, explainable AI (VB Live)

Presented by Dataiku


White-box AI boosts transparency, creates more collaborative data science, and improves trust within businesses and with customers — and it’s getting a lot of attention. Join this VB Live event to learn how, and why, businesses need to move away from black-box systems to more explainable AI.

Register here for free.


Black-box AI went viral recently when David Heinemeier Hansson, Basecamp’s CTO, vented on Twitter about the major discrepancy between the credit limits he and his wife were given on their Apple Cards. He was offered a credit limit 20 times larger than hers, despite the fact that they file their tax returns jointly and her credit rating was better. When the pair called, the company’s representatives weren’t able to explain the discrepancy, saying only that an AI algorithm decides every applicant’s limit.

The company put out a statement to the effect that it had full faith in its system. Unfortunately, the story gained more traction when Apple co-founder Steve Wozniak tweeted that he and his wife had had a similar experience, despite also filing taxes jointly. More unfortunately still, a Wall Street regulator opened an investigation into Goldman Sachs, the card’s issuer, to determine whether New York law on gender discrimination had been violated.

It’s not the first time bias in a black-box AI algorithm has gotten a company in trouble (see the Amazon hiring algorithm that quickly began to discriminate against women), but it should be the last. With major cases like these grabbing headlines, consumers are getting increasingly antsy about simply handing over the reins to an algorithm that can’t be examined or questioned.

Companies are starting to look toward white-box, or explainable, AI. It’s not just to avoid headlines; it’s better for business in a variety of fundamental ways. Algorithms become biased when bad data is fed into them, but the impact often isn’t felt until far down the line, when it’s too late to fix easily. Fair, accountable, and transparent (FAT) machine learning keeps small errors, including those introduced by biased data, from being magnified. Understanding the fundamental workings of an algorithm simply makes it better and stronger from the start.
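To make the contrast concrete, here is a minimal, purely illustrative sketch of what "white-box" means in practice: a credit-limit model whose decision can be decomposed feature by feature, so a representative could explain exactly why two applicants received different limits. The feature names, weights, and base limit are all hypothetical, not taken from any real lender’s model.

```python
# Illustrative white-box scoring sketch. All names and numbers are hypothetical.
BASE_LIMIT = 5000.0

# Human-readable weights: dollars of credit limit added per unit of each feature.
WEIGHTS = {
    "credit_score": 10.0,     # per point of credit score
    "annual_income_k": 25.0,  # per $1,000 of annual income
}

def credit_limit(applicant: dict) -> tuple[float, dict]:
    """Return the limit and a per-feature breakdown of how it was reached."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    return BASE_LIMIT + sum(contributions.values()), contributions

limit, breakdown = credit_limit({"credit_score": 780, "annual_income_k": 120})
print(f"limit = ${limit:,.0f}")
for feature, dollars in breakdown.items():
    print(f"  {feature}: +${dollars:,.0f}")
```

Because every contribution is explicit, an auditor can check each weight for bias directly, something a deep, opaque model does not allow without extra explainability tooling.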

Development within your company also becomes far more collaborative when pieces of a project can be passed across department lines, discussed, broken down, and recalibrated with full insight and transparency. Silos have long been singled out as bad for business in a wide variety of ways, and silos around machine learning and AI projects have consequences that are just as big, if not bigger, so breaking them down brings even greater benefits.

It future-proofs your tech, too: rather than unspooling all the work you’ve done when an error surfaces, you’re able to pinpoint the issue and repair and improve the system from the inside out, again creating a stronger, more trustworthy product.

Climbing out of the black box does take some effort and company buy-in, however. Three components are vital to successfully bringing your AI projects into the light: making data science a collaborative effort across all your lines of business, not just IT; trustworthy data and tools that can be cleanly audited; and the democratization of data throughout your company, with silos broken down and all employees educated in how machine learning and AI work and why they matter.

White-box AI has demonstrable business value, not just for your company and your bottom line, but as an effort to build toward a more responsible AI in every sector of business, commerce, and society.

To learn more about why white-box AI is an essential evolution of artificial intelligence, the risks that white-box AI mitigates, how to take your first steps toward explainable AI, and more, don’t miss this VB Live event.


Don’t miss out!

Register here for free.


Key takeaways:

  • How to make the data science process collaborative across the organization
  • How to establish trust from the data all the way through the model
  • How to move your business toward data democratization

Speakers:

  • Triveni Gandhi, Data Scientist, Dataiku
  • Seth Colaner, AI Editor, VentureBeat


Author: VB Staff
Source: VentureBeat

