
From black box to white box: Reclaiming human power in AI

Presented by Dataiku


It’s hard to imagine what life was like before the peak of AI hype in which we currently find ourselves. But it was just a few years ago, in 2011, that Apple gave the world the first integrated version of Siri on the iPhone 4S, which people used to impress their friends by asking it banal questions. Google was just beginning to test its self-driving cars in Nevada. And the McKinsey Global Institute had recently released “Big data: The next frontier for innovation, competition, and productivity.”

On the starting blocks of the race to release the next big AI-powered thing, no one was talking about explainable AI. Doing it first, even if no one truly understood how it worked, was paramount. That McKinsey Global Institute report offered a bit of foreshadowing, estimating that businesses in nearly all sectors of the U.S. economy had at least an average of 200 terabytes of stored data. Back then, some companies were even doing something with that data, but those applications were mostly behind the scenes or extremely specialized. They were projects, largely siloed off from core functions, that those new people called data scientists might worry about, but certainly not the core of the business.

In the years that followed, things took off. By late 2012, data scientist, as most people are sick of hearing by now, had been dubbed the sexiest job of the 21st century by Harvard Business Review, and data teams started working feverishly with the masses of data that companies were storing. In fact, the roots of today’s AI movement crept into our lives with little resistance, despite (or perhaps because of) the fact that, in the grand scheme of things, very few people actually understood the fundamentals of data science or machine learning.

Today, people are refused or given loans, accepted or denied entrance to universities, offered a lower or higher price on car insurance, and more, all at the hands of AI systems that usually offer no explanations. In many cases, the humans who work for those companies can’t explain the decisions either. This is black box AI, and consumers increasingly, and often unknowingly, find themselves at its mercy. The issue has garnered so much attention that Gartner put explainable AI on its list of the Top 10 Data and Analytics Technology Trends for 2019.

To be clear, “black box” is not synonymous with “malicious.” There are plenty of black box systems doing good things, like analyzing medical imagery to detect cancers or other conditions. The point is that while these systems may be more accurate from a technological perspective, models whose outcomes humans cannot explain, no matter what they’re trying to predict, can be harmful to consumers and to businesses. Harm aside, people simply have a hard time trusting what cannot be explained. The healthcare example is instructive here: such systems often have high technical accuracy, yet people still don’t trust the machine-generated results.

Fortunately, the AI paradigm is shifting in two ways. One is on the consumer side, where increased focus and scrutiny around AI regulation are moving privacy, ethics, trust, and interpretability to the forefront. Consumers are starting to hold companies responsible for the AI-based decisions they make, and that’s a good thing.

The other shift is in the approach from businesses, which are being forced to change their strategies partly because of consumer preference and increased legislation, but also because scaling AI efforts in a sustainable way (i.e., in a way that will continue to provide value into the future without presenting risks) fundamentally requires a white box approach.

In other words, companies are starting to take note that turning AI into a business asset happens with large-scale, transparent adoption across departments and use cases, not by hiring data scientists to churn out the most cutting-edge models and throwing those models over the proverbial wall for the business to use.

Power in AI is no longer about who can make the most complex or accurate black box model with the data at hand; it’s about creating white box models that serve business needs with an acceptable level of accuracy, and that produce results practitioners, executives, and customers can explain and understand. From there, it comes down to educating the people who interact with these models to do what humans do best and what AI systems cannot: judge whether the outputs make sense in context and whether the model is working as intended, ideally in a fair and unbiased way.

After all, it’s still people who make the decisions about building models; they choose the data and which algorithm to apply. Humans (thankfully) aren’t machines, but that also means they can introduce their own biases, which ultimately affect how a model behaves in real business scenarios.

From a practical standpoint, explainable AI happens at several levels. It all starts with building the model: some algorithms are inherently more interpretable than others, and explainability is an increasingly active topic of machine learning research. But ultimately, a model that a data scientist or machine learning researcher can explain probably can’t be easily explained by a customer service representative (CSR). That’s where the idea of data democratization comes into play.
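To make the “inherently more interpretable” point concrete, consider a linear model, where the learned weights themselves serve as a global explanation. The following sketch is purely illustrative; it assumes scikit-learn and a tiny, made-up car-insurance risk dataset, and it is not meant to represent any particular production system:

```python
# A minimal sketch of the interpretable end of the spectrum, assuming
# scikit-learn and a small, entirely hypothetical insurance dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical features for a car-insurance risk model.
feature_names = ["driver_age", "years_licensed", "prior_claims", "annual_mileage"]
X = np.array([
    [22,  3, 1, 18000],
    [45, 25, 0,  9000],
    [31, 10, 2, 14000],
    [58, 35, 0,  6000],
])
y = np.array([1, 0, 1, 0])  # 1 = high risk, 0 = low risk

# Standardize so coefficient magnitudes are comparable across features.
scaler = StandardScaler()
model = LogisticRegression().fit(scaler.fit_transform(X), y)

# In a linear model, the coefficients are the global explanation:
# the sign gives each feature's direction of influence, the magnitude
# its relative weight. A deep neural network offers no such readout.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")
```

Swap the logistic regression for a gradient-boosted ensemble or a deep network and accuracy may improve, but this simple readout disappears, which is precisely the trade-off that explainability research tries to soften.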

What would it take for a CSR to explain to customers why they’re paying a certain price for their car insurance? It comes back again to trust via transparency: not only trust that the systems the CSR interacts with are providing the right data, but trust in the data itself and, on top of all that, trust in the model. To get there, the CSR needs to understand not only what data goes into the models, but where that data comes from, what it means, and how it influences the results.
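Continuing the hypothetical model sketched above, a per-customer explanation falls out almost for free: each feature’s contribution to the score is simply its coefficient multiplied by the customer’s standardized value, which can then be rendered in the plain language a CSR actually needs. Again, this is an illustrative assumption rather than a description of any specific vendor’s tooling:

```python
# Continues the hypothetical sketch above, reusing feature_names, scaler,
# and model. For a linear model, a local (per-customer) explanation is
# just coefficient * standardized feature value.
customer = np.array([[22, 3, 1, 18000]])  # one hypothetical policyholder
z = scaler.transform(customer)[0]
contributions = model.coef_[0] * z

# Sort by absolute impact and translate into CSR-friendly language.
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda pair: -abs(pair[1])):
    direction = "raises" if c > 0 else "lowers"
    print(f"{name} {direction} this customer's risk score by {abs(c):.2f}")
```

An explanation at this level, surfaced inside the tools a CSR already uses, is what turns “the model said so” into a conversation a customer can actually follow.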

Clearly, wide-scale explainable AI requires a massive shift in organizations’ approach to data science and AI from the top down, but also from the bottom up. It’s about upskilling all employees so that they understand data and the systems it powers. It’s about setting up processes that allow white box systems to be democratized and used by all. It’s about investing in the right technologies and tools that both technologists and non-technical people can interact with.

It’s only out of this fundamental shift that companies will start creating products and systems that consumers can trust and will continue to use. That will require those in the C-suite to support and learn from those on the front lines and in the trenches when it comes to working with customers, data, processes, and the rest. And, of course, technology like AI platforms can fill in the gaps and encourage collaboration from all sides.

Perhaps more importantly, it’s also out of this shift that everyone will start to have a broader understanding of AI and the power it holds. If everyone in every job, no matter what their technical ability or background, interacts with AI systems and has a basic understanding of how they work, we’ll be better off than we are today. People will have the ability to work smarter on things that matter, not harder on repetitive processes, and that fulfills one of the greatest promises of AI.

Ultimately, organizational change will lead to a change in the wider public, giving people the ability to hold businesses accountable for the machine learning-powered systems they build. Democratization of data and AI isn’t just necessary in the workplace and to build the businesses of tomorrow, but also to make the AI-driven world one that we all want to live in.

Florian Douetteau is CEO of Dataiku.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.


Author: Florian Douetteau, Dataiku
Source: VentureBeat
