
Turtles all the way down: Why AI’s cult of objectivity is dangerous, and how we can be better


This article was contributed by Slater Victoroff, founder and CTO of Indico Data.

There is a belief, built out of science fiction and a healthy fear of math, that AI is some infallible judge of objective truth. We tell ourselves that AI algorithms divine truth from data, and that there is no truth higher than the righteous residual of a regression. For others, the picture is simple: logic is objective, math is logic, AI is math; thus AI is objective.

This is not a benign belief.

And, in fact, nothing could be further from the truth. More than anything, AI is a mirror: something built in the image of humans, built to mimic humans, and thus inheriting our flaws. AI models are computer programs written in data. They reflect all the ugliness in that human data, and, through the hundreds of random imperfections across the mirror’s surface, they add some hidden ugliness of their own.

Joy Buolamwini showed us that, despite the open admission of these challenges in academia, these technologies are being actively adopted and deployed under a fictitious notion of what today’s AI represents. People’s lives are already being upended, and it is important for us to recognize and adopt a more realistic view of this world-changing technology.

Where this belief in objectivity comes from, and why it propagates

Why do so many experts believe that AI is inherently objective?

There is a classic lie within the realm of AI: “there are two types of machine learning, supervised and unsupervised.” Supervised methods require humans to tell the machine what the “correct” answer is: whether a given tweet is positive or negative, say. Unsupervised methods don’t require this. One merely presents the unsupervised method with a large raft of tweets and sets it to work.
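To make the distinction concrete, here is a minimal, hypothetical sketch (assuming scikit-learn and a handful of made-up tweets; nothing here comes from a real system). The supervised model is handed human labels; the “unsupervised” one is handed only the tweets, yet a human still chose the features, the algorithm, and the number of clusters.

```python
# Minimal sketch of the supervised/unsupervised distinction, using scikit-learn.
# The tweets and labels below are toy placeholders, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

tweets = [
    "I love this phone",
    "Worst service I have ever had",
    "Absolutely fantastic experience",
    "This update broke everything",
]
labels = [1, 0, 1, 0]  # human-provided "correct" answers: 1 = positive, 0 = negative

X = TfidfVectorizer().fit_transform(tweets)

# Supervised: a human tells the machine which tweets are positive or negative.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))

# "Unsupervised": no labels are given, only a raft of tweets -- yet a human
# still chose the features, the algorithm, and the number of clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)
```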

Many novices believe that — because the human subjectivity of “correctness” has not corrupted the unsupervised model — it is a machine built of cold, objective logic. When this cold, objective logic doesn’t align with reality, it’s an afterthought. Always one more regularization step, one more momentum term, one more architecture tweak away. It’s merely a matter of finding the correct math, and human subjectivity will reduce to nothing, like some dimensionless constant.

Let me be clear: this is not just wrong, but dangerously wrong. Why, then, has this dangerous notion spread so broadly?

Researchers are, in their estimation, algorithm builders first and foremost. They are musicians plucking on the chorded equations of God. Meanwhile, problems of model bias and objectivity are data problems. No self-respecting researcher would ever muddy their hands by touching a disgusting database. That’s for the data people. They are building models, not for the real world, but for that messianic dataset that will someday arrive to save us all from bias.

It is eminently understandable. Just like everybody else involved in the development of AI, researchers wish to abdicate responsibility for the often horrific behavior of their creations. We see this in academic terms like “self-supervised” learning, which reinforce the notion that researchers play no part in these outcomes.

The AI taught itself this behavior. I swear! Pay no attention to the man behind the keyboard…

The objectivity myth is dangerous

“Unsupervised” learning, or “self-supervised” learning as described in the section above, does not exist in the way large swaths of the world understand it. In practice, a technique we call “unsupervised” may paradoxically involve several orders of magnitude more supervision than a traditional supervised method.

An “unsupervised” technique for Twitter sentiment analysis might, for instance, be trained on a billion tweets, ten thousand meticulously parsed sentences, half a dozen sentiment analysis datasets, and an exhaustive dictionary tagging a human-estimated sentiment for every word in the English language that took over a person-century of effort to build. A Twitter sentiment analysis dataset will still be needed for evaluation. Yet so long as the model is not specifically trained on a Twitter sentiment analysis dataset, it may still be considered “unsupervised,” and thus “objective.”
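To see how much human judgment hides inside such a pipeline, consider a toy lexicon-based scorer. This is a hypothetical sketch, not any vendor’s actual system; the tiny dictionary stands in for the person-century of hand-tagged word sentiments described above.

```python
# Illustrative sketch of a lexicon-based "unsupervised" sentiment scorer.
# Every score in this tiny dictionary is a human judgment, not objective truth.
SENTIMENT_LEXICON = {
    "love": 1.0, "fantastic": 0.9, "great": 0.8,
    "broke": -0.7, "worst": -1.0, "awful": -0.9,
}

def score_tweet(tweet: str) -> float:
    """Average the human-assigned sentiment of every known word in the tweet."""
    words = tweet.lower().split()
    scores = [SENTIMENT_LEXICON[w] for w in words if w in SENTIMENT_LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(score_tweet("I love this phone"))              # positive, per the lexicon's authors
print(score_tweet("Worst service I have ever had"))  # negative, per the lexicon's authors
```

No labeled Twitter dataset ever touches the training step, so the pipeline gets to call itself “unsupervised,” even though every output is downstream of thousands of human annotation decisions.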

In practice, it might be more accurate to call self-supervision “opaque supervision.” The goal is, in effect, to add enough layers of indirection that the instructions provided to the machine are no longer transparent. When bad behavior is learned from bad data, the data can be corrected. When the bad behavior comes from, say, one researcher deciding that three is a better value for k than four, nobody will ever know, and no corrective action will be taken.
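The k example is worth making concrete. In the hypothetical sketch below (scikit-learn’s KMeans on made-up data), switching k from three to four quietly changes what the model learns, and nothing downstream records that a person made that call.

```python
# Sketch of how an opaque human choice -- k=3 versus k=4 -- silently changes
# what an "unsupervised" model learns. The data here is random and illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 2))  # stand-in for embedded tweets, faces, etc.

for k in (3, 4):
    cluster_ids = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
    print(f"k={k}: cluster sizes = {np.bincount(cluster_ids)}")

# Downstream users see only the final clusters; nothing records why k was 3 and not 4.
```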

The problem is that, when researchers abdicate responsibility, nobody is there to pick it up.

In most of these cases, we simply don’t have the data needed to even appropriately evaluate the bias of our models. One reason I believe Joy Buolamwini has focused on facial recognition to date is that it lends itself more cleanly to notions of equity that would be difficult to establish for other tasks. We can vary the skin tone of a face and say that facial recognition ought to perform the same across those skin tones. For something like a modern question-answering model, it is much harder to agree on what an appropriate answer to a controversial question might be.
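For illustration, the kind of subgroup check that facial recognition invites might look like the sketch below. Every number in it is invented purely to show the shape of the test, not to report real results.

```python
# Hypothetical subgroup evaluation: compare accuracy across skin-tone groups.
from collections import defaultdict

# (group, prediction_correct) pairs from some imagined face recognizer
results = [
    ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, ok in results:
    total[group] += 1
    correct[group] += int(ok)

for group in total:
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")

# A large gap between groups is a measurable, testable failure of equity --
# something far harder to define for an open-ended question-answering model.
```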

There is no replacement for supervision. There is no path where humans are not forced to make decisions about what is correct and what is incorrect. Any belief that rigorous testing and problem definition can be avoided is dangerous. These approaches don’t avoid or mitigate bias. They’re no more objective than the Redditors they emulate. They simply allow us to push that bias into subtle, poorly understood crevices of the system.

How should we look at AI and model bias?

AI is technology. Just like computers and steel and steam engines, it can be a tool of empowerment, or it can bind us in digital shackles. Modern AI can mimic human language, vision, and cognition to an unprecedented degree. In doing so, it presents a unique ability to understand our own foibles. We can take our bias and boil it down to bits and bytes. We’re able to give names and numbers to billions of human experiences.

This generation of AI has repeatedly, and embarrassingly, highlighted our fallibility. We are now presented with two options: we can measure and test and push and fight until we get better. Or we can immortalize our ignorance and bias in model weights, hiding under a false cloak of objectivity.

When I started Indico Data with Diana and Madison, we placed transparency and responsibility at the core of our corporate values. We also push our customers to do the same: to have those difficult conversations and to define a consistent truth in the world that they can be proud of. From there, the key to eliminating bias is in the testing. Test your outcomes for flaws in objectivity before production, then keep testing after deployment so failures don’t surface only once the model is already live.
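As one hypothetical illustration of what “test before production, then keep testing” can mean in code, a parity check like the sketch below could run in a CI pipeline before deployment and again against labeled samples of live traffic. The 0.05 threshold and the record format are assumptions for the example, not a standard.

```python
# Hedged sketch of a pre-deployment (and ongoing) bias check.
def accuracy_by_group(records):
    """records: iterable of (group, prediction_correct) pairs."""
    totals, hits = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(ok)
    return {g: hits[g] / totals[g] for g in totals}

def assert_parity(records, max_gap=0.05):
    """Fail loudly if accuracy differs across groups by more than max_gap."""
    accs = accuracy_by_group(records)
    gap = max(accs.values()) - min(accs.values())
    assert gap <= max_gap, f"accuracy gap {gap:.2f} exceeds allowed {max_gap}"

# Run against a held-out audit set before deployment, then periodically
# against freshly labeled samples of production traffic.
```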

The path forward

It is important to note that obscurity is not a replacement for responsibility. Hiding human biases behind model biases does not eliminate them, nor does it magically make those biases objective.

AI researchers have made astonishing progress. Problems considered unsolvable just a few years ago have transformed into “Hello World” tutorials.

Today’s AI is an incredible, unprecedented mimic of human behavior. The question now is whether humans can set an example worth following.

Can you?

Slater Victoroff is founder and CTO of Indico Data.




