AI & Robotics News

When AI flags the ruler, not the tumor — and other arguments for abolishing the black box (VB Live)

AI helps health care experts do their jobs efficiently and effectively, but it needs to be used responsibly, ethically, and equitably. In this VB Live event, get an in-depth perspective on the strengths and limitations of data, AI methodology and more.

Hear more from Brian Christian during our VB Live event on March 31.

Register here for free.


One of the big issues in AI generally, and one that is particularly acute in health care settings, is transparency. AI models such as deep neural networks have a reputation for being black boxes. That's particularly concerning in a medical setting, where caregivers and patients alike need to understand why recommendations are being made.

“That’s both because it’s integral to trust in the doctor-patient relationship and because it serves as a sanity check, to make sure these models are, in fact, learning the things they’re supposed to be learning and functioning the way we would expect,” says Brian Christian, author of The Alignment Problem, Algorithms to Live By, and The Most Human Human.

He points to the example of a neural network that famously reached a level of accuracy comparable to human dermatologists at diagnosing malignant skin lesions. However, a closer examination of the model using saliency methods revealed that the single most influential thing it was looking for in a picture of someone’s skin was the presence of a ruler. Because medical images of cancerous lesions typically include a ruler for scale, the model learned to treat the presence of a ruler as a marker of malignancy, because that’s much easier than telling the difference between different kinds of lesions.

“It’s precisely this kind of thing which explains remarkable accuracy in a test setting, but is completely useless in the real world, because patients don’t come with rulers helpfully pre-attached when [a tumor] is malignant,” Christian says. “That’s a perfect example, and it’s one of many for why transparency is essential in this setting in particular.”
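For readers curious what that kind of check looks like in practice, here is a minimal sketch of gradient-based saliency in PyTorch. The model and image are stand-ins, not the dermatology study's actual pipeline; the idea is simply to ask which pixels most influence the "malignant" score.

```python
# Minimal sketch of gradient-based saliency (hypothetical model and image;
# not the dermatology study's actual pipeline).
import torch
import torch.nn as nn

# Stand-in classifier: any trained image model would go here.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # two classes: benign vs. malignant
)
model.eval()

# Placeholder "skin image"; requires_grad lets us ask which pixels matter.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Gradient of the "malignant" score with respect to the input pixels.
score = model(image)[0, 1]
score.backward()
saliency = image.grad.abs().max(dim=1)[0]  # per-pixel importance map

# If the brightest region of `saliency` sits over a ruler rather than the
# lesion itself, the model is keying on the wrong feature.
print(saliency.shape)  # torch.Size([1, 224, 224])
```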

At a conceptual level, one of the biggest issues in all machine learning is that there’s almost always a gap between the thing that you can readily measure and the thing you actually care about.

He points to a model developed in the 1990s by a group of researchers in Pittsburgh to estimate how severe a pneumonia patient’s case was, in order to triage patients to inpatient or outpatient treatment. One thing this model learned was that, on average, people with asthma who come in with pneumonia have better health outcomes as a group than non-asthmatics. This wasn’t because asthma is the great health bonus the model flagged it as, but because asthma patients get higher-priority care, and because they are on high alert to see their doctor as soon as pulmonary symptoms start.

“If all you measure is patient mortality, the asthmatics look like they come out ahead,” he says. “But if you measure things like cost, or days in hospital, or comorbidities, you would notice that maybe they have better mortality, but there’s a lot more going on. They’re survivors, but they’re high-risk survivors, and that becomes clear when you start expanding the scope of what your model is predicting.”

The Pittsburgh team was using a rule-based model, which enabled them to see this asthma connection and flag it immediately. They were able to share with the doctors participating in the project that the model had learned a possibly bogus correlation. Had it simply been a giant neural network, they might never have known that this problematic association had been learned.
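To illustrate how a transparent model can surface that kind of bogus correlation, here is a minimal sketch on synthetic data (the feature names and numbers are hypothetical, not the Pittsburgh study's). A simple logistic regression exposes its coefficients directly, so a clinician can see, and veto, a learned rule like "asthma lowers risk."

```python
# Minimal sketch (synthetic data, hypothetical features) of how a transparent
# model surfaces a suspicious association such as "asthma is protective".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(65, 10, n)
asthma = rng.binomial(1, 0.15, n)

# Simulated outcome: older patients are at higher risk, but asthma patients
# receive higher-priority care, so their observed mortality is *lower*.
risk = 0.04 * (age - 65) - 1.0 * asthma
mortality = rng.binomial(1, 1 / (1 + np.exp(-risk)))

X = np.column_stack([age, asthma])
clf = LogisticRegression(max_iter=1000).fit(X, mortality)

# A negative asthma coefficient reads as "asthma lowers mortality risk" --
# exactly the kind of learned rule clinicians can spot and question.
print(dict(zip(["age", "asthma"], clf.coef_[0].round(3))))
```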

One of the researchers on that project in the 1990s, Rich Caruana of Microsoft, went back 20 years later with a modern set of tools and examined the neural network he had helped develop. He found a number of equally terrifying associations, such as the model concluding that being over 100 years old was good for you, or that having high blood pressure was a benefit. All for the same reason: those patients were given higher-priority care.

“Looking back, Caruana says thank God we didn’t use this neural net on patients,” Christian says. “That was the fear he had at the time, and it turns out, 20 years later, to have been fully justified. That all speaks to the importance of having transparent models.”

Algorithms that aren’t transparent, or that are biased, have produced a variety of horror stories, which have led some to say these systems have no place in health care. That’s a bridge too far, Christian says. There’s a large body of evidence showing that, when done properly, these models are an enormous asset, often outperforming individual expert judgment while providing a host of other advantages.

“On the other hand,” explains Christian, “some who are overly enthusiastic about embracing technology say, let’s take our hands off the wheel, let the algorithms do it, let our computer overlords tell us what to do, and let the system run on autopilot. I think that is also going too far, because of the many examples we’ve discussed. As I say, we want to thread that needle.”

In other words, AI can’t be used blindly. It requires building provably optimal, transparent models from data in an iterative, inclusive process, one that pulls together an interdisciplinary team of computer scientists, clinicians, patient advocates, and social scientists.

That also includes audits once these systems go into production, since certain correlations may break over time, certain assumptions may no longer hold, and we may learn more — the last thing you want to do is just flip the switch and come back 10 years later.
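One simple form such an audit can take is a drift check on the model's inputs. The sketch below uses hypothetical data and a two-sample Kolmogorov-Smirnov test to flag when a feature's distribution in live traffic has shifted away from what the model was trained on, a sign that a learned correlation may no longer hold.

```python
# Minimal sketch of one kind of production audit (hypothetical data):
# compare a model input's distribution at training time vs. today.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_ages = rng.normal(65, 10, 10_000)  # feature as seen at training time
current_ages = rng.normal(58, 12, 10_000)   # same feature in live traffic

stat, p_value = ks_2samp(training_ages, current_ages)
if p_value < 0.01:
    print(f"Age distribution has shifted (KS={stat:.3f}); review the model.")
```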

“For me, a diverse group of stakeholders with different expertise, representing different interests, coming together at the table to do this in a thoughtful, careful way, is the way forward,” he says. “That’s what I feel the most optimistic about in health care.”


Hear more from Brian Christian during our VB Live event, “In Pursuit of Parity: A guide to the responsible use of AI in health care” on March 31.

Register here for free.

Presented by Optum


You’ll learn:

  • What it means to use advanced analytics “responsibly”
  • Why responsible use is so important in health care as compared to other fields
  • The steps that researchers and organizations are taking today to ensure AI is used responsibly
  • What the AI-enabled health system of the future looks like and its advantages for consumers, organizations, and clinicians

Speakers:

  • Brian Christian, Author, The Alignment Problem, Algorithms to Live By and The Most Human Human
  • Sanji Fernando, SVP of Artificial Intelligence & Analytics Platforms, Optum
  • Kyle Wiggers, AI Staff Writer, VentureBeat (moderator)


Author: VB Staff
Source: VentureBeat

