
Responsible AI in healthcare: Addressing biases and equitable outcomes



With the rapid growth of healthcare AI, algorithms are often overlooked when it comes to addressing fair and equitable patient care. I recently attended the Conference on Applied AI (CAAI): Responsible AI in Healthcare, hosted by the University of Chicago Booth School of Business. The conference brought together healthcare leaders across many facets of business with a goal of discussing and finding effective ways to mitigate algorithmic bias in healthcare. It takes a diverse group of stakeholders to recognize AI bias and make an impact on ensuring equitable outcomes.

If you’re reading this, it’s likely you’re already familiar with AI bias, which is a positive step forward. If you’ve seen movies like The Social Dilemma or Coded Bias, then you’re off to a good start. If you’ve read articles and papers like Dr. Ziad Obermeyer’s Racial Bias in Healthcare Algorithms, even better. What these resources explain is that algorithms play a major role in recommending which movies we watch, which social posts we see and which healthcare services are recommended, among other everyday digital interactions. These algorithms often carry biases related to race, gender, socioeconomic status, sexual orientation, demographics and more. There has been a significant uptick in interest in AI bias: for example, the number of data science papers on arXiv mentioning racial bias doubled between 2019 and 2021.

We’ve seen interest from researchers and media, but what can we actually do about it in healthcare? How do we put these principles into action?

Before we get into putting these principles into action, let’s address what happens if we don’t.

The impact of bias in healthcare

Let’s take, for example, a patient that has been dealing with various health issues for quite some time. Their healthcare system has a special program designed to intervene early for people who have high risk for cardiovascular needs. The program has shown great results for the people enrolled. However, the patient hasn’t heard about this. Somehow they weren’t included in the list for outreach, even though other sick patients were notified and enrolled. Eventually, they visit the emergency room, and their heart condition has progressed much further than it otherwise would have.

That’s the experience of being an underserved minority, invisible to whatever approach a health system is using. It doesn’t even have to be AI. One common approach to cardiovascular outreach is to include only men aged 45 and older and women aged 55 and older. If you were excluded because you’re a woman who didn’t make the age cutoff, the result is just the same.
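
To make that exclusion concrete, here is a minimal sketch of that kind of rule-based outreach list. The function name, patient records and risk scores are hypothetical, for illustration only; the point is that a simple eligibility rule, not just an AI model, can leave the sickest patients invisible.

```python
# Minimal sketch of the age/sex outreach rule described above.
# All field names and values are hypothetical.

def eligible_for_outreach(sex: str, age: int) -> bool:
    """Rule-of-thumb inclusion criteria: men 45+, women 55+."""
    return (sex == "M" and age >= 45) or (sex == "F" and age >= 55)

patients = [
    {"id": 1, "sex": "M", "age": 47, "cardiac_risk": 0.35},
    {"id": 2, "sex": "F", "age": 52, "cardiac_risk": 0.81},  # highest risk, yet excluded
]

for p in patients:
    print(p["id"], eligible_for_outreach(p["sex"], p["age"]))
# Patient 1 -> True, patient 2 -> False: the rule keeps the sicker patient
# off the outreach list entirely.
```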

How are we addressing it?

Chris Bevolo’s Joe Public 2030 is a 10-year look into healthcare’s future, informed by leaders at Mayo Clinic, Geisinger, Johns Hopkins Medicine and many more. It doesn’t look promising for addressing healthcare disparities. For about 40% of quality measures, Black and Native people received worse care than white people. Uninsured people had worse care for 62% of quality measures, and access to insurance was much lower among Hispanic and Black people.

“We’re still dealing with some of the same issues we’ve dealt with since the 80s, and we can’t figure them out,” stated Adam Brase, executive director of strategic intelligence at Mayo Clinic. “In the last 10 years, these have only grown as issues, which is increasingly worrisome.”

Why data hasn’t solved the problem of bias in AI

No progress since the 80s? But things have changed so much since then. We’re collecting huge amounts of data. And we know that data never lies, right? No, not quite true. Let’s remember that data isn’t just something on a spreadsheet. It’s a list of examples of how people tried to address their pain or better their care.

As we tangle and torture the spreadsheets, the data does what we ask it to. The problem is what we’re asking the data to do. We may ask the data to help drive volume, grow services or minimize costs. However, unless we’re explicitly asking it to address disparities in care, it’s not going to do that.

Attending the conference changed how I look at bias in AI. Here’s how.

It’s not enough to address bias in algorithms and AI. For us to address healthcare disparities, we have to commit at the very top. The conference brought together technologists, strategists, legal experts and others, because this isn’t only about technology. This is a call to fight bias in healthcare, and to lean heavily on algorithms to help. So what does that look like?

A call to fight bias with the help of algorithms

Let’s start by talking about when AI fails and when AI succeeds at organizations overall. MIT and Boston Consulting Group surveyed 2,500 executives who’d worked with AI projects. Overall, 70% of these executives said that their projects had failed. What was the biggest difference between the 70% that failed and the 30% that succeeded?

It’s whether the AI project was supporting an organizational goal. To help clarify that further, here are some project ideas and whether they pass or fail.

  • Purchase the most powerful natural language processing solution.

Fail. Natural language processing can be extremely powerful, but this goal lacks context on how it will help the business.

  • Grow our primary care volume by intelligently allocating at-risk patients.

Pass. There’s a goal which requires technology, but that goal is tied to an overall business objective.

We understand the importance of defining a project’s business objectives, but what were both of these goals missing? Neither makes any mention of addressing bias, disparity and social inequity. As healthcare leaders, our overall goals are where we need to start.

Remember that successful projects start with organizational goals, and seek AI solutions to help support them. This gives you a place to start as a healthcare leader. The KPIs you’re defining for your departments could very well include specific goals around increasing access for the underserved. “Grow Volume by x%,” for example, could very well include, “Increase volume from underrepresented minority groups by y%.”

How do you arrive at good metrics to target? It starts with asking the tough questions about your patient population. What’s the breakdown by race and gender versus your surrounding communities? This is a great way to put a number and a size to the healthcare gap that needs to be addressed.
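
As a rough illustration, here is a minimal sketch in Python of that kind of breakdown, comparing a patient panel against the surrounding community. The column names and census shares are hypothetical; real work would pull from your own records and local census data.

```python
# Minimal sketch, with hypothetical column names and census shares, of comparing
# a patient panel's race/ethnicity mix against the surrounding community.
import pandas as pd

patients = pd.DataFrame(
    {"race_ethnicity": ["White", "Black", "Hispanic", "White", "Asian", "White"]}
)
community = pd.Series(  # hypothetical census shares for the service area
    {"White": 0.55, "Black": 0.20, "Hispanic": 0.18, "Asian": 0.07}
)

panel_share = patients["race_ethnicity"].value_counts(normalize=True)
gap = community - panel_share.reindex(community.index).fillna(0)

# Positive values flag groups underrepresented in the panel relative to the
# community, a starting point for a KPI such as "increase volume from group X by y%."
print(gap.sort_values(ascending=False))
```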

This top-down focus should drive actions such as holding vendors and algorithmic experts accountable for helping with these targets. What we need to further address here, though, is who all of this is for. The patients, your community, your consumers are the ones who stand to lose the most.

Innovating at the speed of trust

At the conference, Barack Obama’s former chief technology officer, Aneesh Chopra, addressed this directly: “Innovation can happen only at the speed of trust.” That’s a big statement. Most of us in healthcare are already asking for race and ethnicity information. Many of us are now asking for sexual orientation and gender identity information.

Without these data points, addressing bias is extremely difficult. Unfortunately, many people in underserved groups don’t trust healthcare enough to provide that information. I’ll be honest: for most of my life, that included me. I had no idea why I was being asked that information, what would be done with it, or even if it might be used to discriminate against me. So I declined to answer. I wasn’t alone in this. Look at how many people have identified their race and ethnicity to a hospital: commonly, one in four don’t.

I spoke with behavioral scientist Becca Nissan from ideas42, and it turns out there’s not much scientific literature on how to address this. So, this is my personal plea: partner with your patients. If you’ve experienced prejudice, it’s hard to see any upside in providing the very details that have been used to discriminate against you.

A partnership is a relationship built on trust. This entails a few steps:

  • Be worth partnering with. There must be a genuine commitment to fight bias and personalize healthcare, or asking for data is useless.
  • Tell us what you’ll do. Consumers are tired of the gotchas and spam resulting from sharing their data. Level with them. Be transparent about how you use data. If it’s to personalize the experience or better address healthcare concerns, own that. We’re tired of being surprised by algorithms.
  • Follow through. Trust isn’t really earned until the follow-through happens. Don’t let us down.

Conclusion

If you’re building, launching, or using responsible AI, it’s important to be around others who are doing the same. Here are a few best practices for projects or campaigns that have a human impact:

  • Have a diverse team. Groups that lack diversity tend not to ask whether a model is biased.
  • Collect the right data. Without known values for race and ethnicity, gender, income, sex, sexual orientation and other social determinants of health, there is no way to test and control for fairness.
  • Consider how certain metrics may carry hidden bias. Using healthcare spending as a proxy for health need, as in the 2019 study mentioned above, demonstrates how problematic a metric can be for certain populations.
  • Measure the target variable’s potential to introduce bias. With any metric, label or variable, checking its impact and distribution across race, gender, sex and other factors is key.
  • Ensure the methods in use aren’t creating bias for other populations. Teams should design fairness metrics that apply across all groups and test against them continuously (a minimal sketch of these checks follows this list).
  • Set benchmarks and track progress. After the model has been launched and is in use, continually monitor for changes.
  • Leadership support. You need your leadership to buy in; it can’t just be one person or team.
  • “Responsible AI” isn’t the end. It’s not just about making algorithms fair; it should be part of a broader organizational commitment to fight bias overall.
  • Partner with patients. We should go deeper on how we partner with and involve patients in the process. What can they tell us about how they’d like their data to be used?
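
Here is a minimal sketch, on made-up data, of the target-variable and selection-rate checks called out above. Column names and groups are hypothetical; in practice you would run this against your own labels and the demographic fields discussed earlier.

```python
# Minimal sketch, on made-up data, of two checks from the list above:
# (1) how the target variable (label) distributes across groups, and
# (2) whether the model's selection rate (who gets outreach) differs by group.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":    [1,   0,   1,   0,   1,   0,   0,   1],   # e.g., observed need for care
    "selected": [1,   0,   1,   0,   0,   0,   0,   1],   # e.g., model flags for outreach
})

# Target-variable check: does the label itself skew by group?
print(df.groupby("group")["label"].mean())

# Selection-rate check: a large gap between groups is a red flag worth investigating.
rates = df.groupby("group")["selected"].mean()
print(rates)
print("selection-rate ratio (min/max):", rates.min() / rates.max())
```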

As someone who loves the field of data science, I am incredibly optimistic about the future, and the opportunity to drive real impact for healthcare consumers. We have a lot of work ahead of us to ensure that impact is unbiased and available to everyone, but I believe just by having these conversations, we’re on the right path.

Chris Hemphill is VP of applied AI and growth at Actium Health.



