AI is transforming medicine: Here’s how we make sure it works for everyone

What if your doctor could instantly test dozens of different treatments to discover the perfect one for your body, your health and your values? In my lab at Stanford University School of Medicine, we are working on artificial intelligence (AI) technology to create a “digital twin”: a virtual representation of you based on your medical history, genetic profile, age, ethnicity, and a host of other factors like whether you smoke and how much you exercise.

If you’re sick, the AI can test out treatment options on this computerized twin, running through countless different scenarios to predict which interventions will be most effective. Instead of choosing a treatment regimen based on what works for the average person, your doctor can develop a plan based on what works for you. And the digital twin continuously learns from your experiences, always incorporating the most up-to-date information on your health.
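
The core loop behind that idea is easy to sketch in code. Below is a minimal, hypothetical illustration (not our lab's actual system): a stand-in outcome model scores candidate treatments against one patient's profile, and the options are ranked by predicted benefit. Every name and number here is invented.

```python
# Minimal, hypothetical sketch of the "digital twin" idea: a stand-in
# outcome model scores each candidate treatment for one patient's
# profile, and the treatments are ranked by predicted benefit.
# All names, features, and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class PatientTwin:
    age: int
    smoker: bool
    exercise_hours_per_week: float

def predict_outcome(twin: PatientTwin, treatment: str) -> float:
    """Toy stand-in for a trained outcome model (higher is better)."""
    base = 0.5 - 0.002 * (twin.age - 50) - (0.1 if twin.smoker else 0.0)
    effect = {"drug_a": 0.05, "drug_b": 0.12, "lifestyle_only": 0.02}[treatment]
    return base + effect + 0.01 * twin.exercise_hours_per_week

twin = PatientTwin(age=62, smoker=False, exercise_hours_per_week=3.0)
candidates = ["drug_a", "drug_b", "lifestyle_only"]

# Simulate each option on the twin and rank by predicted outcome.
ranked = sorted(candidates, key=lambda t: predict_outcome(twin, t), reverse=True)
print(ranked)  # best-predicted treatment first
```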

AI is personalizing medicine, but for which people?

While this futuristic idea may sound impossible, artificial intelligence could make personalized medicine a reality sooner than we think. The potential impact on our health is enormous, but so far, the results have been more promising for some patients than others. Because AI is built by humans using data generated by humans, it is prone to reproducing the same biases and inequalities that already exist in our healthcare system.

In 2019, researchers analyzed an algorithm used by hospitals to determine which patients should be referred to special care programs for people with complex medical needs. In theory, this is exactly the type of AI that can help patients get more targeted care. However, the researchers discovered that in practice, the model was significantly less likely to assign Black patients to these programs than white patients with similar health profiles. This biased algorithm not only affected the healthcare received by millions of Americans, but also their trust in the system.
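
One way such a disparity can be surfaced is a simple audit: within each band of comparable medical need, compare how often the algorithm refers each group. The sketch below is hypothetical and uses toy data; the column names and values are not from the 2019 study.

```python
# Hypothetical bias audit for a referral algorithm: within each band
# of comparable medical need, compare referral rates across groups.
# The data below are toy values for illustration only.
import pandas as pd

df = pd.DataFrame({
    "race":      ["Black", "White"] * 4,
    "risk_band": [1, 1, 2, 2, 3, 3, 4, 4],  # patients with similar health profiles
    "referred":  [0, 1, 0, 1, 1, 1, 0, 1],  # algorithm's referral decisions
})

# Referral rate per group within each band of similar medical need.
# Large gaps at equal need are a red flag for bias.
rates = df.groupby(["risk_band", "race"])["referred"].mean().unstack()
print(rates)
```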


Getting data, the building block of AI, right

Such a scenario is all too common for underrepresented minorities. The issue isn’t the technology itself. The problem starts much earlier, with the questions we ask and the data we use to train the AI. If we want AI to improve healthcare for everyone, we need to get those things right before we ever start building our models.

First up is the data, which are often skewed toward patients who use the healthcare system the most: white, educated, wealthy, cisgender U.S. citizens. These groups have better access to medical care, so they are overrepresented in health datasets and clinical research trials.

To see the impact this skewed data has, look at skin cancer. AI-driven apps could save lives by analyzing pictures of people’s moles and alerting them to anything they should have checked out by a dermatologist. But these apps are trained on existing catalogs of skin cancer lesions dominated by images from fair-skinned patients, so they don’t work as well for patients with darker skin. The predominance of fair-skinned patients in dermatology has simply been transferred over to the digital realm.
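
A first diagnostic is simply to measure the training set's composition. The sketch below counts images by Fitzpatrick skin type in a hypothetical dermatology dataset; the labels and counts are illustrative only.

```python
# Hypothetical composition check for a dermatology training set:
# count images by Fitzpatrick skin type (I = lightest, VI = darkest).
# The labels below are toy values for illustration only.
from collections import Counter

skin_types = ["I", "II", "II", "III", "I", "II", "V", "II", "III", "I"]
counts = Counter(skin_types)
total = len(skin_types)

for fitzpatrick in ["I", "II", "III", "IV", "V", "VI"]:
    share = counts.get(fitzpatrick, 0) / total
    print(f"Type {fitzpatrick}: {share:.0%} of training images")
# Types IV-VI at or near 0% predict poor performance on darker skin.
```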

My colleagues and I ran into a similar problem when developing an AI model to predict whether cancer patients undergoing chemotherapy will end up visiting the emergency room. Doctors could use this tool to identify at-risk patients and give them targeted treatment and resources to prevent hospitalization, thereby improving health outcomes and reducing costs. While our AI’s predictions were promisingly accurate, the results were not as reliable for Black patients. Because the patients represented in the data we fed into our model did not include enough Black people, the model could not accurately learn the patterns that matter for this population.
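
Gaps like this only show up if you look for them: reporting a single overall accuracy number hides subgroup differences. A minimal sketch, assuming a binary ER-visit label and model risk scores (the data below are random placeholders), is to compute a metric such as AUC separately for each group.

```python
# Hypothetical per-subgroup evaluation: compute AUC separately for
# each patient group instead of one overall number, so reliability
# gaps become visible. Labels and scores are random placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
groups = np.array(["Black"] * 100 + ["White"] * 100)
y_true = rng.integers(0, 2, size=200)   # placeholder ER-visit labels
y_score = rng.random(200)               # placeholder model risk scores

for group in np.unique(groups):
    mask = groups == group
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"{group}: AUC = {auc:.2f} (n = {mask.sum()})")
# A markedly lower AUC for one group points to underrepresentation
# of that population in the training data.
```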

Adding diversity to training models and data teams

It’s clear that we need to train AI systems with more robust data that represent a wider range of patients. We also need to ask the right questions of the data and think carefully about how we frame the problems we are trying to solve. At a panel I moderated at the Women in Data Science (WiDS) annual conference in March, Dr. Jinoos Yazdany of Zuckerberg San Francisco General Hospital gave an example of why framing matters: Without proper context, an AI could come to illogical conclusions like inferring that a visit from the hospital chaplain contributed to a patient’s death (when really, it was the other way around — the chaplain came because the patient was dying).

To understand complex healthcare problems and make sure we are asking the right questions, we need interdisciplinary teams that combine data scientists with medical experts, as well as ethicists and social scientists. During the WiDS panel, my Stanford colleague, Dr. Sylvia Plevritis, explained why her lab is half cancer researchers and half data scientists. “At the end of the day,” she said, “you want to answer a biomedical question or you want to solve a biomedical problem.” We need multiple forms of expertise working together to build powerful tools that can identify skin cancer or predict whether a patient will end up in the hospital.

We also need diversity on research teams and in healthcare leadership to see problems from different angles and bring innovative solutions to the table. Say we are building an AI model to predict which patients are most likely to skip appointments. The working mothers on the team might flip the question on its head and instead ask what factors are most likely to prevent people from making their appointment, like scheduling a session in the middle of after-school pickup time.

Healthcare practitioners are needed in AI development

The last piece of the puzzle is how AI systems are put into practice. Healthcare leaders must be critical consumers of these flashy new technologies and ask how AI will work for all the patients in their care. AI tools need to fit into existing workflows so providers will actually use them (and continue adding data to the models to make them more accurate). Involving healthcare practitioners and patients in the development of AI tools produces end products that are far more likely to be adopted and to genuinely improve care and patient outcomes.

Making AI-driven tools work for everyone shouldn’t just be a priority for marginalized groups. Bad data and inaccurate models hurt all of us. During our WiDS panel, Dr. Yazdany discussed an AI program she developed to predict outcomes for patients with rheumatoid arthritis. The model was originally created using data from a more affluent research and teaching hospital. When they added in data from a local hospital that serves a more diverse patient population, it not only improved the AI’s predictions for marginalized patients — it also made the results more accurate for everyone, including patients at the original hospital.

AI will revolutionize medicine by predicting health problems before they happen and identifying the best treatments customized for our individual needs. It’s essential we put the right foundations in place now to make sure AI-driven healthcare works for everyone.

Dr. Tina Hernandez Boussard is an Associate Professor at Stanford University who works in biomedical informatics and the use of AI technology in healthcare. Many of the perspectives in this article came from her panel at this year’s Women in Data Science (WiDS) annual conference.



Author: Tina Hernandez Boussard
Source: VentureBeat
