
Concerns linger over AI in health care

Of all the fields AI is expected to permeate over the coming decade, perhaps none is more consequential than health care. From early diagnostics to robot-assisted surgery, AI is expected to enhance our health in a wide variety of ways.

But it also has the potential to do great harm. The human body is a swirling mass of biological, chemical, and even electrical processes, with structures and physiologies so diverse that no two are exactly alike. More than any other industry, health care should tread carefully when it comes to implementing AI, making certain that negative outcomes be kept to an absolute minimum.

The upside

There is, of course, a lot to look forward to when it comes to allowing bots to participate in our health decisions. For one thing, according to Drexel University’s College of Computing and Informatics, AI can provide practitioners with real-time information and analytics on medical issues, as well as streamline many of the time-consuming tasks that impede the health care process, such as insurance verification and medication history. And it should be able to accomplish this while reducing the resource overhead that currently clogs the health care process and drives up costs.

At the moment, says Fingent team leader Vinod Saratchandran, AI is having the greatest impact on two key medical functions: diagnosis and clinical decision-making. Its chief benefit is the ability to minimize intra- and inter-observer variability, providing greater accuracy and speed. A simple chest X-ray, for example, can be open to wide interpretation by the human eye, but AI can pinpoint minute details that confirm one diagnosis over another, or detect anomalies that might otherwise go unnoticed. In the future, we can expect AI to make significant contributions in drug discovery, pandemic tracking and prevention, and even direct primary care.
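
To make the variability point concrete, here is a minimal sketch of how agreement between two human readers of the same images might be quantified. The readings are invented for illustration, and Cohen’s kappa (via scikit-learn) is simply one common way to measure inter-observer agreement, not a method attributed to any of the tools discussed here.

```python
# Minimal sketch: quantifying inter-observer variability with Cohen's kappa.
# The readings below are hypothetical; cohen_kappa_score is one common way
# to measure how well two annotators agree beyond chance.
from sklearn.metrics import cohen_kappa_score

# Two radiologists label the same 10 chest X-rays (1 = anomaly, 0 = normal).
reader_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
reader_b = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]

kappa = cohen_kappa_score(reader_a, reader_b)
print(f"Inter-observer agreement (Cohen's kappa): {kappa:.2f}")
```

A kappa near 1.0 would mean the two readers almost always agree; lower values indicate the kind of variability that AI-assisted reads aim to reduce.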

The downside is that even AI is not perfect, so mistakes, sometimes tragic ones, are bound to happen. Not only is AI limited by the data it can access when plotting a course of action, it is also susceptible to security breaches and has been shown to exhibit the same social biases that humans possess.

Hidden processes

Another potential problem is that the decision-making processes employed by most AI algorithms are opaque at best. The World Health Organization recently cited this lack of transparency as one of the key weaknesses in AI, listing the potential for flawed medical decisions as a top concern. Most AI software is developed by commercial entities with a vested interest in keeping their code secret, says Jason H. Moore, director of the Institute of Biomedical Informatics at the University of Pennsylvania. But that secrecy can erode the trust patients need when it comes to their health care, denting their willingness to choose AI over a human practitioner who can clearly explain what they want to do and why.
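
To show what transparency can look like in practice, here is a hedged sketch of a model whose reasoning is open to inspection. The features, data, and weights are entirely invented for illustration and do not depict any vendor’s actual system.

```python
# Illustrative sketch only: a model whose learned weights can be inspected,
# in contrast to a proprietary black box. All features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["heart_rate", "temperature", "white_cell_count"]  # hypothetical inputs
X = rng.normal(size=(200, 3))
# Synthetic outcome loosely driven by the first two features.
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
# A clinician (or patient) can be shown exactly how much each input matters.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")
```

When the weights stay behind a commercial wall, that kind of explanation is exactly what patients and clinicians lose.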

Already, this lack of transparency is causing some embarrassing failures in AI-driven health solutions. In one case, recently highlighted by Mind Matters, Epic Systems, the largest health records company in the U.S., claimed its own tests showed its proprietary algorithms detected sepsis in hospital patients with up to 83% accuracy. But when the Journal of the American Medical Association analyzed the results at just one hospital, the University of Michigan, the system failed to detect 67% of sepsis cases. And of the cases the system did flag, 88% turned out to be false positives, producing “alert fatigue” among the medical staff.
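
To put those figures in perspective, here is a short worked sketch translating the reported percentages into the standard screening metrics of sensitivity and positive predictive value. Only the percentages quoted above are used; the arithmetic is illustrative, not a re-analysis of the study.

```python
# Worked sketch: what the reported sepsis figures imply in standard terms.
# Only the percentages from the article are used (67% of cases missed,
# 88% of alerts false positives); everything else follows by arithmetic.
missed_fraction = 0.67           # sepsis cases the system failed to flag
false_positive_fraction = 0.88   # alerts that turned out not to be sepsis

sensitivity = 1 - missed_fraction                        # share of true cases caught
positive_predictive_value = 1 - false_positive_fraction  # share of alerts that were real

print(f"Sensitivity (recall): {sensitivity:.0%}")                      # ~33%
print(f"Positive predictive value: {positive_predictive_value:.0%}")   # ~12%
```

In other words, roughly one in eight alerts corresponded to an actual sepsis case, the kind of ratio that tends to produce alert fatigue.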

The good news is that these issues are not intractable. As algorithms become more refined and both patients and practitioners learn what to expect from AI, and what not to expect, the technology’s contribution to the health care system should be significant. With costs higher and patient outcomes poorer than anyone would like, it is good to know that something is at the ready that should make substantial improvements to both.



Author: Arthur Cole
Source: VentureBeat

