AI Weekly: Watson Health and semi-autonomous driving failures show the dangers of overpromising AI

This week in AI, reality came knocking — both for AI health tech and for semi-autonomous driving systems. IBM agreed to sell the assets of its Watson Health business to investment firm Francisco Partners following years of steep underperformance in the division. Meanwhile, the Insurance Institute for Highway Safety (IIHS), a nonprofit financed by the insurance industry, announced a new ratings program designed to evaluate how well “partial” automation systems like Tesla’s Autopilot safeguard against driver misuse.

The twin developments are emblematic of the AI industry’s perennial problem: acknowledging AI’s limitations. Slate’s Jeffrey Funk and Gary Smith do a thorough job of recapping overly optimistic AI predictions in recent years, including Ray Kurzweil’s pronouncement that computers will have “human-level” intelligence and “the ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy” by 2029.

As any expert will confirm, AI is nowhere close to human-level intelligence — emotional or otherwise. (Kurzweil’s new estimate is 2045.) Similarly, autonomous cars and AI-driven health care haven’t reached the lofty heights futurists once envisioned. It’s an important lesson in setting expectations — the future isn’t easy to predict, after all — but also an example of how the pursuit of profit supercharges the hype cycle. Under pressure to show ROI, some health tech and autonomous vehicle companies have collapsed under the weight of their overpromises, as this week’s news shows.

Riding high on Watson’s win against Jeopardy! champion Ken Jennings, IBM launched Watson Health in 2015, positioning the suite of AI-powered services as the future of augmented care. The company’s sales pitch was that Watson Health could analyze reams of medical data — far faster than any human doctor could, ostensibly — to generate insights that improve health outcomes.

IBM reportedly spent $4 billion beefing up its Watson Health division with acquisitions, but the technology proved to be ineffective at best — and harmful at worst. A STAT report found that the platform often gave poor and unsafe cancer treatment advice because Watson Health’s models were trained on erroneous, synthetic medical records rather than real patient data.

Watson Health’s demise can be partially attributed to IBM CEO Arvind Krishna’s shifting priorities, but growing cynicism about AI’s health capabilities no doubt also played a role. Studies have shown that nearly all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less likely to work well for patients in underrepresented countries. An audit of a UnitedHealth Group algorithm determined that it could underestimate by half the number of Black patients in need of greater care. And a growing body of work suggests that skin cancer-detecting algorithms tend to be less accurate for Black patients, in part because the underlying AI models are trained mostly on images of light-skinned patients.
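To see how audits like these surface such gaps, here is a minimal sketch (in Python) of a subgroup evaluation. The field names, groups, and numbers are hypothetical illustrations, not drawn from the studies cited above:

```python
# Hypothetical sketch: auditing a diagnostic model's accuracy by demographic group.
# Field names, groups, and figures are illustrative, not from the cited studies.
from collections import defaultdict

def sensitivity_by_group(records):
    """Per-group sensitivity: of patients who truly have the disease,
    what fraction did the model flag? Each record is a dict with keys
    'group', 'has_disease' (ground truth), and 'flagged' (model output)."""
    positives = defaultdict(int)  # true disease cases seen per group
    caught = defaultdict(int)     # cases the model correctly flagged
    for r in records:
        if r["has_disease"]:
            positives[r["group"]] += 1
            if r["flagged"]:
                caught[r["group"]] += 1
    return {g: caught[g] / positives[g] for g in positives}

# Toy data: the model catches most cases in one group but misses half in
# another — the kind of disparity a real-world audit is designed to expose.
records = (
    [{"group": "A", "has_disease": True, "flagged": True}] * 90
    + [{"group": "A", "has_disease": True, "flagged": False}] * 10
    + [{"group": "B", "has_disease": True, "flagged": True}] * 50
    + [{"group": "B", "has_disease": True, "flagged": False}] * 50
)
print(sensitivity_by_group(records))  # {'A': 0.9, 'B': 0.5}
```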

Semi-autonomous, AI-powered driving systems are coming under similar scrutiny, particularly as automakers ramp up the rollout of products they claim can nearly drive a car themselves. In October 2021, Tesla was ordered to turn over data to the National Highway Traffic Safety Administration as part of an investigation into the company’s cars crashing into parked vehicles. The suspicion was that Tesla’s Autopilot was responsible — either in part or in whole — for the dangerous behaviors.

It’s not an unreasonable assumption. Late last year, Tesla rolled out an Autopilot update with a bug that caused cars’ automatic braking systems to engage for no apparent reason. The glitch made cars rapidly decelerate as they traveled down the highway, putting them at risk of being rear-ended.

Tesla isn’t the only vendor that’s struggled to perfect semi-autonomous car technology. A sobering 2020 study from the American Automobile Association found that most semi-autonomous systems on the market — including those from Kia and BMW — ran into a problem every eight miles, on average. For example, when encountering a disabled vehicle, the systems caused a collision 66% of the time.

In 2016, GM was forced to push back the rollout of its Super Cruise feature due to unspecified issues. Ford recently delayed its BlueCruise system in order to “simplify” the tech.

That brings us to this week’s news: the Insurance Institute’s ratings program to evaluate the safety protections of semi-autonomous systems. The group hopes it will encourage automakers to create better designs once the first set of ratings, currently in development, is issued this year.

“The way many of these systems operate gives people the impression that they’re capable of doing more than they really are,” Insurance Institute research scientist Alexandra Mueller said in a statement. “But even when drivers understand the limitations of partial automation, their minds can still wander. As humans, it’s harder for us to remain vigilant when we’re watching and waiting for a problem to occur than it is when we’re doing all the driving ourselves.”

That’s all to say that AI — whether self-driving or disease-diagnosing — is fallible, just like the humans who design it. Visions of a Jetsons future might be tantalizing, but when lives are at stake, history has shown that it’s best to be overly cautious.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

Senior Staff Writer

VentureBeat
