
AI Weekly: Coronavirus chatbots use inconsistent data sources and privacy practices

It’s been widely reported that U.S. hospital systems — particularly in hotspots like New York City, Detroit, Chicago, and New Orleans — are overwhelmed by the influx of patients affected by COVID-19. There’s a nationwide ventilator shortage. Convention centers and public parks have been repurposed as overflow wards. And waiting times at some call and testing centers average multiple hours.

Clearly, there’s a real and present need for triaging solutions that ensure people at risk receive treatment expeditiously. Chatbots have been offered as a solution — tech giants including IBM, Facebook, and Microsoft have championed them as effective informational tools. But there are troubling disparities in the way these chatbots source and handle data, which in the worst case could lead to inconsistent health outcomes.

We asked six companies that provide COVID-19 chatbot solutions to governments, nonprofits, and health systems — Clearstep, IPsoft, Quiq, Drift, LifeLink, and Orbita — to reveal the sources of their chatbots’ COVID-19 information and their vetting processes, as well as whether they collect personally identifiable information (PII) and, if so, how they handle it.

Quality and sources of information

Unsurprisingly, but nonetheless concerningly, sourcing and vetting processes vary widely among chatbot providers. While some claimed to review data before serving it to their users, others demurred on the question, instead insisting that their chatbots are intended to be used for educational — not diagnostic — purposes.

Clearstep, Orbita, and LifeLink told VentureBeat that they run all of their chatbots’ information by medical professionals.

Clearstep says that it uses data from the Centers for Disease Control and Prevention (CDC) and protocols “trusted by over 90% of the nurse call centers across the country.” The company recruits chief medical and chief medical informatics officers from its customer institutions, as well as internal medicine and emergency medicine physicians in the field, to review content with their clinical review teams and provide actionable feedback.

Orbita also draws on CDC guidelines, and it has agreements in place to provide access to content from trusted partners such as the Mayo Clinic. Its in-house clinical team prioritizes the use of that content, which it claims is vetted by “leading institutions.”

LifeLink, too, aligns its questions, risk algorithms, and care recommendations with those of the CDC, supplemented with an optional question-and-answer module. But it also makes clear that it retains hospital clinical teams to sign off on all of its chatbots’ data.
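
To illustrate what CDC-aligned screening logic of this kind can look like, here is a minimal, hypothetical Python sketch of a rule-based triage step. The symptoms, thresholds, and risk tiers below are illustrative assumptions, not LifeLink’s or the CDC’s actual questions or care recommendations.

```python
# Hypothetical rule-based COVID-19 screening step, for illustration only.
# Symptom names, thresholds, and risk tiers are assumptions; they are not
# LifeLink's or the CDC's actual protocol.

def triage(answers: dict) -> str:
    """Map a user's screening answers to a coarse risk tier."""
    emergency_signs = {"trouble_breathing", "persistent_chest_pain", "bluish_lips"}
    if emergency_signs & set(answers.get("symptoms", [])):
        return "emergency: seek care immediately"

    score = 0
    if answers.get("fever_f", 0) >= 100.4:
        score += 2
    if "cough" in answers.get("symptoms", []):
        score += 1
    if answers.get("close_contact_with_confirmed_case"):
        score += 2
    if answers.get("high_risk_group"):  # e.g., older adults, chronic conditions
        score += 1

    if score >= 4:
        return "high risk: contact a clinician for testing and intake"
    if score >= 2:
        return "moderate risk: start home isolation and monitor symptoms"
    return "low risk: follow general prevention guidance"


print(triage({
    "fever_f": 101.2,
    "symptoms": ["cough"],
    "close_contact_with_confirmed_case": True,
}))
```

In a deployed chatbot, the questions feeding a function like this would be the parts reviewed and signed off by clinical teams, which is why the vetting practices described above matter.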

By contrast, IPsoft says that while its chatbot sources from the CDC in addition to the World Health Organization (WHO), its content isn’t further reviewed by internal or external teams. (To be clear, IPsoft notes the chatbot isn’t intended to offer medical advice or diagnosis but rather to “help users evaluate their own situations using verifiable information from authorized sources.”)

Quiq similarly says that its bot “passively” serves information that it does not itself vet, drawing on undisclosed approved sources for COVID-19 information along with local health authority, CDC, and White House materials. “Quiq is the technology platform but the data that informs the bot is provided by the client,” a spokesperson told VentureBeat via email. In its partnership with Knoxville, for instance, the questions and answers are based on the same data provided by the city’s 3-1-1 call centers.
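
To make concrete what a “passive,” client-fed bot can look like, here is a minimal hypothetical Python sketch that matches a user’s question against a client-supplied Q&A set and returns only that content. The questions, answers, and matching threshold are illustrative assumptions, not Quiq’s implementation.

```python
# Hypothetical sketch of a "passive" FAQ bot that only serves client-provided answers.
# The questions and answers below are made up; a real deployment would load them
# from the client's own content (for example, a city's 3-1-1 knowledge base).
import difflib

CLIENT_FAQ = {
    "where can i get tested for covid-19": "Testing sites and hours are listed on the city health department's website.",
    "what are the symptoms of covid-19": "Common symptoms include fever, cough, and shortness of breath.",
    "is there a curfew in effect": "Check the city's emergency orders page for current restrictions.",
}

def answer(user_question: str) -> str:
    """Return the closest client-provided answer; the bot generates no content of its own."""
    query = user_question.lower().strip("?! .")
    match = difflib.get_close_matches(query, CLIENT_FAQ.keys(), n=1, cutoff=0.4)
    if match:
        return CLIENT_FAQ[match[0]]
    return "Sorry, I don't have information on that. Please call 3-1-1."

print(answer("Where can I get tested for COVID-19?"))
```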

Drift’s chatbot uses CDC guidelines as a template, and it’s customizable based on organizations’ response plans, but it also carries a disclaimer that it isn’t to be used as a substitute for medical advice, diagnoses, or treatment.

Data privacy

We found that the COVID-19 chatbots reviewed are as inconsistent about data handling and collection as they are about their sources of information. That said, none appear to be in violation of HIPAA, the U.S. law that establishes standards to protect individual medical records and other personal health information.

Clearstep says that its chatbot doesn’t collect information that would allow it to identify a particular person. Furthermore, all data the chatbot collects is anonymized, and health information is encrypted in transit and at rest and stored in the HIPAA-compliant app hosting platform Healthcare Blocks.
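
As a rough illustration of what anonymization and encryption at rest can involve, here is a minimal Python sketch that pseudonymizes a user identifier with a keyed hash and encrypts symptom text before storage. It assumes the third-party cryptography package and is not a description of Clearstep’s actual pipeline.

```python
# Minimal sketch of pseudonymizing and encrypting chatbot health data at rest.
# This is an illustrative assumption, not Clearstep's (or any vendor's) actual pipeline.
# Requires the third-party package:  pip install cryptography
import hmac
import hashlib
from cryptography.fernet import Fernet

HASH_KEY = b"replace-with-a-secret-key"   # kept outside the data store
ENCRYPTION_KEY = Fernet.generate_key()    # in practice, loaded from a key manager
fernet = Fernet(ENCRYPTION_KEY)

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash so records can't be tied back to a person."""
    return hmac.new(HASH_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def encrypt_at_rest(symptom_text: str) -> bytes:
    """Encrypt health information before it is written to storage."""
    return fernet.encrypt(symptom_text.encode())

record = {
    "user": pseudonymize("jane.doe@example.com"),
    "symptoms": encrypt_at_rest("fever and dry cough for three days"),
}
print(record["user"][:16], fernet.decrypt(record["symptoms"]).decode())
```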

LifeLink, for its part, says that all of its chatbot conversations occur in a HIPAA-compliant browser session. No PII is collected during screening; the only data retained are symptoms and travel, contact, and special-population risk factors. Moderate- and high-risk patients move on to clinical intake for appointments, during which the chatbot collects health information submitted directly to the hospital system, via integration with its scheduling systems, in preparation for the visit.

IPsoft is a bit vaguer about its data collection and storage practices, but it says that its chatbot doesn’t collect private health information or record conversations or data. Quiq also says that it doesn’t collect personal information or health data. And Drift says that it requires users to opt in to a self-assessment and agree to clinical terms and conditions.

As for Orbita, it says that its premium chatbot platform — which is HIPAA-compliant — collects personal health information, but that its free chatbot does not.

Challenges ahead

The differences in the various COVID-19 chatbot products deployed publicly are problematic, to say the least. While we examined only a small sampling, our review revealed that few use the same sources of information, vetting processes, or data collection and storage policies. For the average user, who isn’t likely to read the fine print of every chatbot they use, this could result in confusion. A test of eight COVID-19 chatbots conducted by Stat found that assessments of a common set of symptoms ranged from “low risk” to “start home isolation.”

While companies are normally loath to disclose their internal development processes for competitive reasons, greater transparency around COVID-19 chatbots’ development might help to achieve consistency in the bots’ responses. A collaborative approach, in tandem with disclaimers about the chatbots’ capabilities and limitations, seems the responsible way forward as tens of millions of people seek answers to critical health questions.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer


