
MIT researchers say their AI model can identify asymptomatic COVID-19 carriers

Researchers at MIT say they’ve developed an algorithm that can diagnose COVID-19 by the sound of someone’s cough, even if that person is asymptomatic. In a paper published in the IEEE Open Journal of Engineering in Medicine and Biology, the team reports that their approach distinguishes between infected and healthy individuals through “forced-cough” recordings contributed via smartphones, laptops, and other mobile devices.

Applying AI to discern the cause of a cough isn’t a new idea. Last year, a group of Australian researchers developed a smartphone app that could ostensibly identify respiratory disorders like pneumonia and bronchitis by “listening” to a person’s exhalations. The potential for bias exists in these systems — algorithms trained on imbalanced or unrepresentative datasets can lead to worse health outcomes for certain user groups — but studies suggest they could be a useful tool on the front lines of the coronavirus pandemic.

The MIT researchers, who had been developing a model to detect signs of Alzheimer’s from coughs, trained their system on tens of thousands of samples of coughs as well as spoken words. Prior research suggests the quality of the sound “mmmm” can indicate how weak or strong a person’s vocal cords are, so the team trained a model on an audiobook dataset with more than 1,000 hours of speech to pick out the word “them” from words like “the” and “then.” They then trained a second model to distinguish emotions in speech, using a dataset of actors intoning emotional states such as neutral, calm, happy, and sad. Finally, they trained a third model on a database of coughs to discern changes in lung and respiratory performance.
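As a rough illustration of that three-model design (not the MIT team’s actual code or architecture), the sketch below runs three small classifiers over the same audio features, one per acoustic biomarker, and combines their outputs into a single score. The network sizes, feature dimensions, and class names are assumptions made for the example.

```python
# Hypothetical sketch of a three-branch cough screener: three independent
# classifiers over the same MFCC-like input, one per acoustic biomarker
# (vocal-cord strength, emotional state, respiratory performance).
# All architecture details here are illustrative assumptions.
import torch
import torch.nn as nn

class BiomarkerBranch(nn.Module):
    """Small CNN that maps an MFCC-like spectrogram to a biomarker embedding."""
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, embed_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)   # (batch, 32)
        return self.proj(h)               # (batch, embed_dim)

class CoughScreener(nn.Module):
    """Concatenates the three biomarker embeddings and emits a COVID-19 score."""
    def __init__(self):
        super().__init__()
        self.vocal_cords = BiomarkerBranch()
        self.emotion = BiomarkerBranch()
        self.respiratory = BiomarkerBranch()
        self.head = nn.Linear(3 * 32, 1)

    def forward(self, mfcc):
        z = torch.cat(
            [self.vocal_cords(mfcc), self.emotion(mfcc), self.respiratory(mfcc)],
            dim=1,
        )
        return torch.sigmoid(self.head(z))  # probability-like score per recording

# Smoke test on a fake batch of 4 one-channel "MFCC images" (40 bins x 100 frames).
if __name__ == "__main__":
    scores = CoughScreener()(torch.randn(4, 1, 40, 100))
    print(scores.shape)  # torch.Size([4, 1])
```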

The coughs came from a website launched in April that allowed people to record a series of coughs and fill out a survey, which asked things like which symptoms they were experiencing, whether they had COVID-19, and whether they were diagnosed through an official test. It also asked contributors to note any relevant demographic information including their gender, geographical location, and native language.
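A hypothetical schema for one such submission might look like the following; the field names and types are illustrative assumptions, not the project’s actual data model.

```python
# Illustrative record schema for a crowdsourced cough submission with the
# survey fields described above. Names and defaults are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CoughSubmission:
    audio_path: str                                     # forced-cough recording from the contributor's device
    symptoms: List[str] = field(default_factory=list)   # e.g. ["fever", "fatigue"]
    has_covid: Optional[bool] = None                    # self-reported status, if known
    confirmed_by_test: bool = False                     # whether an official test backed the report
    gender: Optional[str] = None
    location: Optional[str] = None
    native_language: Optional[str] = None

sample = CoughSubmission(audio_path="coughs/0001.wav",
                         symptoms=["dry cough"],
                         has_covid=True,
                         confirmed_by_test=True)
print(sample)
```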

The researchers collected more than 70,000 recordings amounting to some 200,000 forced-cough audio samples. (Around 2,500 recordings were submitted by people who were confirmed to have COVID-19, including those who were asymptomatic.) A portion of these (the roughly 2,500 COVID-19-associated recordings, along with 2,500 recordings randomly selected from the collection to balance the dataset) was used to train the third model.
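The balancing step described here, keeping every COVID-19-positive recording and drawing an equal number of other recordings at random, can be sketched in a few lines; the record format, labels, and counts below are placeholders.

```python
# Minimal sketch of balancing a skewed dataset: keep all positive records and
# randomly sample an equal number of the rest. Labels and counts are illustrative.
import random

def balance_dataset(records, positive_label="covid_positive", seed=0):
    """Return all positive records plus an equal number of randomly drawn negatives."""
    positives = [r for r in records if r["label"] == positive_label]
    negatives = [r for r in records if r["label"] != positive_label]
    rng = random.Random(seed)
    sampled_negatives = rng.sample(negatives, k=min(len(positives), len(negatives)))
    balanced = positives + sampled_negatives
    rng.shuffle(balanced)
    return balanced

# Toy records standing in for the ~200,000 cough samples.
records = [{"id": i, "label": "covid_positive" if i < 2500 else "other"}
           for i in range(200_000)]
train_set = balance_dataset(records)
print(len(train_set))  # 5000
```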

After combining the model trained on the audiobook snippets, the emotional state detector, and the cough classifier into a single ensemble, the team tested it on 1,000 recordings from the cough dataset. They claim it identified 98.5% of coughs from people confirmed to have COVID-19 and correctly detected all of the coughs from asymptomatic carriers.
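Those headline numbers amount to sensitivity (the share of true positives the model flags), reported overall and for the asymptomatic subgroup. Below is a minimal sketch of that calculation on synthetic placeholder data rather than the team’s hold-out set.

```python
# Sensitivity (recall) on positive cases, overall and for the asymptomatic
# subgroup. The hold-out data here is a tiny synthetic placeholder.
def sensitivity(y_true, y_pred):
    """Fraction of positive cases the model flags as positive."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return float("nan")
    return sum(p for _, p in positives) / len(positives)

# Toy hold-out set: (true label, asymptomatic flag, model prediction).
holdout = [(1, True, 1), (1, False, 1), (1, False, 0), (0, False, 0), (0, True, 0)]
y_true = [t for t, _, _ in holdout]
y_pred = [p for _, _, p in holdout]
asympt = [(t, p) for t, a, p in holdout if a]

print("overall sensitivity:", sensitivity(y_true, y_pred))
print("asymptomatic sensitivity:",
      sensitivity([t for t, _ in asympt], [p for _, p in asympt]))
```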

The MIT researchers stress that the model isn’t meant to diagnose symptomatic people. Rather, they hope to use it as the basis of a free prescreening app, and they say they’re partnering with several hospitals to collect larger, more diverse sets of cough recordings to further train the model and improve its accuracy.


Author: Kyle Wiggers
Source: Venturebeat
