
Facebook and NYU use artificial intelligence to make MRI scans four times faster

If you’ve ever had an MRI scan before, you’ll know how unsettling the experience can be. You’re placed in a claustrophobia-inducing tube and asked to stay completely still for up to an hour while unseen hardware whirs, creaks, and thumps around you like a medical poltergeist. New research, though, suggests AI can help with this predicament by making MRI scans four times faster, getting patients in and out of the tube quicker.

The work is a collaborative project called fastMRI between Facebook’s AI research team (FAIR) and radiologists at NYU Langone Health. Together, the scientists trained a machine learning model on pairs of low-resolution and high-resolution MRI scans, using this model to “predict” what a final MRI scan looks like from just a quarter of the usual input data. That means scans can be completed faster, with less hassle for patients and quicker diagnoses.
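
To make that training setup concrete, here is a minimal, hypothetical PyTorch sketch of the paired-data idea. The tiny network, the undersample() helper, and the random stand-in data are all illustrative assumptions, not the team’s actual code; the real models and data pipeline are far more sophisticated.

```python
# Hypothetical sketch of training on pairs of undersampled and fully sampled scans.
# All names and data here are placeholders, not the fastMRI team's actual code.
import torch
import torch.nn as nn

class TinyReconstructor(nn.Module):
    """Stand-in for a real reconstruction network (the real models are far larger)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def undersample(kspace, keep_fraction=0.25):
    """Zero out most k-space columns to mimic acquiring ~a quarter of the usual data."""
    mask = torch.rand(kspace.shape[-1]) < keep_fraction
    return kspace * mask.to(kspace.dtype)

model = TinyReconstructor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

full_images = torch.rand(8, 1, 64, 64)  # stand-in for fully sampled ground-truth scans

for step in range(10):
    kspace = torch.fft.fft2(full_images)                      # full k-space of the ground truth
    zero_filled = torch.fft.ifft2(undersample(kspace)).abs()  # degraded image from ~25% of the data
    prediction = model(zero_filled)                           # network restores the missing detail
    loss = loss_fn(prediction, full_images)                   # compare against the real scan
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The loop captures the core idea from the paragraph above: simulate a scan acquired with only about a quarter of the usual raw data, ask the network to reconstruct the full-quality image, and penalize any difference from the fully sampled ground truth.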

“It’s a major stepping stone to incorporating AI into medical imaging,” Nafissa Yakubova, a visiting biomedical AI researcher at FAIR who worked on the project, tells The Verge.

The reason artificial intelligence can be used to produce the same scans from less data is that the neural network has essentially learned an abstract idea of what a medical scan looks like by examining the training data. It then uses this to make a prediction about the final output. Think of it like an architect who’s designed lots of banks over the years. They have an abstract idea of what a bank looks like, and so they can create a final blueprint faster.

“The neural net knows about the overall structure of the medical image,” Dan Sodickson, professor of radiology at NYU Langone Health, tells The Verge. “In some ways what we’re doing is filling in what is unique about this particular patient’s [scan] based on the data.”



The AI software can be incorporated into existing MRI scanners with minimal hassle, say researchers.
Image: FAIR / NYU

The fastMRI team has been working on this problem for years, but today they are publishing a clinical study in the American Journal of Roentgenology that they say proves the trustworthiness of their method. The study asked radiologists to make diagnoses based on both traditional MRI scans and AI-enhanced scans of patients’ knees, and it reports that doctors made exactly the same assessments from both types of scan.

“The key word here on which trust can be based is interchangeability,” says Sodickson. “We’re not looking at some quantitative metric based on image quality. We’re saying that radiologists make the same diagnoses. They find the same problems. They miss nothing.”

This concept is extremely important. Although machine learning models are frequently used to create high-resolution data from low-resolution input, this process can often introduce errors. For example, AI can be used to upscale low-resolution imagery from old video games, but humans have to check the output to make sure it matches the input. And the idea of AI “imagining” an incorrect MRI scan is obviously worrying.

The fastMRI team, though, says this isn’t an issue with their method. First, the input data used to create the AI scans completely covers the target area of the body: the machine learning model isn’t guessing what a final scan looks like from just a few puzzle pieces; it has all the pieces it needs, just at a lower resolution. Second, the scientists built a check system for the neural network based on the physics of MRI scans. At regular intervals during the creation of a scan, the AI system checks that its output data matches what is physically possible for an MRI machine to produce.
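
As a rough illustration of that second safeguard, here is a hedged sketch of the kind of data-consistency step commonly used in MRI reconstruction: wherever the scanner genuinely measured k-space data, the reconstruction is forced to agree with the measurement, and the network’s output is kept only for the gaps. The function below is a simplified assumption, not fastMRI’s exact mechanism.

```python
# Hedged sketch of a "data consistency" check: at sampled k-space locations,
# the reconstruction must agree with what the scanner actually measured; the
# network's guesses are kept only for the unsampled gaps. Simplified assumption,
# not fastMRI's exact mechanism.
import torch

def data_consistency(reconstruction, measured_kspace, sample_mask):
    """Overwrite the reconstruction's k-space with measured values at sampled locations."""
    predicted_kspace = torch.fft.fft2(reconstruction)
    corrected = torch.where(sample_mask, measured_kspace, predicted_kspace)
    return torch.fft.ifft2(corrected).abs()
```

Applied at regular intervals during reconstruction, a step like this keeps every intermediate image tethered to data the scanner actually measured.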

A traditional MRI scan created from normal input data, known as k-space data.
GIF: FAIR / NYU
An AI-enhanced MRI scan created from a quarter of normal input data.
GIF: FAIR / NYU

“We don’t just allow the network to create any arbitrary image,” says Sodickson. “We require that any image generated through the process must have been physically realizable as an MRI image. We’re limiting the search space, in a way, making sure that everything is consistent with MRI physics.”

Yakubova says it was this particular insight, which only came about after long discussions between the radiologists and the AI engineers, that enabled the project’s success. “Complementary expertise is key to creating solutions like this,” she says.

The next step, though, is getting the technology into hospitals where it can actually help patients. The fastMRI team is confident this can happen fairly quickly, perhaps in just a matter of years. The training data and model they’ve created are completely open access and can be incorporated into existing MRI scanners without new hardware. And Sodickson says the researchers are already in talks with the companies that produce these scanners.

Karin Shmueli, who heads the MRI research team at University College London and was not involved with this research, told The Verge that this would be a key step in moving the technology forward.

“The bottleneck in taking something from research into the clinic is often adoption and implementation by manufacturers,” says Shmueli. She added that work like fastMRI is part of a wider and extremely promising trend of incorporating artificial intelligence into medical imaging. “AI is definitely going to be more in use in the future,” she says.


Author: James Vincent.
Source: The Verge
