Salesforce claims its AI can spot signs of breast cancer with 92% accuracy

Salesforce today peeled back the curtains on ReceptorNet, a machine learning system that researchers at the company developed in partnership with clinicians at the Lawrence J. Ellison Institute for Transformative Medicine of USC. The system, which determines a biomarker critical to oncologists deciding on the appropriate treatment for breast cancer patients, achieved 92% accuracy in a study published in the journal Nature Communications.

Breast cancer affects more than 2 million women each year, with around one in eight women in the U.S. developing the disease over the course of their lifetime. In 2018 in the U.S. alone, there were also 2,550 new cases of breast cancer in men. And rates of breast cancer are increasing in nearly every region around the world.

In an effort to address this, Salesforce researchers developed an algorithm — the aforementioned ReceptorNet — that can predict hormone-receptor status from inexpensive and ubiquitous images of tissue. Typically, breast cancer cells extracted during a biopsy or surgery are tested to see whether they contain proteins that act as estrogen or progesterone receptors. (When the hormones estrogen and progesterone attach to these receptors, they fuel the cancer's growth.) But the specially stained slides used for that test are less widely available and require a pathologist to review them.

In contrast to the immunohistochemistry process favored by clinicians, which requires a microscope, tends to be expensive, and is not readily available in parts of the world, ReceptorNet determines hormone receptor status from hematoxylin and eosin (H&E) stained slides, which reveal the shape, size, and structure of cells. Salesforce researchers trained the system on several thousand H&E slide images from cancer patients at "dozens" of hospitals around the world.
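
The article doesn't go into ReceptorNet's architecture, but the setup it describes, slide-level hormone-receptor labels paired with very large H&E images, is commonly handled by splitting each slide into patches and aggregating per-patch features into a single slide-level prediction, for example with attention-based multiple-instance learning. The sketch below is a minimal, generic illustration of that pattern in PyTorch; every module name, dimension, and hyperparameter is an assumption for illustration, not Salesforce's implementation.

```python
# Illustrative sketch only: a generic attention-based multiple-instance
# learning (MIL) classifier for slide-level prediction from H&E patches.
# All names, sizes, and hyperparameters are assumptions; this is not
# Salesforce's ReceptorNet implementation.
import torch
import torch.nn as nn


class AttentionMILClassifier(nn.Module):
    """Aggregates per-patch features into one slide-level HR+/HR- score."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        # Embed per-patch features (e.g., from a CNN backbone run over tiles).
        self.embed = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())
        # Attention network scores how much each patch contributes.
        self.attention = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.Tanh(), nn.Linear(64, 1)
        )
        # Final binary head: hormone-receptor positive vs. negative.
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (num_patches, feat_dim) for one slide.
        h = self.embed(patch_feats)                  # (P, hidden_dim)
        a = torch.softmax(self.attention(h), dim=0)  # (P, 1) patch weights
        slide_repr = (a * h).sum(dim=0)              # attention-weighted pooling
        return self.classifier(slide_repr)           # logit for HR status


if __name__ == "__main__":
    model = AttentionMILClassifier()
    fake_patches = torch.randn(200, 512)  # 200 patch embeddings from one slide
    logit = model(fake_patches)
    print("Predicted probability HR-positive:", torch.sigmoid(logit).item())
```

In practice the patch features would come from a backbone network run over tiles of the whole-slide image, and the learned attention weights give a rough view of which tissue regions drove the prediction.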

[Image: Salesforce ReceptorNet. An illustrative interpretation of how AI can spot what the human eye can't see. Image Credit: Salesforce]

Research has shown that much of the data used to train algorithms for diagnosing diseases may perpetuate inequalities. Recently, a team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study, Stanford University researchers found that most of the U.S. data used in studies of medical applications of AI comes from California, New York, and Massachusetts.

But Salesforce says that when it analyzed ReceptorNet for signs of bias along age, race, and geographic lines, it found no statistically significant difference in the system's performance. The company also says ReceptorNet delivered accurate predictions regardless of differences in how the tissue samples it analyzed were prepared.
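
The article doesn't describe how that bias analysis was performed. A common way to run such a check is to compute the same evaluation metric separately for each demographic subgroup and estimate the uncertainty around each figure, for example by bootstrapping. The sketch below shows that generic approach with scikit-learn; the data, column names, and subgroups are hypothetical and are not Salesforce's analysis.

```python
# Illustrative sketch only: comparing a model's performance across
# demographic subgroups. Data, column names, and groups are hypothetical;
# this is not the analysis Salesforce performed.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical evaluation table: true label, model score, and subgroup.
df = pd.DataFrame({
    "label": rng.integers(0, 2, size=1000),
    "score": rng.random(size=1000),
    "age_group": rng.choice(["<50", "50-65", ">65"], size=1000),
})

# Bootstrap the AUC within each subgroup to gauge uncertainty per estimate.
for group, sub in df.groupby("age_group"):
    aucs = []
    for _ in range(200):
        sample = sub.sample(frac=1.0, replace=True)
        if sample["label"].nunique() < 2:
            continue  # AUC is undefined without both classes present
        aucs.append(roc_auc_score(sample["label"], sample["score"]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    print(f"{group}: AUC 95% CI ~ [{lo:.3f}, {hi:.3f}]")
```

Overlapping confidence intervals across subgroups are only a rough indication of comparable performance; a formal claim of no statistically significant difference would normally rest on an explicit hypothesis test.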

Salesforce believes systems like ReceptorNet could, if deployed clinically, help reduce the cost of care and the time it takes to begin breast cancer treatment while improving accuracy and delivering better health outcomes for patients. In the short term, ReceptorNet lays the foundation for future studies comparing the clinical workflow of pathologists with and without this type of AI, which could better reveal its potential.

Beyond Salesforce, a number of tech giants have invested in — and been criticized for — AI that can ostensibly diagnose cancer as reliably as oncologists can. Back in January, Google Health, the branch of Google focused on health-related research, clinical tools, and partnerships for health care services, released an AI model trained on over 90,000 mammogram X-rays that the company said achieved better results than human radiologists. Google claimed the algorithm could recognize more false negatives — the kind of images that look normal but contain breast cancer — than previous work, but some clinicians, data scientists, and engineers took issue with that statement. In a rebuttal, those critics wrote that the lack of detailed methods and code in Google's research "undermines its scientific value."


Author: Kyle Wiggers
Source: VentureBeat

