
How cloud AI infrastructure enables radiotherapy breakthroughs at Elekta

Presented by Microsoft + NVIDIA


Despite a host of challenges, some of the most successful examples of moving innovative AI applications into production come from healthcare. In this VB Spotlight event, learn how organizations in any industry can follow proven practices and leverage cloud-based AI infrastructure to accelerate their AI efforts.

Register to watch free, on-demand.


From pilot to production, AI is a challenge for every industry. But as a highly regulated, high-stakes sector, healthcare faces especially complex obstacles. Cloud-based infrastructure that’s “purpose-built” and optimized for AI has emerged as a key foundation for innovation and operationalization. By leveraging the flexibility of cloud and high-performance computing (HPC), enterprises in every industry are successfully expanding proofs of concept (PoCs) and pilots into production workloads.

VB Spotlight brought together Silvain Beriault, AI strategy lead and lead research scientist at Elekta, a top global innovator of precision radiotherapy systems for cancer treatment, and John K. Lee, AI platform and infrastructure principal lead at Microsoft Azure. They joined VB consulting analyst Joe Maglitta to discuss how cloud-based AI infrastructure has driven improved collaboration and innovation in Elekta’s worldwide R&D efforts to improve and expand the company’s brain imaging and MR-guided radiotherapy.

The big three benefits

Elasticity, flexibility and simplicity top the benefits of end-to-end, on-demand, cloud-based infrastructure-as-a-service (IaaS) for AI, according to Lee. 

Because enterprise AI typically begins with a PoC, Lee says, “cloud is a perfect place to start. You can get started with a single credit card. As models become more complex and the need for additional compute capacity increases, cloud is the perfect place to scale that job.” That includes scaling up (adding GPUs interconnected within a single host to increase the capacity of each server) and scaling out (adding host instances to raise overall system performance).
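To make Lee’s distinction concrete, here is a minimal sketch, assuming the Azure Machine Learning Python SDK v2, of how scale-up and scale-out choices surface when defining a GPU training cluster. The workspace coordinates, cluster name and VM SKU are placeholders, not anything Elekta or Microsoft described in the session.

    from azure.identity import DefaultAzureCredential
    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import AmlCompute

    # Placeholder workspace coordinates; replace with real values.
    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
        resource_group_name="<resource-group>",
        workspace_name="<workspace>",
    )

    gpu_cluster = AmlCompute(
        name="gpu-cluster",
        # Scale up: a larger multi-GPU VM SKU increases per-node capacity.
        size="Standard_ND96asr_v4",  # example 8-GPU SKU; choose per workload and region
        # Scale out: more nodes allow parallel or distributed training jobs.
        min_instances=0,             # scale to zero when idle to control cost
        max_instances=8,
        idle_time_before_scale_down=300,  # seconds before idle nodes are released
    )

    ml_client.compute.begin_create_or_update(gpu_cluster).result()

In this sketch, swapping the SKU changes per-node capacity (scale up), while raising max_instances changes how many nodes a job can fan out across (scale out).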

Cloud’s flexibility lets organizations manage workloads of any size, from enormous enterprise projects to smaller efforts that need less processing power. In either case, purpose-built cloud infrastructure services deliver far faster time-to-value and better TCO and ROI than building on-premises AI architecture from scratch, Lee explains.

As for simplicity, Lee says pre-tested, pre-integrated, pre-optimized hardware and software stacks, platforms, development environments and tools make it easy for enterprises to get started.

COVID accelerates Elekta’s cloud-based AI journey

Elekta is a medical technology company developing image-guided clinical solutions for the management of brain disorders and improved cancer care. When the COVID pandemic forced researchers out of their labs, company leaders saw an opportunity to accelerate and expand an effort, begun a few years earlier, to shift AI R&D to the cloud.

The division’s AI head knew that a more robust, accessible cloud-based architecture for its array of AI-powered solutions would help Elekta advance its mission of expanding access to healthcare, including in underserved countries.

On the cost side, Elekta also knew it would be difficult to estimate its current and future high-performance computing needs. The company weighed the expense and limitations of maintaining on-prem AI infrastructure; the overall cost and complexity extend far beyond purchasing GPUs and servers, Beriault notes.

“Trying to do that by yourself can get hard quite fast. With a framework like Azure and Azure ML, you get much more than access to GPUs,” he explains. “You get an entire ecosystem for doing AI experiments, documenting your AI experiments, sharing data across different R&D centers. You have a common MLOps tool.”
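One concrete piece of that ecosystem is experiment tracking. Azure ML exposes MLflow-compatible tracking, so a training run can record the parameters, metrics and artifacts that colleagues at other R&D centers later browse in the workspace. The sketch below is a generic illustration rather than Elekta’s code: it assumes the MLflow tracking URI already points at the workspace (inside an Azure ML job this is configured automatically), and the experiment, parameter and metric names are invented.

    import random
    import mlflow

    # Assumes the MLflow tracking URI already targets the Azure ML workspace;
    # within an Azure ML job this is set automatically.
    mlflow.set_experiment("organ-autocontouring-demo")  # hypothetical experiment name

    with mlflow.start_run():
        mlflow.log_param("learning_rate", 1e-3)
        mlflow.log_param("batch_size", 16)

        for epoch in range(10):
            # Stand-in for a real training epoch; replace with the actual loop.
            validation_dice = 0.70 + 0.02 * epoch + random.uniform(-0.01, 0.01)
            mlflow.log_metric("validation_dice", validation_dice, step=epoch)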

The pilot was straightforward: automating the contouring of organs in MRI images to accelerate the task of delineating the treatment target, as well as the organs at risk that should be spared from radiation exposure.

The ability to scale up and down was crucial for the project. In the past, “there were times where we would launch as many as ten training experiments in parallel to do some hyperparameter tuning of our model,” Beriault recalls. “Other times, we were just waiting for data curation to be ready, so we wouldn’t train at all. This flexibility was very important for us, given that we were, at the time, quite a small team.”
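For readers who want to picture what “ten training experiments in parallel” can look like in practice, the sketch below expresses such a run as a hyperparameter sweep with the Azure ML Python SDK v2. It is an illustration under assumed names only: the training script, environment, compute cluster, metric and search ranges are placeholders, not Elekta’s actual configuration.

    from azure.identity import DefaultAzureCredential
    from azure.ai.ml import MLClient, command
    from azure.ai.ml.sweep import Choice, Uniform

    # Reads workspace details from a local config.json; placeholder setup.
    ml_client = MLClient.from_config(credential=DefaultAzureCredential())

    # Base training command; train.py, the environment and the cluster are placeholders.
    base_job = command(
        code="./src",
        command="python train.py --lr ${{inputs.lr}} --batch_size ${{inputs.batch_size}}",
        inputs={"lr": 1e-3, "batch_size": 16},
        environment="azureml:my-training-env@latest",
        compute="gpu-cluster",
    )

    # Replace the fixed inputs with a search space, then configure the sweep.
    job_for_sweep = base_job(
        lr=Uniform(min_value=1e-4, max_value=1e-2),
        batch_size=Choice(values=[8, 16, 32]),
    )
    sweep_job = job_for_sweep.sweep(
        sampling_algorithm="random",
        primary_metric="validation_dice",  # hypothetical metric logged by train.py
        goal="maximize",
    )
    # Roughly ten trials running at once, mirroring the parallelism described above.
    sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=10)

    ml_client.jobs.create_or_update(sweep_job)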

Since the company already used the Azure framework, it turned to Azure ML for its infrastructure, as well as for support as teams learned to use the platform portal and APIs to begin launching jobs in the cloud. Microsoft worked with the team to build a data infrastructure specific to the company’s domain and to address crucial data security and privacy issues.

“As of today, we’ve expanded on auto-contouring, all using cloud-based systems,” Beriault says. “Using this infrastructure has allowed us to expand our research activities to more than 100 organs for multiple tumor sites. What’s more, scaling has allowed us to expand to other, more complex AI research in RT beyond simple segmentation, increasing the potential to positively impact patient treatments in the future.”

Choosing the right infrastructure partner

In the end, Beriault says adopting cloud-based architecture lets researchers focus on their work and develop the best possible AI models instead of building and “babysitting” AI infrastructure.

Choosing a partner who can provide that kind of service is crucial, Lee commented. A provider must bring a strong strategic partnership that keeps its products and services on the cutting edge. He says Microsoft’s collaboration with NVIDIA to develop foundations for enterprise AI would be critical for customers like Elekta. But there are other considerations, he adds.

“You should be reminding yourself, it’s not just about the product offerings or infrastructure. Do they have the whole ecosystem? Do they have the community? Do they have the right people to help you?”

Register to watch on-demand now!

Agenda

  • First-hand experience and advice about the best ways to accelerate development, testing, deployment and operation of AI models and services
  • The crucial role AI infrastructure plays in moving from PoCs and pilots into production workloads and applications
  • How a cloud-based, “AI-first approach” and front-line-proven best practices can help your organization, regardless of industry, more quickly and effectively scale AI across departments or the world

Speakers

  • Silvain Beriault, AI Strategy Lead and Lead Research Scientist, Elekta
  • John K. Lee, AI Platform & Infrastructure Principal Lead, Microsoft Azure
  • Joe Maglitta, Host and Moderator, VentureBeat


Author: VB Staff
Source: VentureBeat
