
‘Do more with less’: Why public cloud services are key for AI and HPC in an uncertain 2023

This article is part of a VB Lab Insights series on AI sponsored by Microsoft and Nvidia.

Don’t miss additional articles in this series providing new industry insights, trends and analysis on how AI is transforming organizations. Find them all here.  


Amid widespread uncertainty, enterprises in 2023 face new pressures to innovate profitably and improve sustainability and resilience, for less money. For organizations of all sizes and across industries, cautious C-suites, concerned with recession, inflation, valuations, fiscal policy, energy costs, the pandemic, supply chains, war and other geopolitical issues, have made “do more with less” the order of the day.

After two years of heavy investment, many businesses are reducing capital spending on technology and taking a closer look at IT outlays and ROI. Yet unlike many past periods of belt-tightening, the current uneasiness has not yet led to widespread, across-the-board cuts to technology budgets.

Public cloud and AI infrastructure services top budget items

To the contrary, recent industry surveys and forecasts clearly indicate that enterprise leaders remain willing to continue funding, and even accelerate, strategic optimization and transformation. That’s especially true for AI, sustainability, resiliency and innovation initiatives that use public clouds and services to support critical workloads such as drug discovery and real-time fraud detection.

Gartner predicts worldwide spending on public cloud services will reach nearly $600 billion in 2023, up more than 20% year over year. Infrastructure as a Service (IaaS) is expected to be the fastest-growing segment, with spending increasing nearly 30% to $150 billion, followed by Platform as a Service (PaaS), growing 23% to $136 billion.

“Current inflationary pressures and macroeconomic conditions are having a push-and-pull effect on cloud spending,” writes Sid Nag, Vice President Analyst at Gartner. “Cloud computing will continue to be a bastion of safety and innovation, supporting growth during uncertain times due to its agile, elastic and scalable nature.” The firm forecasts that spending growth on traditional (on-premises) technology will continue to decline through 2025, when it is eclipsed by cloud spending (Figure 1). Other researchers see similar growth in related areas, including AI infrastructure (Figure 2).

Figure 1: Global spending on cloud technology is expected to surpass traditional on-premises investments in 2025.

Omar Khan, General Manager of Microsoft Azure, says savvy enterprise budgeters continue to show a strong strategic belief in public cloud benefits and economics: most notably, elasticity for volatile market conditions and reduced costs for IT overhead and management, along with a more sophisticated appreciation for newer “multi-dimensional” capabilities such as accelerated AI processing.

Why public cloud makes business sense now

Leveraging public clouds to cost-effectively advance strategic business and technology initiatives makes good historical, present and future sense, says Khan. Today’s cloud services build on proven economics, deliver new capabilities for current corporate imperatives, and provide a flexible and reusable foundation for tomorrow. That’s especially true for cloud infrastructure and for scaling AI and HPC into production, and here’s why:

1. Public cloud infrastructure and services deliver superior economics

In the decade or so since cloud began to gain traction, one thing has become clear: cloud provides much more favorable economics than on-premises infrastructure.

An in-depth 2022 IDC analysis sponsored by Microsoft found a wide range of dramatic financial and business benefits from modernizing and migrating with public cloud, including a 37% drop in operating costs, a 391% ROI over three years, and $139 million in additional annual revenue per organization.

While not AI-specific, such dramatic results should impress even the most tight-fisted CFOs and technology committees. Compare that to a recent survey in which only 17% of respondents reported high utilization of hardware, software and cloud resources worth millions, much of it dedicated to AI.

Khan says that when making the case, avoid simplistic A-to-B workload cost comparisons. Instead, he advises focusing on the number that matters: total cost of ownership (TCO). Dave Salvator, Director of Product Marketing at Nvidia’s Accelerated Computing Group, notes that processing AI models on powerful, time-metered systems saves money because jobs finish faster and therefore cost less. Low utilization of IT resources, he adds, means organizations are sitting on unused capacity; they can show far better ROI and TCO by right-sizing in the cloud and using only what they need.
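To make that TCO argument concrete, here is a minimal back-of-envelope sketch comparing an amortized, lightly utilized on-premises GPU server against metered cloud instances sized to actual demand. All prices, lifetimes and utilization figures are hypothetical assumptions for illustration, not figures from Microsoft, Nvidia or the surveys cited above.

```python
# Hypothetical back-of-envelope TCO comparison: an amortized on-prem GPU server
# at low utilization vs. cloud instances billed only for hours actually used.
# All figures are illustrative assumptions, not vendor pricing.

HOURS_PER_YEAR = 8760

# On-premises assumptions (hypothetical)
server_capex = 150_000          # purchase price of a GPU server, amortized over 3 years
annual_opex = 20_000            # power, cooling, space and admin per year
utilization = 0.17              # fraction of hours the server does useful work

# Cloud assumptions (hypothetical)
cloud_rate_per_hour = 12.0      # metered price of a comparable GPU instance
useful_hours = utilization * HOURS_PER_YEAR

onprem_annual_tco = server_capex / 3 + annual_opex
cloud_annual_tco = cloud_rate_per_hour * useful_hours

print(f"Useful compute hours per year: {useful_hours:,.0f}")
print(f"On-prem annual TCO: ${onprem_annual_tco:,.0f}")
print(f"Cloud annual TCO:   ${cloud_annual_tco:,.0f}")
print(f"Effective on-prem cost per useful hour: ${onprem_annual_tco / useful_hours:,.2f}")
print(f"Cloud cost per useful hour:             ${cloud_rate_per_hour:,.2f}")
```

Under these assumed numbers, the idle on-premises capacity drives the effective cost per useful hour far above the metered cloud rate, which is the core of the right-sizing argument.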

2. Purpose-built cloud infrastructure and supercomputers meet the demanding requirements of AI

Infrastructure is increasingly understood as a fatal choke point for AI initiatives. Peter Rutten, IDC research vice president and global research lead on Performance Intensive Computing Solutions, says: “[Our] research consistently shows that inadequate or lack of purpose-built infrastructure capabilities are often the cause of AI projects failing.” He concludes: “AI infrastructure remains one of the most consequential but the least mature of infrastructure decisions that organizations make as part of their future enterprise.”

The reasons, while complex, boil down to this: performance requirements for AI and HPC are radically different from those of other enterprise applications. Unlike many conventional cloud workloads, increasingly sophisticated AI models with billions of parameters need massive amounts of processing power plus lightning-fast networking and storage at every stage. That holds for real-time applications ranging from natural language processing (NLP) and robotic process automation (RPA) to machine learning, deep learning and computer vision.

“Acceleration is really the only way to handle a lot of these cutting-edge workloads. It’s table stakes,” explains Nvidia’s Salvator. “Especially for training, because the networks continue to grow massively in terms of size and architectural complexity. The only way to keep up is to train in a reasonable time that’s measured in hours or perhaps days, as opposed to weeks, months, or possibly years.”
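A rough worked example shows why acceleration at scale is “table stakes.” The sketch below uses the common approximation that training compute is about 6 × parameters × tokens; the model size, token count, per-GPU throughput and utilization are all illustrative assumptions, not benchmark numbers.

```python
# Back-of-envelope training-time estimate for a large language model.
# Uses the common approximation: training FLOPs ~= 6 * parameters * tokens.
# All numbers are illustrative assumptions, not measurements.

params = 70e9            # 70B-parameter model (hypothetical)
tokens = 1.0e12          # 1 trillion training tokens (hypothetical)
flops_needed = 6 * params * tokens

def training_days(num_accelerators: int, peak_flops: float, utilization: float) -> float:
    """Days to train, given sustained throughput = peak * utilization per device."""
    sustained = num_accelerators * peak_flops * utilization
    return flops_needed / sustained / 86_400

# A single high-end GPU at ~300 TFLOPS peak (mixed precision), 40% utilization
print(f"1 GPU:      {training_days(1, 300e12, 0.4):,.0f} days")
# A 1,024-GPU cluster with fast interconnects, same per-GPU efficiency
print(f"1,024 GPUs: {training_days(1024, 300e12, 0.4):,.1f} days")
```

Under these assumptions a single accelerator would need tens of thousands of days, while a 1,024-GPU cluster brings the same job down to weeks, which is exactly the hours-to-years gap Salvator describes.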

These demands have driven new ways to deliver specialized scale-up and scale-out infrastructure in a public cloud environment, capable of handling the enormous demands of large language models (LLMs), transformer models and other fast-evolving approaches. These purpose-built architectures integrate advanced tensor-core GPUs and accelerators with software, high-bandwidth, low-latency interconnects and advanced parallel communications methods, interleaving computation and communication across a vast number of compute nodes.
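As one minimal sketch of that interleaving pattern, the example below (assuming PyTorch is installed) runs two data-parallel worker processes whose gradient all-reduce is overlapped with the backward pass by DistributedDataParallel. It uses the CPU-only gloo backend and a toy model purely for illustration; a production cluster would use NCCL over GPUs and high-bandwidth interconnects.

```python
# Minimal sketch of scale-out data-parallel training (illustrative only).
# DistributedDataParallel overlaps gradient all-reduce communication with
# backward-pass computation, the interleaving pattern described above.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank: int, world_size: int) -> None:
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(torch.nn.Linear(128, 10))          # toy model, replicated per process
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(3):
        x = torch.randn(32, 128)                   # each rank sees its own data shard
        y = torch.randint(0, 10, (32,))
        loss = loss_fn(model(x), y)
        loss.backward()                            # all-reduce overlaps with backward
        optimizer.step()
        optimizer.zero_grad()
        if rank == 0:
            print(f"step {step} loss {loss.item():.3f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```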

A hopeful sign: A recent IDC survey of more than 2,000 business leaders revealed a growing realization that purpose-built architecture will be crucial for AI success.

3. Public cloud optimization meets a wide range of pressing enterprise needs

In the early days, Microsoft’s Khan notes, much of the benefit from cloud came from optimizing technology spending to meet elasticity needs (“pay only for what you use”). Today, he says, the benefit is still rooted in moving from a fixed to a variable cost model. But, he adds, “more enterprises are realizing the benefits go beyond that” in advancing corporate goals. Consider these examples:

Everseen has developed a proprietary visual AI solution that can video-monitor, analyze and correct major problems in business processes in real time. Rafael Alegre, Chief Operating Officer of the Cork, Ireland-based solution builder, says the capability helps reduce “shrinkage” (the retail industry term for unaccounted inventory), increase mobile sales and optimize operations in distribution centers.

Mass General Brigham, the Boston-based healthcare partnership, recently deployed a medical imaging service running on an open cloud platform.  The system puts AI-based diagnostic tools into the hands of radiologists and other clinicians at scale for the first time and delivers patient insights from diagnostic imaging into clinical and administrative workflows. For example, a breast density AI model has reduced the results waiting period from several days to just 15 minutes. Women can now talk to a clinician about the results of their scan and discuss next steps before they leave the facility, rather than enduring the stress and anxiety of waiting for the outcome.

4. Energy is a three-pronged concern for enterprises worldwide

Energy prices have skyrocketed, especially in Europe. Power grids in some places have become unstable due to severe weather and natural disasters, overcapacity, terrorist attacks, poor maintenance and more. An influential 2018 Microsoft study found that using a cloud platform can be nearly twice as energy- and carbon-efficient as on-premises solutions. New best practices for optimizing energy efficiency on public clouds promise to help enterprises achieve sustainability goals, even (and especially) in a power environment in flux.

What’s next: Cloud-based AI supercomputing

IDC forecasts that by 2025, nearly 50% of all accelerated infrastructure for performance-intensive computing (including AI and HPC) will be cloud-based.  

To that end, Microsoft and Nvidia announced a multi-year collaboration to build one of the world’s most powerful AI supercomputers. The cloud-based system will help enterprises train, deploy and scale AI, including large, state-of-the-art models, on virtual machines optimized for AI distributed training and inference.

“We’re working together to bring supercomputing and AI to customers who otherwise have a barrier to entry,” explains Khan. “We’re also working to do things like making fractions of GPUs available through the cloud, so customers have access to what was previously very difficult to acquire on their own, so they can leverage the latest innovations in AI. We’re pushing the boundaries of what is possible.”
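To illustrate how fractional GPUs lower the barrier to entry, here is a trivial cost sketch. Technologies such as Nvidia’s Multi-Instance GPU can partition a single accelerator into as many as seven isolated instances; the hourly rate and partition size below are hypothetical assumptions, not Azure pricing.

```python
# Illustrative cost comparison: renting a whole cloud GPU instance vs. a
# fractional (e.g., 1/7) partition for a light inference workload.
# The hourly rate and partition size are hypothetical, not vendor pricing.

full_gpu_rate = 3.50            # $/hour for a whole GPU instance (assumption)
fraction = 1 / 7                # share of the GPU the workload actually needs
hours_per_month = 730

whole_cost = full_gpu_rate * hours_per_month
fractional_cost = full_gpu_rate * fraction * hours_per_month

print(f"Whole GPU:    ${whole_cost:,.0f} per month")
print(f"1/7 of a GPU: ${fractional_cost:,.0f} per month")
```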

In the best of times, public cloud services make clear economic sense for enterprise optimization, transformation, sustainability, innovation and AI. In uncertain times, it’s an even smarter move.

Learn more at Make AI Your Reality.

#MakeAIYourReality #AzureHPCAI #NVIDIAonAzure

VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.


Author: VB Staff
Source: VentureBeat
