
Do we have enough GPUs to manifest AI’s potential?

In 2023, few technologies have garnered as much attention, speculation and promise as AI. We are undoubtedly in the midst of an unprecedented AI hype cycle. 

In some ways, the moment is akin to a modern-day gold rush as innovators, investors and entrepreneurs clamor to capitalize on the technology’s promise and potential. 

Like California’s 19th-century gold rush, today’s frenzy has produced two types of entrepreneurs. Some are working hard to leverage AI to pursue the often elusive “next big thing” in tech. Others are selling proverbial picks and shovels. 

Accelerating GPU demand amid limited supply

With this demand for advanced AI comes an insatiable appetite for the graphics processing units (GPUs) that fuel the technology. Nvidia is the undisputed leader in this area, having recently exceeded Wall Street projections and pushed its valuation above $1 trillion.


Yet at the same time, there is a limited supply of GPUs, threatening to dampen AI’s impact just as its real-world potential reaches a fever pitch. 

Once largely popular among videogame players and computer hobbyists, GPUs saw surging demand during the pandemic as cryptocurrencies like Bitcoin became popular. These digital currencies require substantial computational power, and GPUs are well-suited for the task. As the value of cryptocurrencies surged, many people started mining them, creating a massive demand for GPUs.

Supply was further constrained by opportunistic businesses including scalpers, which often employ automated bots to rapidly purchase GPUs.

According to Goldman Sachs, the pandemic’s global GPU shortage impacted 169 industries.  

Do we have enough GPUs?

Now, the rise of large-scale deep learning projects and AI applications is pushing demand to a fever pitch. 

But the current production and availability of GPUs is insufficient to manifest AI’s ever-evolving potential. Many businesses face challenges in obtaining the necessary hardware for their operations, dampening their capacity for innovation.  

Even as manufacturers continue ramping up GPU production, many companies are already hobbled by limited access to the chips.

According to Fortune, OpenAI CEO Sam Altman privately acknowledged that GPU supply constraints were impacting the company’s business. 

In a Congressional hearing, Altman even asserted that the company's products would be better if fewer people used them, because hardware shortages slow performance.

The Wall Street Journal reports that AI founders and entrepreneurs are “begging sales people at Amazon and Microsoft for more power.” This has prompted some companies to purchase immense amounts of cloud computing capacity to reserve for future opportunities. 

How enterprises can adapt

Enterprises can’t wait for manufacturing techniques and supply chains to catch up with surging demand. However, they can adapt their approach to reduce chip demand and maximize innovation opportunities. Here’s how. 

Consider other solutions 

Not every problem requires AI, and its accompanying GPU-hungry computing capacity. 

For example, companies can leverage other computing solutions for work like data preprocessing and feature engineering. CPU-based machines can efficiently handle preprocessing tasks such as data cleaning, feature scaling and feature extraction.

These tasks are often performed before training a model and can be executed on CPUs without significant computational overhead.
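As a rough sketch of what that looks like in practice, a standard pandas and scikit-learn pipeline handles cleaning, scaling and extraction entirely on a CPU. The file and column names below are placeholders, not part of any real dataset:

```python
# CPU-only preprocessing sketch with pandas and scikit-learn.
# The file and column names ("sensor_a", "sensor_b") are placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

df = pd.read_csv("readings.csv")
df = df.dropna()                                      # data cleaning: drop incomplete rows

features = df[["sensor_a", "sensor_b"]]

scaled = StandardScaler().fit_transform(features)     # feature scaling: zero mean, unit variance
reduced = PCA(n_components=1).fit_transform(scaled)   # feature extraction via PCA
```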

At the same time, predictive maintenance, a common use case for AI where algorithms analyze sensor data to predict equipment failures, can be managed by less-capable computing solutions. 

Not all equipment or systems require advanced AI models for accurate predictions. In some cases, simpler statistical or rule-based approaches may be sufficient to identify maintenance needs, reducing the need for complex AI implementations.
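For illustration only, a rule-based check like the one below flags likely maintenance needs without any trained model or GPU. The field names and thresholds are hypothetical, standing in for whatever limits a maintenance team would actually set:

```python
# Rule-based maintenance check: plain Python, no trained model.
# Field names and thresholds below are hypothetical.
def needs_maintenance(reading: dict) -> bool:
    """Flag equipment whose sensor readings drift out of safe ranges."""
    return (
        reading["vibration_rms"] > 4.5        # vibration above alarm threshold (mm/s)
        or reading["bearing_temp_c"] > 85.0   # sustained over-temperature
        or reading["hours_since_service"] > 5000
    )

sample = {"vibration_rms": 5.1, "bearing_temp_c": 72.0, "hours_since_service": 3100}
print(needs_maintenance(sample))  # True: the vibration reading exceeds its threshold
```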

Similarly, AI-powered image and video analysis techniques have gained significant attention, but not all applications require AI for accurate results. Tasks like simple image categorization or basic object recognition can often be achieved with traditional computer vision techniques and algorithms without the need for complex deep-learning models.
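A minimal sketch with OpenCV illustrates the point for basic object detection via template matching; the image files and the 0.8 score threshold are placeholder assumptions, and no deep-learning model or GPU is involved:

```python
# Classical computer-vision sketch: template matching with OpenCV, CPU only.
# File names and the score threshold are illustrative placeholders.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("widget_template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene and score the match at each position
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_score, _, max_loc = cv2.minMaxLoc(result)

# A simple score threshold stands in for "object present / absent"
if max_score > 0.8:
    print(f"Object found near {max_loc} (score {max_score:.2f})")
else:
    print("Object not found")
```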

Finally, while AI can provide advanced analytics capabilities, companies sometimes rush to adopt AI-driven analytics platforms without carefully assessing their existing data infrastructure and needs. In some cases, traditional business intelligence tools or simpler statistical methods might be sufficient to derive insights from data without the need for AI complexity. 
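For example, a few lines of pandas can answer many descriptive questions that might otherwise be routed to an AI analytics platform; the dataset and column names here are invented for illustration:

```python
# Descriptive analytics with pandas: answers "which regions drive revenue?"
# without an AI platform. The file and column names are illustrative.
import pandas as pd

sales = pd.read_csv("sales.csv")

summary = (
    sales.groupby("region")["revenue"]
         .agg(total="sum", average="mean", orders="count")
         .sort_values("total", ascending=False)
)
print(summary)
```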

Develop more efficient AI algorithms

More efficient AI algorithms could reduce the processing power required for AI applications, making GPUs less necessary.

For instance, transfer learning lets a model pre-trained on one task be adapted to another. Because only a small part of the network typically needs updating, these models can often be fine-tuned on CPU-based machines for specific applications, even if they were originally trained on GPUs. This approach can be particularly useful in scenarios with limited computational resources.
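A minimal PyTorch sketch of that pattern, assuming torchvision's pre-trained ResNet-18 and a placeholder number of classes, freezes the backbone and fine-tunes only the new task head on a CPU:

```python
# Transfer-learning sketch: fine-tune only the classification head of a
# pre-trained ResNet-18 on CPU. Dataset wiring is omitted and NUM_CLASSES
# is a placeholder for your own task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5
device = torch.device("cpu")

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                           # freeze the pre-trained backbone

model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new, trainable task head
model = model.to(device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch; replace with a real DataLoader.
images = torch.randn(8, 3, 224, 224, device=device)
labels = torch.randint(0, NUM_CLASSES, (8,), device=device)

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```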

Support vector machines (SVMs) and Naive Bayes classifiers are other capable machine learning (ML) algorithms, covering classification and, in the case of SVMs, regression tasks. Both can be trained on a CPU and do not require a GPU.
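Both train in seconds on commodity hardware, as in this small scikit-learn sketch using the built-in Iris dataset:

```python
# CPU-only classical ML: an SVM and a Naive Bayes classifier in scikit-learn,
# trained and evaluated on the small built-in Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (SVC(kernel="rbf"), GaussianNB()):
    clf.fit(X_train, y_train)                             # trains in well under a second on a CPU
    print(type(clf).__name__, round(clf.score(X_test, y_test), 3))
```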

Find alternative ways to power AI applications

Exploring alternative hardware to power AI applications presents a viable route for organizations striving for efficient processing. Depending on the specific AI workload requirements, CPUs, field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) may be excellent alternatives.

FPGAs, which are known for their customizable nature, and ASICs, specifically designed for a particular use case, both have the potential to effectively handle AI tasks. However, it’s crucial to note that these alternatives might exhibit different performance characteristics and trade-offs.

For instance, while FPGAs offer flexibility and reprogrammability, they may not provide the raw computational power of GPUs. Similarly, while delivering high performance, ASICs lack the flexibility of FPGAs or GPUs. Careful evaluation is therefore essential before choosing the right hardware for specific AI tasks.

Moreover, offloading GPU workloads to cloud computing providers is another plausible solution for companies seeking efficient and scalable AI computation.

GPUs aren’t the only solution for high-performance computing. Depending on the specific AI workload, companies can explore alternative hardware accelerators that can deliver comparable results even when GPU hardware is scarce. 

Panning for GPU gold in the stream of AI

The incredible growth of AI and its associated technologies like deep learning, along with the surge in gaming, content creation and cryptocurrency mining, has created a profound GPU shortage that threatens to stall an era of innovation before it truly begins. 

This modern-day Gold Rush towards AI will require companies to adapt to operational realities, becoming more innovative, agile and responsive in the process. In this way, the GPU shortage presents both a challenge and an opportunity. 

Companies willing to adapt will be best positioned to thrive, while those that can't think outside the box will be stuck mining for gold without a pick and shovel.

Ab Gaur is founder and CEO of Verticurl and chief data and technology officer at Ogilvy.



