How moving AI to the edge can help solve the data center energy crisis

One of the least-discussed topics of the information age is the real-world cost of all the data we generate and consume. Our nomenclature for storing data doesn’t help — the “cloud” sounds wispy and ethereal, and the average user’s interactions with it are designed to be fast, easy, seamless and almost insubstantial.

Our mental picture is often that of a bunch of zeroes and ones floating above and around us, somewhere in cyberspace, untethered to our world, whose forms we can only make out and manipulate through the layers of glass and metal on our mobile device touchscreens and computer keyboards, like the flickering shadows on the walls of Plato’s proverbial cave.

But of course, there is a very real, tangible, physical toll to the cloud: the energy required to run the servers on which the data is stored and applications are run, and the greenhouse gases produced as a result.

On average, the “hyperscale” data centers used by large tech companies such as Google, Meta, Apple and Amazon each draw between 20 and 100 megawatts of power, enough to supply up to 37,000 homes. Though tech companies are proud to crow about their investments in solar, wind, hydro and other renewables for powering their data centers, the reality is that data centers, like most of the rest of the world, are still reliant on fossil fuels.
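
To make the scale concrete, here is a quick back-of-envelope calculation (a sketch: only the megawatt range and the homes figure come from the reporting above, and the homes-served comparison depends entirely on the assumed average household draw):

```python
# Back-of-envelope check on the figures above. Only the 20-100 MW range and
# the 37,000-home figure come from the article; the rest is arithmetic.
DATA_CENTER_MW = 100                  # upper end of the hyperscale range
HOURS_PER_YEAR = 24 * 365

annual_mwh = DATA_CENTER_MW * HOURS_PER_YEAR
print(f"Continuous draw over a year: {annual_mwh:,} MWh")       # 876,000 MWh

homes = 37_000
kw_per_home = DATA_CENTER_MW * 1_000 / homes
print(f"Implied average draw per home: {kw_per_home:.1f} kW")   # ~2.7 kW
```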


As data centers’ energy appetites grow, with projections indicating a leap from 3% to 4% of total global electricity consumption by 2030, companies must find alternatives.
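
For a sense of what that percentage shift means in absolute terms, a rough calculation follows (a sketch assuming global electricity consumption of roughly 25,000 terawatt-hours per year, a commonly cited ballpark that is not from the article; the 3% and 4% figures are the projections above):

```python
# Rough scale of the projected jump. GLOBAL_TWH is an assumed ballpark for
# global annual electricity consumption; the percentages are cited above.
GLOBAL_TWH = 25_000

today = 0.03 * GLOBAL_TWH      # ~750 TWh
by_2030 = 0.04 * GLOBAL_TWH    # ~1,000 TWh
print(f"Data centers today: ~{today:,.0f} TWh/year")
print(f"Projected by 2030:  ~{by_2030:,.0f} TWh/year")
print(f"Added demand:       ~{by_2030 - today:,.0f} TWh/year")
```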

One path that has emerged is that of increased investments in edge computing — that is, deploying smaller-scale computers, sensors, and servers not in a massive dedicated data center somewhere, but out in the field, on the floors of factories and retail outlets where work is being done and business is being physically transacted.

At the same time, the sudden burst of enterprise interest in generative AI has increased demand for graphics processing units (GPUs) and for the server space needed to store the vast volumes of data required for training large language models (LLMs) and other foundation models. In some ways, this is an unhelpful trend for the energy consumption of databases and data centers, as it acts as a countervailing force against the move toward lower-powered edge devices.

Or does it? Several companies have begun offering “AI on the edge” compute and software solutions, looking to provide organizations with the technology necessary for running AI applications out in the field, taking some of the energy demand away from the cloud and reducing overall energy needs, and therefore emissions.

The edge advantage: lower-power devices

The crux of edge computing’s allure lies in its capacity to mitigate the energy challenges posed by the digital transformation wave sweeping across the globe.

By reducing the amount of data transmitted over networks to central data centers for processing, edge computing minimizes energy consumption. In addition, most edge devices draw far less power than their data center or centralized compute counterparts.

The localized processing approach also means data is handled closer to where it is generated or needed, reducing latency and saving energy. The transition to edge computing is more than a mere technical shift; it’s a significant stride towards a more sustainable and energy-efficient computing landscape.
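
To illustrate the data-reduction principle, here is a minimal sketch (hypothetical function names and threshold, not any vendor’s API) in which an edge device aggregates raw sensor readings locally and forwards only a compact summary, rather than streaming every sample to a central data center:

```python
import statistics

# Hypothetical edge-side filter: collapse a window of raw sensor readings
# into a few summary statistics (plus anomalies) before anything crosses
# the network to the cloud.
ANOMALY_THRESHOLD = 90.0  # illustrative cutoff, e.g. degrees Celsius

def summarize_window(readings: list[float]) -> dict:
    """Reduce a window of raw readings to a small summary payload."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "anomalies": [r for r in readings if r > ANOMALY_THRESHOLD],
    }

# One minute of per-second readings (60 values) collapses to one record.
window = [72.0 + (i % 7) * 0.5 for i in range(60)]
payload = summarize_window(window)
print(payload)  # this small dict is all that gets transmitted upstream
```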

“AI at the edge is set to revolutionize enterprises by enhancing efficiency, enabling real-time decision-making, and fostering innovation,” wrote Krishna Rangasayee, CEO and founder of SiMa.ai, in an email to VentureBeat.

Rangasayee would know: SiMa.ai, a five-year-old startup based in San Diego, California, makes its own drag-and-drop, no-code AI app software and AI edge device chips.

In September 2023, SiMa introduced Palette Edgematic, a platform allowing enterprises to rapidly and easily build and deploy AI applications on edge devices, specifically those leveraging SiMa’s MLSoC silicon chips (manufactured to spec by the leading supplier Taiwan Semiconductor, TSMC). The company has already proven its worth to such important clientele as the U.S. military, showing that one edge deployment on a drone was able to boost video capture and analysis from 3 frames per second to 60.

“We knew what worked for AI and ML in the cloud would be rendered useless at the edge, so we set out to exceed the performance of the cloud and adhere to the power constraints of the edge,” Rangasayee said.

Edge requirements are different from data center requirements

Another company pursuing AI at the edge to reduce power requirements while still leveraging the analytical power of AI is Lenovo.

Though Lenovo is best known to consumers as a PC and device maker, its new TruScale for Edge and AI service, which also debuted in September 2023, channels the company’s hardware experience into a new form factor — the ThinkEdge SE455 V3 server with AMD’s EPYC 8004 series processors, designed to run quietly in the back office of a retail outlet, a grocery store, or even on a commercial fishing boat in the middle of the Atlantic Ocean.

Lenovo is also supplying software, namely 150+ turnkey AI solutions, through its new TruScale for Edge and AI subscription SaaS offering.

“Phones, tablets, laptops, cameras and sensors everywhere will double the world’s data over the next few years, making computing at the edge, or remote locations, critical to delivering on the promise of AI for all businesses,” said Scott Tease, General Manager of HPC and AI at Lenovo. “Across Lenovo, we are focused on bringing AI to the data through next-generation edge-to-cloud solutions.”

According to Lenovo’s estimates, fully “75% of compute” — the actual hardware/software mix needed to run applications — is poised to move toward the edge.

But acknowledging that this trend is coming is one thing. Building the infrastructure to make it happen is another, more challenging task entirely.

“The server technology needs to be able to withstand the environment, be compact and unobtrusive while delivering advanced computing capable of delivering AI-powered insights,” Tease said.

How would you like your edge: thick or thin?

Splunk, the enterprise data software firm recently acquired by Cisco for a staggering $28 billion, distinguishes between “thick edge” and “thin edge,” and helps its customers identify which of the two categories of compute is right for them.

While the terminology is still new and evolving, “thick edge” refers to the kind of hardware/software solutions described above, such as Lenovo’s, where data is processed and analyzed on-site, or close to where it is collected.

“Thin edge” refers to deployments where smaller, lower-powered sensors and computing hardware are installed to collect data, but only minimal operations run at the collection site, with most of the processing happening back up in the cloud. Splunk’s new Edge Hub, an edge computing terminal with its own OS that the company debuted in July, is designed specifically for this type of deployment.

“Running Splunk Enterprise On-Premise is commonly mentioned as the ‘thick edge’ because the compute power typically provided is powerful enough to run several of Splunk’s AI offerings today,” said Hao Yang, Head of AI at Splunk, in an email to VentureBeat. “Splunk is also a leader invested in AI on the ‘thin edge’ with our new Splunk Edge Hub. This allows for AI models to be applied for use cases that need to run on tighter resources closer to the data source.”

Both cases offer opportunities for enterprises to reduce the energy consumption of their data gathering and processing, but clearly, by virtue of how it is architected, the “thick edge” offers far more potential power savings.

Regardless, Splunk is ready to support enterprises in their thick and thin edge deployments and to make the most of them in an energy-efficient way, even as they look to embrace compute resource-intensive AI models.

“For large models that can effortlessly run in the cloud, an effective strategy includes quantization, so that the leading foundational AI models with trillions of parameters can be optimized to run on an edge device while maintaining accuracy,” explained Yang. “This also highlights the need to understand how hardware can be optimized for AI and how to adapt a model to take advantage of varying hardware architecture in GPUs (graphics processing units) and NPUs (neural processing units).”
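
As a minimal illustration of the quantization strategy Yang describes, the sketch below applies PyTorch’s post-training dynamic quantization to a toy model (an illustrative stand-in, not Splunk’s pipeline or an actual foundation model):

```python
import torch
import torch.nn as nn

# Toy model standing in for a much larger network. Dynamic quantization
# converts its Linear layers' weights from 32-bit floats to 8-bit integers,
# shrinking the memory footprint and, on supported CPUs, the energy cost
# per inference.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface as the original, smaller weights
```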

One important tenet of Splunk’s AI philosophy is “human-in-the-loop.”

As Splunk CEO Gary Steele told The Wall Street Journal in a recent interview: “You are not just going to let an AI agent reconfigure your network. You are going to be really super-thoughtful about the next steps that you take.”

Instead, Splunk’s systems allow enterprises to deploy AI that makes recommendations but ultimately keeps humans in charge of making decisions. This is especially critical for edge deployments, where, power savings aside, an AI app can affect the workplace more directly because it is physically situated within it.
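
That recommend-but-don’t-act pattern can be sketched as a simple approval gate (a hypothetical structure for illustration, not Splunk’s implementation): the model may only propose actions, and nothing executes without an explicit human decision.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI-proposed action; nothing runs until a human approves it."""
    action: str
    rationale: str

class ReviewQueue:
    """Holds AI recommendations pending explicit human sign-off."""

    def __init__(self) -> None:
        self.pending: list[Recommendation] = []

    def propose(self, rec: Recommendation) -> None:
        self.pending.append(rec)  # the model can only get this far on its own

    def approve_and_run(self, index: int, execute) -> None:
        rec = self.pending.pop(index)
        execute(rec.action)       # executes only after a human decision

queue = ReviewQueue()
queue.propose(Recommendation(
    action="halve_sensor_sampling_rate",
    rationale="Night-shift readings are redundant; halving the rate saves power.",
))
# A human operator reviews queue.pending, then explicitly triggers:
queue.approve_and_run(0, execute=lambda action: print(f"Running: {action}"))
```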

Splunk also wants to ensure that enterprises come prepared with their own unique data to refine the AI apps they plan to use, as doing so will be critical to the ultimate success of AI-at-the-edge deployments.

“Many attempts at deploying AI fall short because base models need to be refined with unique data,” Yang told VentureBeat. “Every enterprise is different and Splunk Edge Hub provides that ability to gather data from the Edge and ensure AI will meet the job it is set out to do. This speaks to Splunk’s value in the Human-in-the-loop approach, and making sure that to properly deploy AI, it can be understood and adjusted.”

Where AI at the edge is headed next, and what it means for energy efficiency

Despite regulatory ambiguity and vocal pushback from creatives and advocates, the rush among enterprises to adopt AI shows no signs of slowing down.

This will push more companies to run power-intensive AI models, which could increase the total energy consumption from enterprises meaningfully.

However, by researching and implementing edge solutions where and how they make sense, from trusted vendors with experience building out such deployments, enterprises can make the most of AI while keeping their carbon footprint light, using energy as efficiently as possible to power their new AI-driven operations. Such deployments could even help enterprises optimize further, with AI analyzing the data gathered on-premises and suggesting additional ways to reduce device power consumption.

There are many vendors out there hawking wares, but clearly, putting AI on the edge is a beneficial path forward for enterprises looking to lower their power bills — and their environmental impacts. And it can certainly take some of the load off the hyperscale data centers.



Author: Carl Franzen
Source: VentureBeat