
How edge AI can make enterprises more agile

The pandemic has accelerated the adoption of edge computing, or computation and data storage that’s located close to where it’s needed. According to the Linux Foundation’s State of the Edge report, digital health care, manufacturing, and retail businesses are particularly likely to expand their use of edge computing by 2028. This is largely because of the technology’s ability to improve response times and save bandwidth while enabling less constrained data analysis.

While only about 10% of enterprise-generated data is currently created and processed outside a traditional datacenter or cloud, Gartner expects that share to reach 75% by 2025. Internet of things (IoT) devices will account for a substantial portion of the roughly 175 zettabytes of data expected to be created worldwide in 2025. And according to Grand View Research, the global edge computing market is anticipated to be worth $61.14 billion by 2028.

Edge computing represents a powerful paradigm shift, but it has the potential to become even more useful when combined with AI. “Edge AI” describes architectures in which AI models are processed locally, on devices at the edge of the network. As edge AI setups typically only require a microprocessor and sensors, not an internet connection, they can process data and make predictions in real time (or close to it).

The business value of edge AI could be substantial. According to Markets and Markets, the global edge AI software market is anticipated to grow from $590 million in 2020 to $1.83 billion by 2026. Deloitte estimates more than 750 million edge AI chips that perform tasks on-device have been sold to date, representing $2.6 billion in revenue.

What is edge AI?

Most AI processes are carried out in cloud-based datacenters that need substantial compute capacity, and those expenses can add up quickly. According to Amazon, inference — i.e., when a model makes predictions — accounts for up to 90% of machine learning infrastructure costs. And a recent RightScale study found that cost savings drove cloud initiatives at more than 60% of the organizations surveyed.

Edge AI, by contrast, requires little to no cloud infrastructure beyond the initial development phase. A model might be trained in the cloud but deployed on an edge device, where it runs without (or mostly without) server infrastructure.
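
To make that split concrete, the sketch below shows how a model trained in the cloud might be packaged for on-device use. It assumes TensorFlow/Keras for training and TensorFlow Lite as the edge runtime; the "cloud_model/" and "edge_model.tflite" names are illustrative, not from the article.

```python
# Minimal sketch (assumes TensorFlow is installed and "cloud_model/" holds
# a SavedModel trained in the cloud): convert the model into a compact
# format that runs on an edge device without server infrastructure.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("cloud_model/")

# Optional quantization shrinks the model and speeds up on-device inference,
# which matters on memory- and compute-constrained edge hardware.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Ship this file to the edge device; no cloud connection is needed at runtime.
with open("edge_model.tflite", "wb") as f:
    f.write(tflite_model)
```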

As Machine Learning Quant notes, edge AI hardware generally falls into one of three categories: (1) on-premise AI servers, (2) intelligent gateways, and (3) edge devices. AI servers are machines with specialized components designed to support a range of model inferencing and training applications. Gateways sit between edge devices, servers, and other elements of the network. And edge devices perform AI inference and training functions, although training can be delegated to the cloud with only inference performed on-device.

The motivations for deploying AI hardware at the edge are myriad, but they often center on data transmission, storage, and privacy considerations. For example, in an industrial or manufacturing enterprise with thousands of sensors, it's usually not practical to send vast amounts of sensor data to the cloud, run the analytics there, and return the results to the manufacturing site. Sending that data would require enormous bandwidth and cloud storage, and it could expose sensitive information.

Edge AI also opens the door to using connected devices and AI applications in environments where reliable internet access isn't a given, like a deep-sea drilling rig or a research vessel. Its low latency also makes it well suited to time-sensitive tasks, like predictive failure detection and computer vision-based smart shelf systems in retail.

Edge AI in practice

In practice, edge AI pairs a sensor — for example, an accelerometer, gyroscope, or magnetometer — with a small microcontroller unit (MCU), Johan Malm, a Ph.D. specialist in numerical analysis at Imagimob, explains in a whitepaper. The MCU is loaded with a model that has been trained on the typical scenarios the device will encounter, and that learning can continue as an ongoing process as the device encounters new situations in the field.
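
A minimal sketch of that sensor-plus-MCU loop, assuming a Python-capable edge device and the lightweight tflite_runtime interpreter, might look like the following; the sensor stub, window size, and model filename are illustrative placeholders rather than details from the whitepaper.

```python
# Illustrative on-device inference loop: an accelerometer window is fed to a
# small pretrained model, and the prediction is acted on locally with no
# cloud round trip. The sensor stub and model filename are placeholders.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight edge runtime

interpreter = Interpreter(model_path="edge_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def read_accelerometer(samples: int = 128) -> np.ndarray:
    """Placeholder for the real sensor driver: one window of x/y/z readings."""
    return np.zeros((samples, 3), dtype=np.float32)

def classify(window: np.ndarray) -> np.ndarray:
    """Run one prediction entirely on the device (model assumed to take a (1, 128, 3) input)."""
    interpreter.set_tensor(inp["index"], window[np.newaxis, ...].astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

while True:
    scores = classify(read_accelerometer())
    print(scores)  # placeholder for local handling, e.g., flagging an anomalous movement
```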

For example, some factories use sensors mounted on motors and other equipment configured to stream signals — based on temperature, vibration, and current — to an edge AI platform. Instead of sending the data to the cloud, the AI analyzes the data continuously and locally to make predictions for when a particular motor is about to fail.
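
The same local-analysis pattern can be sketched without any neural network at all. The example below scores each signal against its own rolling baseline so that only alerts, not raw data, leave the device; the signal names, window size, threshold, and alert hook are illustrative assumptions, not details from a specific deployment.

```python
# Illustrative local anomaly scoring for predictive failure detection:
# temperature, vibration, and current readings are compared with their own
# rolling baselines on the edge device, and only alerts are sent onward.
from collections import deque
import statistics

WINDOW = 256  # recent samples kept per signal for the rolling baseline
history = {name: deque(maxlen=WINDOW) for name in ("temp", "vibration", "current")}

def health_score(reading: dict) -> float:
    """Crude anomaly score: worst-case deviation from each signal's rolling mean."""
    score = 0.0
    for name, value in reading.items():
        buf = history[name]
        if len(buf) > 10:
            mean = statistics.fmean(buf)
            spread = statistics.pstdev(buf) or 1e-6
            score = max(score, abs(value - mean) / spread)
        buf.append(value)
    return score

def on_reading(reading: dict) -> None:
    """Called for every new sensor sample; only alerts leave the edge device."""
    if health_score(reading) > 4.0:  # illustrative threshold
        print("maintenance alert:", reading)  # placeholder for the real alert hook
```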

Another use case for edge AI and computer vision is automated optical inspection on manufacturing lines. In this case, assembled components are sent through an inspection station for automated visual analysis. A computer vision model detects missing or misaligned parts and delivers results to a real-time dashboard showing inspection status. Because the data can flow back into the cloud for further analysis, the model can be regularly retrained to reduce false positives.
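
A sketch of that feedback path might look like the following: the verdict is produced on the inspection station itself, and only frames the model is unsure about are queued for upload and later retraining. The run_inspection_model helper, class labels, and confidence threshold are hypothetical.

```python
# Illustrative edge-to-cloud retraining loop for automated optical inspection:
# inference happens locally, and only low-confidence frames are queued so the
# cloud-side model can be retrained to reduce false positives.
import numpy as np

LABELS = ("ok", "missing_part", "misaligned")  # assumed inspection classes

def run_inspection_model(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the on-device computer vision model; returns class probabilities."""
    return np.array([0.7, 0.2, 0.1])

def handle_frame(frame: np.ndarray, upload_queue: list) -> str:
    probs = run_inspection_model(frame)
    verdict = LABELS[int(np.argmax(probs))]
    if float(np.max(probs)) < 0.8:   # low confidence: candidate retraining data
        upload_queue.append(frame)   # batched back to the cloud later
    return verdict                   # drives the real-time inspection dashboard
```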

Establishing a virtuous cycle for retraining is an essential step in AI deployment. A clear majority of employees (87%) peg data quality issues as the reason their organizations failed to successfully implement AI and machine learning, according to a recent Alation report. And 34% of respondents to a 2021 Rackspace survey cited poor data quality as the reason for AI R&D failure.

“Many of our customers literally deploy hundreds of thousands of sensors … In these IoT scenarios, it’s not just a matter of IoT — it’s IoT plus intelligent processing so that machine learning can be applied to get insights that improve safety and efficiency,” Amazon CTO Werner Vogels told VentureBeat in a previous interview. “There’s a lot of processing that happens in the cloud because most [AI model training] is very labor-intensive, but processing often happens at the edge. Massive, heavy compute will [have a place] in the cloud for model training and things like that. However, their workloads aren’t real-time critical most of the time. For our real-time critical operations, models must be moved onto edge devices.”

Challenges

Edge AI offers advantages over cloud-based AI, but it isn't without challenges. Keeping data local means more locations to protect, and greater physical access opens the door to different kinds of cyberattacks. (Some experts argue the decentralized nature of edge computing actually improves security.) Compute power is limited at the edge, which restricts the number of AI tasks that can be performed. And large, complex models usually have to be simplified before they're deployed to edge AI hardware, in some cases reducing their accuracy.

Fortunately, emerging hardware promises to alleviate some of the compute limitations at the edge. Startups Sima.ai, AIStorm, Hailo, Esperanto Technologies, Quadric, Graphcore, Xnor, and Flex Logix are developing chips customized for AI workloads — and they're far from the only ones. Mobileye, the Tel Aviv company Intel acquired for $15.3 billion in March 2017, offers a computer vision processing solution for autonomous vehicles in its EyeQ product line. And Baidu last July unveiled Kunlun, a chip for edge computing on devices and in the cloud via datacenters.

In addition to chips, a growing number of development boards are available, including boards built around Google's Edge TPU and Nvidia's Jetson Nano. Microsoft, Amazon, Intel, and Asus also offer hardware platforms for edge AI deployment, like Amazon's DeepLens wireless video camera for deep learning.

These are among the developments encouraging enterprises to forge ahead. According to a 2019 report from Gartner, more than 50% of large organizations will deploy at least one edge computing use case to support IoT or immersive experiences by the end of 2021, up from less than 5% in 2019. The number of edge computing use cases will jump even further in the upcoming years, with Gartner predicting that more than half of large enterprises will have at least six edge computing use cases deployed by the end of 2023.

Edge AI’s benefits make its deployment a wise business decision for many organizations. Insight predicts an average ROI of 5.7% from intelligent edge deployments over the next three years. In segments like automotive, health care, manufacturing, and even augmented reality, AI at the edge can reduce costs while supporting greater scalability, reliability, and speed.



Author: Kyle Wiggers
Source: VentureBeat
