Businesses are increasingly gaining competitive advantage by deploying artificial intelligence (AI) using distributed hybrid cloud architecture.
This is driven by two factors: First, more data is being generated at the edge than ever before. In fact, Gartner predicts that 50% of enterprise-generated data will be processed outside a traditional data center or cloud by 2025, and a recent global survey found that 78% of IT decision-makers consider moving IT infrastructure to the digital edge a priority for future-proofing their business.
Second, moving large datasets to centralized AI training infrastructure for processing costs businesses valuable time and money. On top of that, compliance and privacy regulations often mandate keeping AI data processing and analysis within the country of origin, which further justifies distributing workloads across multiple countries.
Let’s dig into three different industry use cases where distributed AI is helping organizations save costs, meet regulatory needs and achieve new technological advances.
Gaining real-time retail insights while lowering costs
Many large retailers are finding a competitive advantage in a distributed digital infrastructure strategy. They are using what IDC recently identified as an increasingly popular AI deployment approach: developing AI at the core, such as in the cloud or a regional data center, deploying the AI inference model at the edge, then retraining the model with new regional data to fit the application.
For example, a retailer using a distributed hybrid cloud model might first send its in-store camera feeds and inventory management data to a colocation metro data center to build regional AI models, leveraging federated AI methods to consolidate them. It then deploys those optimized models back to store locations to perform low-latency, predictive AI inferencing for insights on inventory, employee shift management, shopper buying trends and ad placement recommendations.
Deploying AI inference engines from one metro data center is more cost-efficient than maintaining and servicing servers in every retail location. This distributed AI infrastructure lets retailers quickly process data and surface insights in one regional location, which ultimately improves their bottom line.
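The consolidation step described above can be sketched with federated averaging, the most common federated AI method: each store trains a model on its own local data, and the regional data center merges the models by weighting each one by how much data trained it. This is a minimal illustration, not the retailer's actual pipeline; the weights here are plain lists of floats standing in for neural-network parameter tensors.

```python
# Hypothetical sketch of federated averaging at a regional data center.
# Each store contributes a locally trained model; the region merges them,
# weighting by local sample count, so no raw camera data leaves the stores.

def federated_average(store_models, store_sample_counts):
    """Merge per-store model weights, weighted by local training data volume."""
    total = sum(store_sample_counts)
    n_params = len(store_models[0])
    merged = [0.0] * n_params
    for weights, count in zip(store_models, store_sample_counts):
        for i, w in enumerate(weights):
            merged[i] += w * (count / total)
    return merged

# Three stores trained on different amounts of camera/inventory data.
store_a = [0.2, 0.8]
store_b = [0.4, 0.6]
store_c = [0.3, 0.7]
regional = federated_average([store_a, store_b, store_c], [100, 300, 100])
print(regional)  # store B's weights dominate: it saw the most local data
```

In a real deployment the averaging would run over millions of parameters per round, but the privacy property is the same: only model weights travel to the metro data center, never raw footage.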
Maintaining privacy and compliance in video surveillance
The majority (71%) of countries around the world have enacted legislation governing privacy and data protection, according to the UNCTAD. Distributed data management and AI architecture can play a key role in helping organizations ensure that they’re compliant.
For example, a large real estate management company with sites in metro areas around the world could leverage distributed AI architecture for its hundreds of security cameras, staying compliant with local privacy regulations by deploying AI where the data is collected. Maintaining centralized facilities in each country where it operates ensures the company isn't violating local privacy laws by sending data to a country whose compliance regulations differ from those where the data originated.
In addition to achieving privacy and data usage compliance, this model reduces costs by hosting the AI inference stack at a single metro location rather than at each facility, even while it processes motion detection data on-site at each of its hundreds of locations.
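One way to picture the data-residency rule is as a routing table: every camera feed is sent only to an inference endpoint in its country of origin, and anything without an in-country region is refused rather than shipped abroad. The region names and endpoint URLs below are illustrative assumptions, not a real API.

```python
# Hypothetical sketch: in-country routing of surveillance feeds.
# Endpoint hostnames are made up for illustration.

INFERENCE_ENDPOINTS = {
    "DE": "https://inference.fra.example.net",  # Frankfurt metro
    "SG": "https://inference.sin.example.net",  # Singapore metro
    "US": "https://inference.dal.example.net",  # Dallas metro
}

def route_feed(camera_id: str, country_code: str) -> str:
    """Return the in-country inference endpoint; refuse cross-border sends."""
    endpoint = INFERENCE_ENDPOINTS.get(country_code)
    if endpoint is None:
        raise ValueError(
            f"No in-country inference region for {country_code}; "
            f"data from camera {camera_id} must not leave its country."
        )
    return endpoint

print(route_feed("cam-berlin-042", "DE"))
```

The point of raising an error instead of falling back to a default region is that a misconfigured camera fails loudly rather than silently exporting footage to the wrong jurisdiction.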
Enabling autonomous driving through regional updates
Autonomous vehicles enabled by advanced driver assistance systems (ADAS) cannot address certain use cases without AI infrastructure. ADAS require AI to make decisions about how the vehicle should interact with its surroundings, especially when interacting with vulnerable road users such as bicyclists and pedestrians.
The amount of data generated by test vehicles to train AI models is enormous: between 20TB and 60TB per car per day for level 2 and 3 ADAS (where the vehicle can adjust speed, brake, and make decisions based on the environment). AI allows connected vehicles to collect and process these large datasets from test fleets more quickly and cost-effectively than they could using traditional infrastructure.
Distributed AI infrastructure is defining the next generation of vehicular mobility and autonomy. For example, connected vehicles leverage HD maps that give the car information about signage and streets. But what happens when a construction zone or road hazard appears overnight? Instead of each car processing the road hazard individually, distributed AI infrastructure allows for those hazards to be sent to a regional location that then communicates the hazards to all vehicles in the area.
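The fan-out pattern described above, where one detection is shared region-wide rather than re-detected by every car, can be sketched as a small publish/subscribe service at a regional edge node. Class and field names here are assumptions for illustration, not any vendor's actual system.

```python
# Hypothetical sketch: a regional hazard service at an edge node.
# One vehicle reports a new hazard; the region pushes it to every
# subscribed vehicle in that metro instead of each car detecting it alone.

from collections import defaultdict

class RegionalHazardService:
    def __init__(self):
        self.subscribers = defaultdict(set)   # metro -> vehicle ids
        self.hazards = defaultdict(list)      # metro -> known hazards
        self.deliveries = []                  # (vehicle_id, hazard) push log

    def subscribe(self, metro, vehicle_id):
        self.subscribers[metro].add(vehicle_id)

    def report_hazard(self, metro, hazard):
        """Record the hazard and fan it out to all vehicles in the metro."""
        self.hazards[metro].append(hazard)
        for vehicle_id in self.subscribers[metro]:
            self.deliveries.append((vehicle_id, hazard))

svc = RegionalHazardService()
svc.subscribe("munich", "car-1")
svc.subscribe("munich", "car-2")
svc.report_hazard("munich", "construction zone on A9, lane 2 closed")
print(len(svc.deliveries))  # prints 2: both vehicles receive one push
```

A single report thus reaches the whole fleet in the metro, which is the cost and latency win of regional distribution over per-vehicle processing.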
Go with the data flow
Nothing feels the pull of data's gravity quite like AI. To make the most of their AI infrastructure, organizations will need to evaluate whether to deploy workloads centrally, regionally or locally. Those that do will save time, money and precious milliseconds of latency.
Doron Hendel is head of global business development at Equinix.
Author: Doron Hendel, Equinix
Source: VentureBeat