
Scaling AI and data science – 10 smart ways to move from pilot to production

“Fantastic! How fast can we scale?” Perhaps you’ve been fortunate enough to hear or ask that question about a new AI project in your organization. Or maybe an initial AI initiative has already reached production, but others are needed — quickly.

At this key early stage of AI growth, enterprises and the industry face a bigger, related question that business and technology leaders must ask: What’s needed to advance AI (and by extension, data science) beyond the “craft” stage, to large-scale production that is fast, reliable, and economical?

The answers are crucial to realizing ROI, delivering on the vision of “AI everywhere”, and helping the technology mature and propagate over the next five years.

Beating “The Pilot Paradox” 

Unfortunately, scaling AI is not a new challenge. Three years ago, Gartner estimated that less than 50% of AI models make it to production. The latest message was depressingly similar. “Launching pilots is deceptively easy,” analysts noted, “but deploying them into production is notoriously challenging.” A McKinsey global survey agreed, concluding: “Achieving (AI) impact at scale is still very elusive for many companies.”

Clearly, a more effective approach is needed to extract value from the $327.5 billion that organizations are forecast to invest in AI this year.

As the scale and diversity of data continue to grow exponentially, data science and data scientists are increasingly pivotal in managing and interpreting that data. However, the diversity of AI workflows means that data scientists need expertise across a wide variety of tools, languages, and frameworks that focus on data management, analytics modeling and deployment, and business analysis. There is also increasing variety in the hardware architectures best suited to processing different types of data.

Intel helps data scientists and developers operate in this “wild wild West” landscape of diverse hardware architectures, software tools, and workflow combinations. The company believes the keys to scaling AI and data science are an end-to-end AI software ecosystem built on the foundation of the open, standards-based, interoperable oneAPI programming model, coupled with an extensible, heterogeneous AI compute infrastructure.

“AI is not isolated,” says Heidi Pan, senior director of data analytics software at Intel. “To get to market quickly, you need to grow AI with your application and data infrastructure. You need the right software to harness all of your compute.”

She continues, “Right now, however, there are lots of silos of software out there, and very little interoperability, very little plug and play. So users have to spend a lot of their time cobbling multiple things together. For example, looking across the data pipeline: there are many different data formats, libraries that don’t work with each other, and workflows that can’t operate across multiple devices. With the right compute, software stack, and data integration, everything can work seamlessly together for exponential growth.”

Get the most out of your data and data scientists

Creation of an end-to-end AI production infrastructure is an ongoing, long-term effort. But here are 10 things enterprises can do right now that can deliver immediate benefits. Most importantly, they’ll help unclog bottlenecks around data and data scientists, while laying the foundations for stable, repeatable AI operations.

1. Stick with familiar tools and workflows

Consider the following from RISELab at UC Berkeley. Data scientists, they note, prefer familiar tools in the Python data stack: pandas, scikit-learn, NumPy, PyTorch, etc. “However, these tools are often unsuited to parallel processing or terabytes of data.” So should you adopt new tools to make the software stack and APIs scalable? Definitely not, says RISELab. They calculate that recouping the upfront cost of learning a new tool could take a prohibitively long time, even if the tool performs 10x faster.

These estimates illustrate why modernizing and adapting familiar tools is a much smarter way to solve data scientists’ critical AI scaling problems. Intel’s work in the Python Data API Consortium, the modernizing of Python via Numba’s parallel compilation and Modin’s scalable data frames, the Intel Distribution for Python, and the upstreaming of optimizations into popular deep learning frameworks such as TensorFlow, PyTorch, and MXNet and gradient boosting frameworks such as XGBoost and CatBoost are all examples of Intel helping data scientists gain productivity while maintaining familiar workflows.
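As a rough sketch of what “maintaining familiar workflows” means in practice, Modin exposes the pandas API as a drop-in replacement, so an existing script can parallelize across cores by changing only the import line (the file name below is hypothetical):

```python
# Drop-in swap: Modin implements the pandas API, so the rest of the
# script is unchanged. (Requires a Modin engine such as Ray or Dask.)
import modin.pandas as pd  # instead of: import pandas as pd

df = pd.read_csv("large_dataset.csv")             # hypothetical file; reads in parallel
summary = df.groupby("category")["value"].mean()  # same pandas semantics
print(summary)
```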

2. Add “drop-in” software AI acceleration

Hardware AI accelerators such as GPUs and specialized ASICs can deliver impressive performance improvements, but software ultimately determines the real-world performance of computing platforms. Software AI accelerators (performance improvements achieved through software optimizations on the same hardware configuration) can enable large performance gains for AI across deep learning, classical machine learning, and graph analytics. This orders-of-magnitude software AI acceleration is crucial to fielding AI applications with adequate accuracy and acceptable latency, and is key to enabling “AI Everywhere”.

Intel optimizations can deliver drop-in 10-to-100x performance improvements for popular frameworks and libraries in deep learning, machine learning, and big data analytics. These gains translate into meeting real-time inference latency requirements, running more experimentation to yield better accuracy, cost-effective training with commodity hardware, and a variety of other benefits.

Below are example training and inference speedups with Intel Extension for Scikit-learn; scikit-learn is the most widely used package for data science and machine learning. Note that accelerations of up to 322x for training and 4,859x for inference are possible just by adding a couple of lines of code!

Figure 1. Training speedup with Intel Extension for Scikit-learn over the original package

Figure 2. Inference speedup with Intel Extension for Scikit-learn over the original package
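For reference, here is a minimal sketch of the “couple of lines” patching pattern, using the documented sklearnex entry point; the data is synthetic and the estimator choice is just an example:

```python
from sklearnex import patch_sklearn
patch_sklearn()  # swap in Intel-optimized implementations; run before importing sklearn estimators

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(10_000, 20)                    # synthetic data for illustration
model = KMeans(n_clusters=8).fit(X)               # same scikit-learn API, accelerated backend
print(model.inertia_)
```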

3. Scale up the size of data sets

Data scientists spend a lot of time trying to cull and downsize data sets for feature engineering and modeling in order to get started quickly despite the constraints of local compute. But not only do the features and models not always hold up as the data scales, they also introduce a potential source of ad hoc human selection bias and probable explainability issues.

New cost-effective persistent memory makes it possible to work on huge, terabyte-sized data sets and bring them quickly into production. This helps with the speed, explainability, and accuracy that come from being able to refer back to a rigorous training process run on the entire data set.

4. Maximize code reuse

While CPUs and the vast applicability of their general-purpose computing capabilities are central to any AI strategy, a strategic mix of XPUs (GPUs, FPGAs, and other specialized accelerators) can meet the specific processing needs of today’s diverse AI workloads.

“The AI hardware space is changing very rapidly,” Pan says, “with different architectures running increasingly specialized algorithms. If you look at computer vision versus a recommendation system versus natural language processing, the ideal mix of compute is different, which means that what it needs from software and hardware is going to be different.”

While using a heterogeneous mix of architectures has its benefits, you’ll want to eliminate the need to work with separate code bases, multiple programming languages, and different tools and workflows. According to Pan, “the ability to reuse code across multiple heterogeneous platforms is crucial in today’s dynamic AI landscape.”

Central to this is oneAPI, a cross-industry unified programming model that delivers a common developer experience across diverse hardware architectures. Intel’s data science and AI tools, such as the Intel oneAPI AI Analytics Toolkit and the Intel Distribution of OpenVINO toolkit, are built on the foundation of oneAPI and deliver hardware and software interoperability across the end-to-end data pipeline.

Figure 3. Intel AI software tools
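As a hedged sketch of what code reuse can look like at the framework level (not Intel’s specific oneAPI API), PyTorch code written against a device string can target different hardware without restructuring; the "xpu" device assumes Intel Extension for PyTorch is installed:

```python
import torch

def run_inference(model: torch.nn.Module, batch: torch.Tensor, device: str = "cpu"):
    # The same code path serves every target; only the device string
    # changes ("cpu", "cuda", or "xpu" with intel_extension_for_pytorch).
    model = model.to(device).eval()
    with torch.no_grad():
        return model(batch.to(device))

# Usage: swap the target without touching the model code.
model = torch.nn.Linear(16, 4)
out = run_inference(model, torch.randn(8, 16), device="cpu")
```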

5. Turn laptops into analytic data centers

The ubiquitous nature of laptops and desktops makes them a vast untapped data analytics resource. When you make it fast enough and easy enough to instantaneously iterate on large data sets, you can bring that data directly to the domain experts and decision makers without having to go indirectly through multiple teams.

OmniSci and Intel have partnered on an accelerated analytics platform that uses the untapped power of CPUs to process and render massive volumes of data at millisecond speeds. This allows data scientists and others to analyze and visualize complex data records at scale using just their laptops or desktops. This kind of direct, real-time decision making can cut down time to insight from weeks to days, according to Pan, further speeding production.

6. Scale out seamlessly from the local workstation to the cloud

AI development often starts with prototyping on a local machine, but as scope expands it invariably needs to scale out to a production data pipeline in the data center or cloud. This scale-out process is typically a huge, complex undertaking, and can often lead to code rewrites, data duplication, fragmented workflows, and poor scalability in the real world.

The Intel AI software stack lets teams scale out development and deployment seamlessly from edge and IoT devices to workstations and servers to supercomputers and the cloud. Explains Pan: “You make your software that’s traditionally run on small machines and small data sets to run on multiple machines and Big Data sets, and replicate your entire pipeline environments remotely.” Open source tools such as Analytics Zoo and Modin can move AI from experimentation on laptops to scaled-out production.
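As a hedged sketch of that laptop-to-cluster path using Modin on Ray (the cluster address and file path below are hypothetical):

```python
import ray

# Locally, ray.init() with no address starts a single-machine runtime;
# pointing it at a cluster scales the same script out unchanged.
ray.init(address="auto")  # hypothetical: connect to an existing Ray cluster

import modin.pandas as pd  # Modin picks up the already-initialized Ray runtime

df = pd.read_csv("s3://bucket/events.csv")  # hypothetical path
print(df.describe())
```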

7. Accelerate production workflow with extra machines, not data scientists

Throwing bodies at the production problem is not an option. The U.S. Bureau of Labor Statistics predicts that roughly 11.5 million new data science jobs will be created by 2026, a 28% increase, with a mean annual wage of $103,000. While many training programs are full, competition for talent remains fierce. As RISELab notes: “Trading human time for machine time is the most effective way to ensure that data scientists are productive.” In other words, it’s smarter to drive AI production with cheaper computers rather than expensive people.

Intel’s suite of AI tools places a premium on developer productivity while also providing resources for seamless scaling with extra machines.

8. Build AI on top of your existing data infrastructure

For some enterprises, growing AI capabilities out of their existing data infrastructure is a smart way to go. Doing so can be the easiest way to build out AI because it takes advantage of data governance and other systems already in place.

Intel has worked with partners such as Oracle to provide the “plumbing” to help enterprises incorporate AI into their data workflow. Oracle Cloud Infrastructure Data Science environment, which includes and supports several Intel optimizations, helps data scientists rapidly build, train, deploy, and manage machine learning models.

Intel’s Pan points to Burger King as a great example of leveraging existing Big Data infrastructure to quickly scale AI. The fast food chain recently collaborated with Intel to create an end-to-end, unified analytics/AI recommendation pipeline and rolled out a new AI-based touchscreen menu system across 1,000 pilot locations. A key ingredient: Analytics Zoo, a unified big data analytics platform that allows seamless scaling of AI models to big data clusters with thousands of nodes for distributed training or inference.
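As a hedged sketch of how Analytics Zoo’s Orca API moves the same code from a laptop onto a big data cluster (the cluster settings below are hypothetical; the calls follow the zoo.orca package as documented):

```python
from zoo.orca import init_orca_context, stop_orca_context

# Hypothetical settings: the same script runs locally with
# cluster_mode="local", or on a YARN big data cluster as below.
sc = init_orca_context(cluster_mode="yarn-client", num_nodes=4, cores=8)

# ... define and train a model with Orca's distributed Estimator API here ...

stop_orca_context()  # tear down the cluster context when done
```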

9. Shorten time to market with “Push to Start AI”

It can take a lot of time and resources to create AI from scratch. Opting instead for one of the fast-growing number of turnkey or customized vertical solutions that run on your current infrastructure makes it possible to unleash valuable insights faster and at lower cost than before.

The Intel Solutions Marketplace and Intel AI Builders program offer a rich catalog of over 200 turnkey and customized AI solutions and services that span from edge to cloud. They deliver optimized performance, accelerate time to solution, and lower costs.

The District of Columbia Water and Sewer Authority (DC Water) worked with Intel partner Wipro to develop Pipe Sleuth, an AI solution that uses deep learning-based computer vision to automate real-time analysis of video footage of sewer pipes. Pipe Sleuth was optimized with the Intel Distribution of OpenVINO toolkit and Intel Core i5, Intel Core i7, and Intel Xeon Scalable processors, and provided DC Water with a highly efficient and accurate way to inspect its underground pipes for possible damage.
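For flavor, here is a minimal, hedged sketch of OpenVINO-style inference on a single video frame; the model file name and input shape are hypothetical placeholders, and the calls follow OpenVINO’s documented Python runtime API:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("pipe_defect_model.xml")         # hypothetical IR model file
compiled = core.compile_model(model, device_name="CPU")  # target an Intel CPU

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)     # placeholder video frame
request = compiled.create_infer_request()
results = request.infer({0: frame})                      # run one inference
```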

10. Use open, interoperable data and API standards

Open and interoperable standards are essential for dealing with the ever-growing number of data sources and models. Different organizations and business groups will bring their own data, and data scientists solving for disparate business objectives will need to bring their own models. Therefore, no single closed software ecosystem can ever be broad enough, or future-proof enough, to be the right choice.

As a founding member of the Python Data API Consortium, Intel works closely with the community to establish standard data types that interoperate across the data pipeline and heterogeneous hardware, and foundational APIs that span use cases, frameworks, and compute.
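As a hedged illustration of what such consortium-style standards enable, code can be written against a standard array namespace rather than one library, so any conforming library can be passed in (the namespace-passing convention mirrors the Array API standard’s design):

```python
# Written against the Array API standard rather than one library:
# xp is any conforming array namespace (NumPy, CuPy, PyTorch, ...).
def standardize(x, xp):
    mean = xp.mean(x, axis=0)
    std = xp.std(x, axis=0)
    return (x - mean) / std

import numpy as np  # NumPy's main namespace covers these calls
print(standardize(np.random.rand(100, 3), np))
```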

Building a scalable AI future

An open, interoperable, and extensible AI compute platform helps solve today’s bottlenecks in talent and data while laying the foundation for the ecosystem of tomorrow. As AI continues to pervade domains and workloads, and new frontiers emerge, the need for end-to-end data science and AI pipelines that work well with external workflows and components is immense. Industry and community partnerships that build open, interoperable compute and software infrastructures are crucial to a brighter, scalable AI future for everyone.

Learn More: Intel AI, Intel AI on Medium



Author: VB Staff
Source: Venturebeat
