
Executives discuss top challenges in deploying AI — and how to solve them



Hastened by a widespread push to digitize operations, the enterprise is enthusiastically embracing AI. According to IDC’s 2022 AI InfrastructureView survey, 31% of companies say they now have AI in production, while the majority are actively piloting AI technologies. Adoption is increasingly paying off: 27% of businesses responding to a December 2021 McKinsey survey said that at least 5% of their earnings before interest and taxes (EBIT) is now attributable to AI.

But there remain many hurdles to successfully implementing AI. Of the companies participating in the AI InfrastructureView poll, only one-third claim to have reached a “mature” state of adoption wherein their entire organization is benefitting from an enterprise-wide AI strategy. Moreover, while nearly two-thirds of companies in the McKinsey survey say that they’ll continue to increase their investments in AI over the next three years, half admitted experiencing higher-than-expected AI project costs.

Data science disconnect

Why is getting AI projects into production so challenging? The reasons vary, according to Jeff Boudier, head of product and growth at AI language startup Hugging Face. But commonly, companies fail to establish systems that would allow their data science teams — the teams responsible for deploying AI technologies — to properly version and share AI models, code, and datasets, he says. This creates more work for AI project managers, who have to keep track of all the models and datasets created by teams so that those teams don’t reinvent the wheel for each business request.

“Today, data science is largely done in ‘single player’ mode, where code lives in notebooks on local machines,” Boudier told VentureBeat via email. “It’s how business software was done 15 years ago, before modern version control systems and … collaboration workflows changed the day.”
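Hugging Face’s own Hub is one concrete example of the shared, versioned workspace Boudier describes. A minimal sketch using the huggingface_hub Python client might look like the following; the organization, repository name, and file are hypothetical, and it assumes the user has already authenticated with the Hub:

```python
from huggingface_hub import HfApi

api = HfApi()

# Create (or reuse) a private model repository the whole team can see.
# "acme-corp/churn-classifier" is a made-up repo id for illustration.
api.create_repo(repo_id="acme-corp/churn-classifier", private=True, exist_ok=True)

# Upload trained weights; every upload becomes a new commit, so the repo
# keeps a full version history that teammates can browse and roll back.
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="acme-corp/churn-classifier",
    commit_message="Retrain on Q1 data",
)
```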

The emerging discipline of MLOps, which stands for “machine learning operations” (a term coined by Gartner in 2017), aims to address the disparate and siloed nature of AI development by establishing practices for collaboration between data scientists. By simplifying AI management processes, MLOps aims to automate the deployment of AI models into an organization’s core software systems.

For example, startups like ZenML enable data scientists to express their workflows as pipelines that, with configuration changes, can run on different infrastructure and developer tools. Those pipelines plug into a framework that tackles reproducibility and versioning problems, reducing the coordination needed between DevOps teams and data scientists, as the sketch below illustrates.
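As a rough illustration of the pipeline idea, here is a minimal sketch in the style of ZenML. The step logic is invented, the decorator imports match recent ZenML releases (older versions expose them differently), and running it assumes a ZenML stack has already been initialized:

```python
from zenml import pipeline, step


@step
def load_data() -> list[float]:
    # In a real project this step would pull from a warehouse or feature store.
    return [0.1, 0.4, 0.35, 0.8]


@step
def train_model(data: list[float]) -> float:
    # Placeholder "training": return the mean as the model artifact.
    return sum(data) / len(data)


@pipeline
def training_pipeline():
    data = load_data()
    train_model(data)


if __name__ == "__main__":
    # Each run records the steps, their inputs, and their outputs, so results
    # are reproducible and artifacts are versioned without extra coordination.
    training_pipeline()
```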

Increasing size — and data requirements

But collaboration isn’t the only hurdle facing companies adopting AI. Others stem from machine learning models’ continued exponential growth in size, according to Boudier. Large models often don’t fit on commodity hardware, and they can be slow and expensive to run. Or they’re locked behind proprietary APIs and services and dubiously touted as universal problem solvers.

“[Proprietary models hamper] AI adoption as … teams can’t dive into the code and properly evaluate or improve the models, and continues to create confusion on how to approach AI problems pragmatically,” Boudier said. “Deploying large models in production to be applied on large amounts of data requires diving into the model graph down to the hardware, which requires skills most companies do not have.”
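One common, if partial, workaround for the hardware constraints Boudier mentions is loading an open model in reduced precision so it fits on smaller GPUs. A hedged sketch with the Hugging Face transformers library follows; the model name is purely illustrative, and device_map="auto" assumes the accelerate package is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2-large"  # illustrative; swap in whichever open model is being evaluated

tokenizer = AutoTokenizer.from_pretrained(model_name)

# float16 roughly halves memory use versus float32, and device_map="auto" lets
# the accelerate library spread layers across the available GPUs and CPU RAM.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Summarize this support ticket:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```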

Sean Hughes, ecosystem director at ServiceNow, says that companies often expect too much from AI models without doing the work necessary to adapt them to their business. That can lead to other problems, including a lack of data available to fine-tune the models for each context where they’ll be used. In a 2019 Dun & Bradstreet survey, companies rated a lack of data on par with a lack of internal expertise as the top obstacles to further implementing AI across their organizations.

“Hype and sensationalism generated when AI research scientists open source work that achieves new state-of-the-art benchmark results can be misinterpreted by the general public as being the same as ‘problem solved.’ But the reality is that state-of-the-art for a specific AI solution might only achieve 78% accuracy for a well-defined and controlled configuration,” Hughes told VentureBeat via email. “[A major challenge is] the expectation of the enterprise user that [an off-the-shelf] model will understand the nuances of the enterprise environment in order to be useful for decision-making … [Without the required data,] even with the potential for AI to suggest a directionally correct next best action, it can’t, since it doesn’t understand the context of the user intent in that enterprise.”
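The missing-context problem Hughes describes is typically tackled by fine-tuning a pretrained model on the enterprise’s own labeled data. A minimal, hedged sketch with the Hugging Face transformers and datasets libraries is below; the base model is just one reasonable choice, and tickets.csv (columns “text” and “label”) is an invented stand-in for internal data:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # illustrative small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical internal data: the enterprise context an off-the-shelf model lacks.
dataset = load_dataset("csv", data_files="tickets.csv")["train"].train_test_split(test_size=0.2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ticket-classifier", num_train_epochs=3),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```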

On the same page

Feiyu Xu, SVP and global head of AI at SAP, concurs, adding that AI projects have the best chance of success when there’s alignment between lines of business and AI technology teams. This alignment can foster “focused” and “scalable” solutions for delivering AI services, she asserts, and touch on ethical problems that might crop up during ideation, development, or deployment.

“The best use cases of AI-powered applications ensure the AI technologies are fully embedded and automated for end users. Also, AI systems work best when experts securely use real business data to train, test, and deploy the AI services,” Xu said. “Companies need to clearly define guidelines and guardrails to ensure that ethical issues are carefully considered in the development of new AI services from the outset. In addition, it’s important to include external, independent experts to review cases and topics in question on a regular basis.”

On the subject of data-related challenges in AI deployment, Xu points to the emergence of platform-as-a-service solutions designed to help both developers and non-developers link data sources across different backend systems. Torch.AI, for instance, connects apps, systems, services, and databases to enable reconciliation and processing of both unstructured and structured data for AI applications.
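Torch.AI’s platform itself is proprietary, but the general pattern Xu points to, joining structured records with unstructured text before handing them to an AI application, can be sketched in plain Python. Everything below (the database, table, file layout, and fields) is invented for illustration:

```python
import json
import sqlite3
from pathlib import Path

# Structured side: customer records from a hypothetical operational database.
conn = sqlite3.connect("crm.db")
customers = {
    row[0]: {"customer_id": row[0], "segment": row[1]}
    for row in conn.execute("SELECT customer_id, segment FROM customers")
}

# Unstructured side: free-text support notes stored as files named by customer id.
for note_path in Path("support_notes").glob("*.txt"):
    customer_id = note_path.stem
    if customer_id in customers:
        customers[customer_id]["notes"] = note_path.read_text()

# Reconciled records become a single feed a downstream model can consume.
with open("ai_feed.jsonl", "w", encoding="utf-8") as f:
    for record in customers.values():
        f.write(json.dumps(record) + "\n")
```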

“AI plays a key role in empowering companies and industries to become intelligent enterprises,” Xu said. “Most users of AI have little experience in software development to design, change, and improve their own workflows and business applications. This is where an intuitive, no-code development environment for functions like intelligent process automation, workflow management, and robotic process automation can really help.”



Author: Kyle Wiggers
Source: VentureBeat
