AI & Robotics News

Getting AI from the lab to production

The enterprise is eager to push AI out of the lab and into production environments, where it will hopefully usher in a new era of productivity and profitability. But this is not as easy as it seems, because AI tends to behave very differently in the test bed than it does in the real world.

Getting over this hump between the lab and actual applications is quickly emerging as the next major objective in the race to deploy AI. Since intelligent technology requires a steady flow of reliable data to function properly, a controlled environment is not necessarily the proving ground that it is for traditional software. With AI, the uncontrolled environment is now the real test, and many models are failing.

The ‘Valley of Death’

Crossing this “Valley of Death” has become so crucial that some organizations are elevating it to an executive-level core competency. Valerie Bécaert, senior director of research and scientific programs at ServiceNow’s Advanced Technology Group (ATG), now leads the company’s research into bridging this gap. As she explained to Workflow recently, it’s not just a matter of training the AI properly, but of transforming organizational culture to improve AI skills and foster greater acceptance of risk.

One technique the group is working on is training AI with limited data so it can learn new truths on its own. Real-world data environments, after all, are vastly larger than the lab, with data coming in from countless sources. Instead of simply throwing rudimentary models into this chaotic environment, low-data learning provides a simplified pathway to more effective models that can extrapolate more complex conclusions from their acquired knowledge.
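
Transfer learning is one common form of low-data learning: a model pretrained on abundant data is adapted to a new task with only a handful of labeled examples. Below is a minimal PyTorch sketch assuming that approach; the three-class head and the dummy batch are placeholders for illustration, not ServiceNow’s actual method:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pretrained on a large public dataset
# (ImageNet here) and freeze it -- this is the "acquired
# knowledge" that the few new examples build on.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace only the final layer with a small head for the new task
# (three target classes is an arbitrary placeholder).
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# Only the new head is trained, so a few dozen labeled examples
# can be enough to reach usable performance.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for
# the small labeled set from the target domain.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
```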

A recent report by McKinsey & Co., highlighted some of the ways leading AI practitioners – which the company defines as those who can attribute 20%of the EBIT to AI – are pushing projects into production steadily and reliably. Among core best practices, the company defined the following:

  • Employ design thinking when developing tools
  • Test performance internally before deployment and track performance in production to ensure outcomes show steady improvement (see the monitoring sketch after this list)
  • Establish well-defined data governance processes and protocols
  • Develop technology personnel’s AI skills
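
The second practice above, tracking performance in production, can start as simply as comparing live accuracy over a sliding window against the pre-deployment baseline. The sketch below shows one hypothetical way to wire that up; the class name, window size, and tolerance are illustrative assumptions, not a McKinsey prescription:

```python
from collections import deque

class PerformanceTracker:
    """Tracks live accuracy over a sliding window and flags
    degradation against the pre-deployment baseline."""

    def __init__(self, baseline_accuracy: float, window: int = 1000,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, prediction, actual) -> None:
        """Log one prediction once its ground truth arrives."""
        self.outcomes.append(1 if prediction == actual else 0)

    def is_degraded(self) -> bool:
        """True if live accuracy has slipped below baseline - tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough live data yet
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return live_accuracy < self.baseline - self.tolerance

# Usage: after each prediction whose outcome is known,
#   tracker.record(pred, truth)
# and alert the team whenever tracker.is_degraded() returns True.
tracker = PerformanceTracker(baseline_accuracy=0.92)
```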

Other evidence suggests that the cloud provides an advantage when deploying AI into production environments. In addition to its broad scalability, the cloud offers a wide range of ready-made tools and capabilities, such as natural language understanding (NLU) and facial recognition.
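
For example, AWS Comprehend, one such cloud NLU service (Google Cloud and Azure offer close equivalents), reduces sentiment analysis to a single API call. The sample text and region below are placeholders, and credentials are assumed to be configured in the environment:

```python
import boto3

# Managed cloud NLU: no model to train or host.
comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="The rollout went smoothly and users are happy.",
    LanguageCode="en",
)
print(response["Sentiment"], response["SentimentScore"])
```

A managed service like this trades some control over the model for not having to build, train, or operate it at all.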

AI’s accuracy and precision

Still, part of the problem of putting AI into production lies with the AI model itself. Android developer Harshil Patel noted on Neptune recently that most models make predictions with high accuracy but low precision: right on most examples overall, yet wrong on a large share of the cases they flag as positive. This is a problem for business applications that require exact measurements and have little tolerance for error.
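
The distinction is easy to see with standard metrics. In the hypothetical run below, a classifier on imbalanced data scores 91% accuracy even though only a quarter of the cases it flags as positive actually are:

```python
from sklearn.metrics import accuracy_score, precision_score

# Hypothetical results: 100 cases, only 5 true positives. The model
# flags 8 cases as positive, and only 2 of those flags are correct.
y_true = [1] * 5 + [0] * 95
y_pred = [1, 1, 0, 0, 0] + [1] * 6 + [0] * 89

print(f"accuracy:  {accuracy_score(y_true, y_pred):.2f}")   # 0.91
print(f"precision: {precision_score(y_true, y_pred):.2f}")  # 0.25
```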

To counter this, organizations need to take greater care to eliminate outliers from training data, and to implement continuous monitoring to ensure bias and variance do not creep into the model over time. Another issue is class imbalance, which occurs when instances of one class are more common than those of another. This can skew results away from real-world experience, particularly as data sets from new domains are introduced.
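
Both countermeasures are routine in practice. The sketch below, using scikit-learn on synthetic data purely for illustration, filters outliers with a simple z-score rule and then compensates for class imbalance with inverse-frequency class weights:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.05).astype(int)  # ~5% positives: imbalanced

# Outlier filtering: drop rows more than 3 standard deviations
# from the feature mean before training (a simple z-score rule).
z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
mask = (z < 3).all(axis=1)
X_clean, y_clean = X[mask], y[mask]

# Class imbalance: weight each class inversely to its frequency so
# the rare class isn't drowned out during training.
model = LogisticRegression(class_weight="balanced")
model.fit(X_clean, y_clean)
```

Resampling, either oversampling the rare class or undersampling the common one, is an alternative when an algorithm does not support class weights.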

In addition to the technological inhibitors to production-ready AI, there are also cultural factors to consider, says Andrew Ng, adjunct professor at Stanford University and founder of deeplearning.ai. AI tends to disrupt the work of numerous stakeholders in the enterprise, and without their buy-in, hundreds of hours of development and training go to waste. This is why AI projects should not only be effective and helpful to those who will use them, but explainable as well. The first step in any project, then, should be defining the scope, in which technical and business teams meet to determine the intersection of “what AI can do” and “what is most valuable to the business.”

The history of technology is rife with examples of solutions in search of problems. AI has the advantage of being so flexible that one failed solution can be quickly reconfigured and redeployed, but this can become costly and futile if the right lessons are not learned from the failures.

As the enterprise moves forward with AI, the challenge will not be to push the technology to its conceivable limits, but to ensure that the effort put into developing and training AI models is focused on solving the real problems of today, while retaining the ability to pivot to the problems that emerge in the future.

Author: Arthur Cole
Source: VentureBeat
