Now for the hard part: Deploying AI at scale

The enterprise is quickly discovering the many ways AI can streamline and improve processes, but so far, most of these successes are happening at limited scale. Like any technology, AI functions well in controlled situations, but pushing it far and wide throughout an increasingly diversified data ecosystem is not without its perils.

At scale, the enterprise is no longer a cohesive, fully integrated digital environment, but a loose collection of processes, platforms, and cultures. Of course, AI promises to change all that (or at least paper it over), but in a Catch-22, it really can’t function at scale until it achieves scale — meaning there is still a lot of work to do before organizations can push the value proposition of AI to its limits.

Assembly-line AI

Researchers at McKinsey & Co. liken this problem to a company that builds each product from scratch, with no standardization or consistency among components, processes, or quality control. For AI to scale across the enterprise, it must be placed on the digital equivalent of a production line where teams across the organization can turn out production-ready, risk-compliant, and reliable models.

Fortunately, AI tools and platforms have evolved to the point where more governable, assembly-line approaches to development are possible, most of which are being harnessed under the still-evolving MLOps model. MLOps is already helping to cut the development cycle for AI projects from months, and sometimes years, down to as little as two weeks. Using standardized components and other reusable assets, organizations can create consistently reliable products with all the embedded security and governance policies needed to scale them up quickly and easily.
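
McKinsey's production-line framing is easier to picture with a small, concrete example. The sketch below illustrates the idea of a standardized, reusable training step with governance gates built in; it assumes a hypothetical scikit-learn workflow and an invented ModelCard checklist rather than any particular MLOps product.

```python
# Minimal sketch of an "assembly line" for models: a reusable training step
# with governance checks baked in. The library choice (scikit-learn) and the
# checklist items are illustrative assumptions, not a prescribed MLOps stack.
from dataclasses import dataclass, field

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


@dataclass
class ModelCard:
    """Standardized metadata every model must ship with before promotion."""
    name: str
    owner: str
    training_rows: int
    holdout_accuracy: float
    checks_passed: dict = field(default_factory=dict)


def build_and_register(X, y, name: str, owner: str, min_accuracy: float = 0.8):
    """Reusable production step: same preprocessing, training, and gates every time."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Standardized component: identical scaling + model template across teams.
    pipeline = Pipeline([("scale", StandardScaler()),
                         ("model", LogisticRegression(max_iter=1000))])
    pipeline.fit(X_train, y_train)
    accuracy = pipeline.score(X_test, y_test)

    # Embedded governance gates: a model only "leaves the line" if it passes.
    checks = {
        "meets_accuracy_floor": accuracy >= min_accuracy,
        "has_named_owner": bool(owner),
        "trained_on_enough_data": len(X_train) >= 100,
    }
    card = ModelCard(name, owner, len(X_train), accuracy, checks)
    if not all(checks.values()):
        raise RuntimeError(f"Model '{name}' blocked by governance checks: {checks}")
    return pipeline, card


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic, separable target
    model, card = build_and_register(X, y, name="churn-demo", owner="data-team")
    print(card)
```

In a real pipeline, the registration step would hand the model and its card off to a model registry and CI/CD system; the point here is only that every team runs the same template through the same gates.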

Full scalability will not happen overnight, of course. Accenture’s Michael Lyman, North American lead for strategy and consulting, says there are three phases of AI implementation. It begins with the initial proof of concept, followed by a period of strategic scaling, and finally a point at which the organization is “industrialized for growth.” Each phase will require further refinement of three key principles:

  • Driving “intentional AI” by managing expectations and establishing clearly defined strategies, operating models and timelines
  • Turning off data noise by carefully managing both the internal and external data being used to train AI. Like any other technology, AI is subject to garbage-in/garbage-out (see the data-screening sketch after this list)
  • Treating AI as a team sport by encouraging cross-platform and multi-disciplinary deployment
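
Accenture's second principle is the most readily automated of the three. The sketch below (a minimal illustration assuming pandas, with invented thresholds and column names) shows the kind of gate that can keep noisy data from ever reaching a model.

```python
# Minimal sketch of "turning off data noise": screen training data before it
# ever reaches a model. The thresholds and column names are illustrative
# assumptions, not a specific vendor's data-quality rules.
import pandas as pd


def screen_training_data(df: pd.DataFrame, max_null_ratio: float = 0.05) -> pd.DataFrame:
    """Reject or clean noisy inputs so garbage never makes it in."""
    problems = []

    # Gate 1: too many missing values in any column is a hard stop.
    null_ratios = df.isna().mean()
    noisy_cols = null_ratios[null_ratios > max_null_ratio]
    if not noisy_cols.empty:
        problems.append(f"columns over the null threshold: {dict(noisy_cols.round(3))}")

    # Gate 2: exact duplicate rows usually indicate an upstream ingestion bug.
    dup_count = int(df.duplicated().sum())
    if dup_count:
        problems.append(f"{dup_count} duplicate rows")

    if problems:
        raise ValueError("Training data rejected: " + "; ".join(problems))

    # Light cleaning for data that passed the gates.
    return df.dropna().drop_duplicates()


if __name__ == "__main__":
    raw = pd.DataFrame({"revenue": [10.0, 12.5, None, 12.5],
                        "region": ["NA", "EU", "EU", "EU"]})
    try:
        screen_training_data(raw)
    except ValueError as err:
        print(err)  # this sample trips both gates
```

In practice these rules would live in a shared library or data-contract layer so that every training pipeline applies them the same way.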

In a recent survey of 1,500 CXOs, Accenture found that the two common denominators among companies that have industrialized AI for growth are broad democratization across the workforce and tight alignment with growth priorities.

Unique enterprise, unique problems

No two companies are alike, however, so everyone’s transition to full-scale AI will be different. Josh Perkins, CTO of business platform developer AHEAD, notes that the key to a successful scaling strategy is to identify your specific challenges ahead of time so you can start plotting the solution. To accelerate this process, he recommends a series of steps, such as starting out with the best use cases and then drafting a playbook to help guide managers through the training and development process. From there, you’ll need to hone your institutional skills around key functions like data and security analysis, process automation and the like. Improving data delivery and quality assurance will also come into play, as will anticipating the cultural shift that will arise under this new working environment. And, of course, all of this will require continual monitoring and evaluation to ensure the program remains aligned with goals and objectives.
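
One piece of that continual monitoring can be made concrete. The sketch below uses the population stability index (a common drift rule of thumb, assumed here purely for illustration) to compare live feature values against the training baseline and flag when they diverge.

```python
# Minimal sketch of the "continual monitoring" step: compare live feature
# distributions against the training baseline and flag drift. The metric
# (population stability index) and the 0.2 alert threshold are common rules
# of thumb, assumed here for illustration.
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) by flooring empty bins at a tiny probability.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # what the model was trained on
    live = rng.normal(loc=0.6, scale=1.0, size=5_000)       # what production now looks like
    psi = population_stability_index(baseline, live)
    print(f"PSI = {psi:.3f} -> {'investigate drift' if psi > 0.2 else 'stable'}")
```

A production setup would run a check like this on a schedule for every monitored feature and route any alerts back to the model's owner.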

Clearly, this will be nowhere near as easy as it sounds. As Sumanth Vakada, founder and CEO of Qualetics Data Machines, explained recently, scaling AI involves not just the transformation of the organizational operating model, but a bi-directional implementation framework that enables key actions to flow from both the top down and the bottom up.

And don’t expect all of this to happen on a limited budget, either. Few organizations are starting this journey from an ideal place. Most are hampered by insufficient data infrastructure, a shortage of dedicated resources, siloed work cultures, and a host of other limiting factors.

But this transformation is not optional. The world is moving in this direction, so to delay is to risk being left behind. This means the question facing enterprise executives today is not whether to scale AI, but how to do it in the least disruptive manner and with maximum ROI.



Author: Arthur Cole
Source: VentureBeat
