How to navigate the AI jungle with pragmatic steps for enterprise

The infinite monkey theorem holds that a monkey typing for an infinite amount of time would eventually produce the complete works of William Shakespeare. With ChatGPT, OpenAI has unleashed something that feels like a form of this.

ChatGPT, or generative AI more broadly, is everything, everywhere, all at once. It feels like magic: Ask a question about anything and get a clear answer. Imagine a picture in your mind and see it instantly visualized. Seemingly overnight, people began proclaiming generative AI as either an existential threat to humanity or the most important technological advancement of all time.

In previous technological waves like machine learning (ML), a consensus formed among experts about the technology’s capabilities and limitations. But with generative AI, the disagreement among even AI scholars is striking. A recent leak of a Google researcher’s memo suggesting that early generative AI pioneers had “no moat” sparked a fiery debate about the very nature of AI.

Just a few months ago, the trajectory of AI seemed to parallel previous trends like the internet, cloud and mobile technology. Overhyped by some and dismissed as “old news” by others, AI has had diverse effects on fields like healthcare, automotive and retail. But the game-changing experience of interacting with an AI that seems to comprehend and respond intelligently has led to unprecedented user adoption; ChatGPT attracted 100 million users within two months. This has, in turn, ignited a frenzy of both zealous endorsements and vehement rebuttals.

It’s now evident that generative AI is set to bring about significant changes across enterprises at a pace that far outstrips previous technological shifts. As CIOs and other technology executives grapple with aligning their strategies to this unpredictable yet influential trend, a few guidelines can help steer them through these evolving currents.

Create opportunities for AI experimentation

Understanding AI’s potential can be overwhelming given its expansive capabilities. To simplify, encourage experimentation in concrete, manageable areas such as marketing, customer service and other straightforward applications. Prototype and pilot internally before defining complete solutions or working through every exception case (for example, workflows to manage AI hallucinations).

Avoid lock-in, but buy to learn

The speed of adoption of generative AI means that entering into long-term contracts with solution providers carries more risk than ever. Traditional category leaders in HR, finance, sales, support, marketing and R&D could face a seismic shift due to the transformative potential of AI. In fact, our very definitions of these categories may undergo a complete metamorphosis. Therefore, vendor relationships should be flexible due to the potentially catastrophic cost of locking in solutions that do not evolve.

That said, the most effective solutions often come from those with deep domain expertise. A select group of these providers will seize the opportunities presented by AI in agile and inventive ways, yielding returns far beyond those typically associated with the implementation of enterprise applications. Engaging with potential revolutionaries can address immediate practical needs within your company and illuminate the broad patterns of AI’s potential impact.

Current market-leading applications may not be able to pivot fast enough, so expect to see a wave of startups launched by veterans who’ve left their motherships.

Enable human + AI systems

Large language models (LLMs) will upend sectors like customer support that rely on humans to answer questions. Incorporating human + AI systems will therefore provide key benefits now and generate data for further improvement. Reinforcement learning from human feedback (RLHF) has been core to the acceleration of these models’ advancement and will be critical to how well and how quickly such systems adapt to and impact business. Systems that produce data capable of powering future AI become an asset that accelerates the creation of ever more automated models and functions.
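As a concrete illustration, the sketch below shows what a minimal human-in-the-loop support workflow might look like: a model drafts a reply, an agent approves or edits it, and each draft/final pair is logged as a data asset for later evaluation or RLHF-style training. The function names, file path and LLM call are hypothetical placeholders, not a reference implementation.

```python
# Hypothetical human + AI support loop: the model drafts, the human decides,
# and every interaction is logged so it can later fuel evaluation or training.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SupportInteraction:
    ticket_id: str
    customer_question: str
    model_draft: str
    agent_final: str
    agent_accepted_draft: bool
    timestamp: str

def draft_reply(question: str) -> str:
    """Placeholder for a call to whichever LLM service you are piloting."""
    return f"[draft reply to: {question}]"

def handle_ticket(ticket_id: str, question: str, agent_review) -> SupportInteraction:
    draft = draft_reply(question)
    final = agent_review(draft)  # a human agent edits or approves the draft
    record = SupportInteraction(
        ticket_id=ticket_id,
        customer_question=question,
        model_draft=draft,
        agent_final=final,
        agent_accepted_draft=(final == draft),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Each logged pair becomes part of a growing dataset for future models.
    with open("support_feedback.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

# Example: an agent who simply approves the draft.
# handle_ticket("T-1001", "How do I reset my password?", agent_review=lambda d: d)
```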

This time, believe in a hybrid strategy

With cloud computing, I ridiculed hybrid on-premise and cloud strategies as mere cloud washing; they were feeble attempts by traditional vendors to maintain their relevance in a rapidly evolving landscape. The remarkable economies of scale and the pace of innovation made it clear that any applications attempting to straddle both realms were destined for obsolescence. The triumphs of Salesforce, Workday, AWS and Google, among others, firmly quashed the notion that a hybrid model would be the industry’s dominant paradigm.

As we embark on the era of generative AI, the diversity of opinions amongst the deepest experts, coupled with the transformative potential of information, signals that it may be premature, even perilous, to entrust the entirety of our efforts to public providers or any one strategy.

With cloud applications, the shift was straightforward: We relocated the environment in which the technology operated. We didn’t provide our cloud providers with unbounded access to sales figures and financial metrics within those applications. In contrast, with AI, information becomes the product itself. Every AI solution thirsts for data and requires it to evolve and advance.

The struggle between public and private AI solutions will be highly contingent on the context and the technical evolution of model architectures. Business and commercial efforts, combined with the importance of real and perceived progress, justify public consumption and partnerships, but in most cases, the gen AI future will be hybrid — a mix of public and private systems.
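One way such a hybrid arrangement is often sketched is a simple routing policy: prompts that touch sensitive data stay on a privately hosted model, while general requests can go to a public API. The patterns and deployment names below are invented for illustration; a real policy would be far more nuanced.

```python
# Illustrative hybrid routing: keep sensitive prompts on private infrastructure,
# send general-purpose prompts to a public provider. Patterns and model names
# are placeholders, not recommendations.
import re

SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",                      # SSN-like identifiers
    r"(?i)\b(revenue|salary|payroll|forecast)\b",  # internal financial terms
]

def is_sensitive(prompt: str) -> bool:
    return any(re.search(p, prompt) for p in SENSITIVE_PATTERNS)

def route(prompt: str) -> str:
    """Decide which deployment should handle this prompt."""
    if is_sensitive(prompt):
        return "private-self-hosted-model"  # data never leaves your environment
    return "public-api-model"               # broader capability for general asks

# route("Summarize our Q3 payroll forecast")   -> "private-self-hosted-model"
# route("Draft a friendly out-of-office note") -> "public-api-model"
```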

Validate the limitations of AI — repeatedly

The generative AI capable of crafting an essay, creating a presentation or setting up a website about your new product differs significantly from the predictive AI technology driving autonomous vehicles or diagnosing cancer via X-rays. How you define and approach the problem is a critical first step that requires an understanding of the scope of capabilities that various AI approaches offer.

Consider this example. If your company is trying to leverage past production data to predict its ability to meet next quarter’s demand, you have structured data as inputs and a clear target against which to assess the quality of the prediction. Conversely, you might task an LLM with analyzing company emails and producing a two-page memo on the likelihood of meeting this quarter’s demand. These approaches seem to serve a similar purpose but are fundamentally distinct in nature.
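To make the contrast concrete, here is a minimal sketch of the predictive framing: structured inputs, a numeric target and a measurable error. The generative framing (an LLM reading emails and writing a memo) offers no equivalent single score to validate against. The columns and figures are invented purely for illustration.

```python
# Predictive framing: structured history in, numeric forecast out, with an
# error metric you can track. All data here is made up for the example.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

history = pd.DataFrame({
    "units_ordered":   [1200, 1350, 1100, 1500, 1420],
    "machine_hours":   [310,  340,  295,  380,  365],
    "units_delivered": [1150, 1300, 1080, 1440, 1390],
})

X = history[["units_ordered", "machine_hours"]]
y = history["units_delivered"]
model = LinearRegression().fit(X, y)

next_quarter = pd.DataFrame({"units_ordered": [1600], "machine_hours": [400]})
print("Predicted deliveries:", round(model.predict(next_quarter)[0]))
print("In-sample MAE:", round(mean_absolute_error(y, model.predict(X)), 1))
```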

The personification of AI makes it more relatable, engaging or even contentious. This can add value, facilitating tasks that reliable predictions alone may not be able to tackle. For instance, asking the AI to construct an argument for why a prediction may or may not eventuate can stimulate fresh perspectives on questions with minimal effort. However, it should not be applied or interpreted in the same manner as predictive AI models.
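A prompt along these lines (the wording is purely illustrative) captures that use: the model is asked to argue both sides of a forecast rather than to produce a number of its own.

```python
# Illustrative prompt: use the LLM to stress-test a prediction, not to make one.
forecast = "We will meet 95% of next quarter's demand."

prompt = (
    f"Our operations model predicts: '{forecast}'.\n"
    "1. Give the three strongest reasons this prediction could hold.\n"
    "2. Give the three strongest reasons it could fail.\n"
    "3. List what additional data would most change your assessment."
)
# Send `prompt` to whichever LLM you are evaluating and review the output
# as a discussion aid, not as a forecast in its own right.
```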

It’s also important to anticipate that these boundaries may shift. The generative AI of the future may very well draft the first — or final — versions of the predictive models you’ll use for your production planning.

Demand that leadership iterate and learn together

In crisis or fast-moving situations, leadership is paramount. Experts will be needed, but hiring a management consultancy to create a moment-in-time AI impact study for your firm is more likely to reduce your ability to navigate this change than to prepare you for it.

Because AI is evolving so quickly, it’s attracting a lot more attention than most new technologies. Even for companies in industries outside of high tech, C-suite executives are regularly seeing AI demos and reading about generative AI in the press. Make sure you regularly update your C-suite about new developments and potential impacts on core functions and business strategies so they connect the right dots. Use demos and prototyping to show concrete relevance to your needs.

Meanwhile, CEOs should drive this level of engagement from their technology leaders, not just to scale learning across the organization, but to assess the efficacy of their leadership. This collective and iterative learning approach is a compass to navigate the dynamic and potentially disruptive landscape of AI.

Conclusion

For centuries, the quest for human flight remained grounded as inventors fixated on mimicking the flapping-wing designs of birds. The tide turned with the Wright brothers, who reframed the problem, concentrating on fixed-wing designs and the principles of lift and control rather than replicating bird flight. This paradigm shift propelled the first successful human flight.

In the realm of AI, a similar reframing is vital for each industry and function. Companies that perceive AI as a dynamic field ripe for exploration, discovery and adaptation will find their ambitions taking flight. Those who approach it with the strategies that worked for earlier platform shifts (cloud, mobile) will be forced to watch the evolution of their industries from the ground.

Narinder Singh was a cofounder of Appirio and is currently the CEO at LookDeep Health.

Author: Narinder Singh, LookDeep Health
Source: VentureBeat
