Generative AI is a powerful technology that can create new content, insights and solutions from data. But how can businesses leverage it to gain a competitive edge and accelerate their growth? Matt Wood, VP of product at AWS, shared his insights on how generative AI can create a flywheel effect for business growth in a recent interview with VentureBeat.
Wood said that generative AI can be applied to four major buckets of use cases. The first three are relatively well known and are already being implemented by many businesses: generative interfaces; search ranking and relevance; and knowledge discovery.
The last use case bucket is automated decision support systems. This is the hardest, but the most interesting and impactful one, he said, since it can enable businesses to solve complex problems with the help of autonomous intelligent systems.
And it’s what companies can build a flywheel around. When done correctly, the flywheel can create a huge advantage over competitors, said Wood.
Impact of LLMs in the enterprise
The AWS VP will be speaking at VB Transform 2023 next week in San Francisco, a networking event for technical executives seeking to understand and implement generative AI. I’ll be moderating a panel where Wood will be joined by Gerrit Kazmaier, VP and GM for data and analytics at Google. The two execs will talk more about the impact of large language models (LLMs) for enterprise leaders, and we’ll likely go deeper on this flywheel concept.
Cybersecurity is a good example to illustrate the flywheel potential of LLMs for other enterprises, Wood said. Let’s say a set of threats starts to emerge in your application. The signals are subtle because they’re split across multiple services and architectures, so in only a few places do you begin to see faint indications of a cyberattack.
Because embeddings can find correlations between data points, LLMs are good at picking up those subtle differences and effectively correlating them into a larger signal.
“So what would otherwise be split across a diluted surface area now stands out like a flashing siren,” said Wood.
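To make the idea concrete, here is a minimal sketch of how scattered security events might be correlated through embeddings. The embed function is a stand-in for whatever embedding model an organization uses (for example one served through Amazon Bedrock or an open-source encoder); the function name, similarity threshold, and clustering step are illustrative assumptions, not a specific AWS API.

```python
# Hypothetical sketch: correlating scattered security events via embeddings.
from typing import Callable, List
import numpy as np

def correlate_events(events: List[str],
                     embed: Callable[[List[str]], np.ndarray],
                     threshold: float = 0.8) -> List[List[int]]:
    """Group event texts whose embeddings are highly similar, so weak
    signals spread across services surface as one combined cluster."""
    vecs = embed(events)                                   # (n, d) matrix
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T                                   # cosine similarity
    clusters, assigned = [], set()
    for i in range(len(events)):
        if i in assigned:
            continue
        members = [j for j in range(len(events))
                   if sims[i, j] >= threshold and j not in assigned]
        assigned.update(members)
        clusters.append(members)
    # Clusters that span several distinct services are the "flashing siren".
    return [c for c in clusters if len(c) > 1]
```

In practice, the clusters a step like this surfaces are what would then be handed to an LLM for investigation.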
Investigating root causes of cyberattacks
Going deeper with this example, LLMs also let you automatically investigate the root cause of that attack, providing a natural-language explanation of why it’s happening. From there, LLMs can tell you specifically what is being threatened, then suggest how to defend against it, said Wood.
Finally, once you’ve reviewed the suggestion and you’re happy with it, you can just click a button and the LLM system will execute the code to remediate the attack or vulnerability or operational problem — whatever it might be.
“Compare that to the level of human investment and high-judgment decisions that would need to be made today in order to get to that level of specificity,” said Wood. “And just, you know, going and finding all those log entries and then figuring out the attack vectors and then figuring out what to do, takes a remarkable amount of skill, a remarkable amount of time.”
He added: “Imagine all of that is happening all the time, automatically under the hood. And what you’re presented with is not a random set of ones and zeros that are operating slightly unusually, you’re presented with a full incident report, as if it was created by a set of humans, which you can interact with, and fine tune and revise.”
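A rough sketch of that investigate, report, approve, execute loop might look like the following. The llm callable, the prompts, and the run_remediation helper are hypothetical placeholders for whatever model and execution pipeline an organization actually uses; the point is that nothing runs until a human reviews the proposal.

```python
# Illustrative sketch only: investigate -> report -> approve -> remediate.
from typing import Callable, List

def handle_incident(correlated_events: List[str], llm: Callable[[str], str]) -> None:
    context = "\n".join(correlated_events)

    # 1. Ask the model for a natural-language root-cause explanation.
    report = llm(f"Explain the likely root cause of these related events:\n{context}")

    # 2. Ask for a concrete remediation, returned as an executable script.
    fix = llm(f"Given this incident report, propose a remediation as a shell script:\n{report}")

    # 3. Human-in-the-loop: nothing executes until a person approves it.
    print("=== Incident report ===\n", report)
    print("=== Proposed remediation ===\n", fix)
    if input("Apply remediation? [y/N] ").strip().lower() == "y":
        run_remediation(fix)

def run_remediation(script: str) -> None:
    # Placeholder: a real system would run this in a sandboxed, audited pipeline.
    print("Executing remediation (sketch only):\n", script)
```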
Constantly improving feedback loop
Generative AI can also create a feedback loop that improves the performance of the system over time.
“If you take the feedback from these sorts of interactions, the improvements you would make to a threat report and the remediation, for example, then if you bake those into the large language model, the language model will perform better, and you’ll get more users,” said Wood. “If you get more users, you’ll get more feedback. If you get more feedback, you’ll get an improved model. If you get a better model, you get more feedback.”
All of your interactions make the threat report better for the next time. And so that’s the flywheel that organizations can spin. “Flywheels are a very rare technology as it turns out, but there is a real flywheel here with generative AI,” said Wood.
He added: “The earlier you can spin that as an organization and the faster you can spin it, you’ll be able to create much more intelligence, much more automation, much more accuracy, much less hallucination as you go, and at some point, if you can spin that flywheel early enough and quickly enough, then you’ll have this enormous gap against your competitors, and competitors won’t be able to catch up at any cost because that’s how valuable the flywheel is.”
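One way to picture the data side of that flywheel: every analyst correction to a generated report becomes a training example for the next round of tuning. The file layout, field names, and the fine_tune step referenced in the comment below are assumptions for illustration only, not a description of any particular vendor’s pipeline.

```python
# Hedged sketch of the feedback flywheel: capture analyst edits to generated
# reports and accumulate them as training pairs for periodic fine-tuning.
import json
from pathlib import Path

FEEDBACK_LOG = Path("feedback.jsonl")

def record_feedback(prompt: str, model_output: str, analyst_revision: str) -> None:
    """Store each interaction; the analyst's revision is the preferred label."""
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps({"prompt": prompt,
                            "rejected": model_output,
                            "chosen": analyst_revision}) + "\n")

def build_training_set() -> list:
    """Periodically convert accumulated feedback into fine-tuning examples."""
    return [json.loads(line) for line in FEEDBACK_LOG.read_text().splitlines()]

# A scheduled fine_tune(base_model, build_training_set()) step would then close
# the loop: better model -> more users -> more feedback -> better model.
```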
Author: Matt Marshall
Source: Venturebeat