
Why generative AI is a double-edged sword for the cybersecurity sector



Much has been made of the potential for generative AI and large language models (LLMs) to upend the security industry. On the one hand, the positive impact is hard to ignore. These new tools may be able to help write and scan code, supplement understaffed teams, analyze threats in real time, and perform a wide range of other functions to help make security teams more accurate, efficient and productive. In time, these tools may also be able to take over the mundane and repetitive tasks that today’s security analysts dread, freeing them up for the more engaging and impactful work that demands human attention and decision-making. 

On the other hand, generative AI and LLMs are still in their relative infancy — which means organizations are still grappling with how to use them responsibly. On top of that, security professionals aren’t the only ones who recognize the potential of generative AI. What’s good for security professionals is often good for attackers as well, and today’s adversaries are exploring ways to use generative AI for their own nefarious purposes. What happens when something we think is helping us begins hurting us? Will we eventually reach a tipping point where the technology’s potential as a threat eclipses its potential as a resource?

Understanding the capabilities of generative AI and how to use it responsibly will be critical as the technology grows both more advanced and more commonplace. 

Using generative AI and LLMs 

It’s no overstatement to say that generative AI models like ChatGPT may fundamentally change the way we approach programming and coding. True, they are not capable of creating code completely from scratch (at least not yet). But if you have an idea for an application or program, there’s a good chance gen AI can help you execute it. It’s helpful to think of such code as a first draft. It may not be perfect, but it’s a useful starting point. And it’s a lot easier (not to mention faster) to edit existing code than to generate it from scratch. Handing these base-level tasks off to a capable AI means engineers and developers are free to engage in tasks more befitting of their experience and expertise. 
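
To make the "first draft" idea concrete, here is a minimal sketch, assuming the openai Python client is installed and an API key is set in the environment; the model name and prompt are placeholders for this example, and the returned code should be treated as exactly that: a draft to review and test, nothing more.

```python
# Minimal sketch: asking an LLM for a first-draft function.
# Assumes the `openai` Python client is installed and OPENAI_API_KEY is set;
# the model name and prompt are placeholders for this example.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a Python function that validates an email address with a "
    "regular expression and returns True or False. Include a docstring."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)

# Treat `draft` as a starting point: review it, write tests for it, and
# scan it for vulnerabilities before it goes anywhere near production.
```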


That being said, gen AI and LLMs create output based on existing content, whether that comes from the open internet or the specific datasets that they have been trained on. That means they are good at iterating on what came before, which can be a boon for attackers. For example, in the same way that AI can create iterations of content using the same set of words, it can create malicious code that is similar to something that already exists, but different enough to evade detection. With this technology, bad actors will generate unique payloads or attacks designed to evade security defenses that are built around known attack signatures.
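
To see why defenses keyed to known signatures struggle with machine-generated variants, consider a toy sketch (the "payload" below is just a harmless string standing in for real malicious code): an exact-hash blocklist catches the original sample but misses a functionally identical variant with a renamed variable and different whitespace.

```python
import hashlib


def sha256(data: str) -> str:
    """Return the hex SHA-256 digest of a string."""
    return hashlib.sha256(data.encode()).hexdigest()


# A harmless stand-in for some known-bad payload.
original = "cmd = request.args.get('c'); run(cmd)"

# A functionally equivalent variant: renamed variable, extra whitespace.
variant = "command = request.args.get('c') ;  run(command)"

# A naive signature database keyed on exact hashes.
known_bad_signatures = {sha256(original)}

for sample in (original, variant):
    flagged = sha256(sample) in known_bad_signatures
    print(f"{sample!r}: {'BLOCKED' if flagged else 'missed'}")

# The original is blocked, the variant slips through; that is exactly the
# gap that behavior- and anomaly-based detection tries to close.
```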

One way attackers are already doing this is by using AI to develop webshell variants, malicious code used to maintain persistence on compromised servers. Attackers can feed an existing webshell into a generative AI tool and ask it to create iterations of the malicious code. These variants can then be used, often in conjunction with a remote code execution (RCE) vulnerability, on a compromised server to evade detection. 

LLMs and AI give way to more zero-day vulnerabilities and sophisticated exploits 

Well-financed attackers are also adept at reading and scanning source code to identify exploitable flaws, but this process is time-intensive and requires a high level of skill. LLMs and generative AI tools can help such attackers, and even those less skilled, discover and carry out sophisticated exploits by analyzing the source code of commonly used open-source projects or by reverse engineering commercial off-the-shelf software. 

In most cases, attackers have tools or plugins written to automate this process. They’re also more likely to use open-source LLMs, as these don’t have the same protection mechanisms in place to prevent this type of malicious behavior and are typically free to use. The result will be an explosion in the number of zero-day hacks and other dangerous exploits, similar to the MOVEit and Log4Shell vulnerabilities that enabled attackers to exfiltrate data from vulnerable organizations. 

Unfortunately, the average organization already has tens or even hundreds of thousands of unresolved vulnerabilities lurking in its code base. As programmers introduce AI-generated code without scanning it for vulnerabilities, we’ll see this number rise due to poor coding practices. Naturally, nation-state attackers and other advanced groups will be ready to take advantage, and generative AI tools will make it easier for them to do so. 
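
One way to keep that number from climbing is to treat AI-generated code like any other untrusted contribution and gate it on a static security scan before it merges. Below is a minimal sketch, assuming the open-source Bandit scanner for Python is installed and the generated draft has been written to a file (the path is a placeholder).

```python
# Minimal sketch: gate AI-generated Python on a static security scan.
# Assumes the open-source Bandit scanner (`pip install bandit`) is on PATH;
# the file path below is illustrative.
import json
import subprocess
import sys

DRAFT_PATH = "generated/draft_module.py"  # wherever the AI draft was saved

result = subprocess.run(
    ["bandit", "-q", "-f", "json", DRAFT_PATH],
    capture_output=True,
    text=True,
)

report = json.loads(result.stdout or "{}")
issues = report.get("results", [])

for issue in issues:
    print(f"{issue['filename']}:{issue['line_number']} "
          f"[{issue['issue_severity']}] {issue['issue_text']}")

# Fail the pipeline if the scanner found anything, so unreviewed
# AI-generated code never lands in the main branch.
sys.exit(1 if issues else 0)
```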

Cautiously moving forward 

There are no easy solutions to this problem, but there are steps organizations can take to ensure they are using these new tools in a safe and responsible way. One way to do that is to do exactly what attackers are doing: By using AI tools to scan for potential vulnerabilities in their code bases, organizations can identify exploitable weaknesses in their code and remediate them before attackers can strike. This is particularly important for organizations looking to use gen AI tools and LLMs to assist in code generation. If an AI pulls in open-source code from an existing repository, it’s critical to verify that it isn’t bringing known security vulnerabilities with it. 
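
On that last point, one lightweight check is to look up any package an AI-generated snippet pulls in against a public vulnerability database before accepting it. Here is a minimal sketch using the OSV.dev query API; the package name and version are placeholders for whatever the generated code actually introduced.

```python
# Minimal sketch: check a dependency against the OSV.dev vulnerability
# database before accepting AI-suggested code that imports it.
# The package name and version below are placeholders.
import json
import urllib.request


def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return known OSV advisories for a specific package version."""
    query = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])


# Example: a dependency an AI-generated snippet might have introduced.
for vuln in known_vulnerabilities("pyyaml", "5.3.1"):
    print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```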

The concerns today’s security professionals have regarding the use and proliferation of generative AI and LLMs are very real — a fact underscored by a group of tech leaders recently urging an “AI pause” due to the perceived societal risk. And while these tools have the potential to make engineers and developers significantly more productive, it is essential that today’s organizations approach their use in a carefully considered manner, implementing the necessary safeguards before letting AI off its metaphorical leash. 

Peter Klimek is the director of technology within the Office of the CTO at Imperva.



