With generative AI tools like ChatGPT proliferating across enterprises, CISOs have to strike a very difficult balance: Performance gains versus unknown risks. Gen AI is delivering greater precision to cybersecurity but also being weaponized into new attack tools such as FraudGPT that advertise their ease of use for the next generation of attackers.
Solving the question of performance versus risk is proving a growth catalyst for cybersecurity spending. The market value of gen AI-based cybersecurity platforms, systems and solutions is expected to rise to $11.2 billion in 2032 from $1.6 billion in 2022. Canalys expects generative AI to support more than 70% of businesses’ cybersecurity operations within five years.
Weaponized AI strikes at the core of identity security
Gen AI attack strategies are focused on getting control of identities first. According to Gartner, human error in managing access privileges and identities caused 75% of security failures, up from 50% two years ago. Using gen AI to force human errors is one of the goals of attackers.
VentureBeat interviewed Michael Sentonas, president of CrowdStrike, to gain insights into how the cybersecurity leader is helping its customers take on the challenges of new, more lethal attacks that defy existing detection and response technologies.
Sentonas said that “the hacking [demo] session that [we] did at RSA [2023] was to show some of the challenges with identity and the complexity. The reason why we connected the endpoint with identity and the data that the user is accessing is because it’s a critical problem. And if you can solve that, you can solve a big part of the cyber problem that an organization has.”
Cybersecurity leaders are up for the challenge
Leading cybersecurity vendors are up for the challenge of fast-tracking gen AI apps through DevOps to beta and doubling down on their many models in development.
During Palo Alto Networks’ most recent earnings call, chairman and CEO Nikesh Arora emphasized the intensity the company is putting into gen AI, saying, “we’re doubling down, we’re quadrupling down to make sure that precision AI is deployed across every product. And we open up the floodgates of collecting good data with our customers for them to give them better security because we think that is the way we’re going to solve this problem to get real-time security.”
Toward resilience against AI-based threats
For CISOs and their teams to win the war against AI attacks and threats, gen AI-based apps, tools and platforms must become part of their arsenals. Attackers are out-innovating the most adaptive enterprises, sharpening their tradecraft to penetrate the weakest attack vectors. What’s needed is greater cyber-resilience and self-healing endpoints.
Absolute Software’s 2023 Resilience Index reveals how challenging it is to excel at the comply-to-connect trend. Balancing security and cyber-resilience is the goal, and the Index provides a useful roadmap. Cyber-resilience, like zero trust, is an ongoing framework that adapts to an organization’s changing needs.
Every CEO and CISO VentureBeat interviewed at RSAC 2023 said employee- and company-owned endpoint devices are the fastest-moving, hardest-to-protect threat surfaces. With the rising risk of gen AI-based attacks, resilient, self-healing endpoints that can regenerate operating systems and configurations are the future of endpoint security.
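The self-healing pattern can be sketched in a few lines. The example below is a minimal illustration, assuming a hypothetical restore_baseline() remediation hook and a handful of placeholder file paths: the endpoint hashes its critical configuration against a known-good baseline and regenerates anything that has drifted. Firmware-embedded agents do far more than this, but the detect-then-regenerate loop is the core idea.

```python
import hashlib
from pathlib import Path

# Illustrative critical files; a real agent tracks far more state, including OS images.
CRITICAL_FILES = ["/etc/ssh/sshd_config", "/etc/resolv.conf"]

def file_hash(path: str) -> str:
    """Return the SHA-256 digest of a file, or '' if it is missing or unreadable."""
    try:
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()
    except OSError:
        return ""

def capture_baseline() -> dict[str, str]:
    """Record known-good hashes at provisioning time."""
    return {p: file_hash(p) for p in CRITICAL_FILES}

def detect_drift(baseline: dict[str, str]) -> list[str]:
    """Return the paths whose current hash no longer matches the baseline."""
    return [p for p in CRITICAL_FILES if file_hash(p) != baseline[p]]

def restore_baseline(paths: list[str]) -> None:
    """Hypothetical remediation hook: reimage or re-pull known-good configurations."""
    for p in paths:
        print(f"[self-heal] regenerating {p} from the golden image")

if __name__ == "__main__":
    baseline = capture_baseline()   # in practice, captured at provisioning and stored off-device
    drifted = detect_drift(baseline)
    if drifted:
        restore_baseline(drifted)
```

In practice the baseline and the remediation logic live below the OS so a compromised endpoint can’t simply disable them.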
Five ways CISOs and their teams can prepare
Central to preparing for gen AI-based attacks is building muscle memory from every breach or intrusion attempt at scale, using AI and machine learning (ML) algorithms that learn from each one. Here are five ways CISOs and their teams are preparing for gen AI-based attacks.
Securing generative AI and ChatGPT sessions in the browser
Despite the risk of confidential data leaking into LLMs, organizations are eager to boost productivity with gen AI and ChatGPT. VentureBeat’s interviews with CISOs reveal that these professionals are split on how to define AI governance. Any solution to this problem must secure access at the browser, app and API levels to be effective.
Several startups and larger cybersecurity vendors are working on solutions in this area. Nightfall AI’s recent announcement of an innovative security protocol is noteworthy. The company’s customizable data rules and remediation insights help users self-correct. The platform gives CISOs visibility and control so they can use AI while ensuring data security.
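As a rough illustration of what browser-, app- and API-level controls look like in practice (not Nightfall’s actual protocol), the sketch below screens outbound prompts for obviously sensitive patterns before they reach an LLM endpoint. The regexes and the redact-or-forward policy are assumptions made for the example; production data loss prevention relies on far richer detectors.

```python
import re

# Illustrative patterns only; real DLP combines many detectors and ML classifiers.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def redact(prompt: str) -> str:
    """Replace sensitive matches so a sanitized prompt can still be sent."""
    for pattern in SENSITIVE_PATTERNS.values():
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def submit(prompt: str) -> str:
    """Gate a prompt at the app layer: log, redact and only then forward."""
    findings = scan_prompt(prompt)
    if findings:
        # Policy choice for this sketch: redact and log rather than block outright.
        print(f"[dlp] redacting {findings} before forwarding")
        prompt = redact(prompt)
    return prompt  # hand the sanitized prompt to the LLM client here

print(submit("Summarize this contract for customer SSN 123-45-6789"))
```

The same gate can sit in a browser extension, in a reverse proxy in front of the LLM API, or inside the application itself.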
Always scanning for new attack vectors and types of compromise
SOC teams are seeing more sophisticated social engineering, phishing, malware and business email compromise (BEC) attacks that they attribute to gen AI. While attacks on LLMs and AI apps are nascent today, CISOs are already doubling down on zero trust to reduce these risks.
That includes continuously monitoring and analyzing gen AI traffic patterns to detect anomalies that could indicate emerging attacks and regularly testing and red-teaming systems in development to uncover potential vulnerabilities. While zero trust can’t eliminate all risks, it can help make organizations more resilient against gen AI threats.
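Baselining gen AI traffic doesn’t have to start complicated. The sketch below assumes prompt logs are already being collected from an LLM gateway and flags users whose request volume spikes far above their own rolling baseline; production detectors layer on much richer behavioral signals.

```python
from statistics import mean, pstdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag a value more than `threshold` standard deviations above the user's baseline."""
    if len(history) < 10:            # not enough history to judge
        return False
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return current > mu * 2      # flat baseline: flag a doubling
    return (current - mu) / sigma > threshold

# Example: daily prompt counts for one user, pulled from an LLM gateway log (assumed source).
baseline_counts = [12, 9, 15, 11, 14, 10, 13, 12, 11, 14]
today = 85
if is_anomalous(baseline_counts, today):
    print("Possible abuse or credential theft: route to SOC for review")
```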
Finding and closing gaps and errors in microsegmentation
Gen AI’s potential to improve microsegmentation, a cornerstone of zero trust, is already happening thanks to startups’ ingenuity. Nearly every microsegmentation provider is fast-tracking DevOps efforts.
Leading vendors with deep AI and ML expertise include Akamai, Airgap Networks, AlgoSec, Cisco, ColorTokens, Elisity, Fortinet, Illumio, Microsoft Azure, Onclave Networks, Palo Alto Networks, VMware, Zero Networks and Zscaler.
One of the most innovative startups in microsegmentation is Airgap Networks, named one of the 20 best zero-trust startups of 2023. Airgap’s agentless approach reduces the attack surface of every network endpoint, and it can segment every endpoint across an enterprise while integrating into an existing network with no device changes, downtime or hardware upgrades.
Airgap Networks also introduced its Zero Trust Firewall (ZTFW) with ThreatGPT, which uses graph databases and GPT-3 models to help SecOps teams gain new threat insights. The GPT-3 models analyze natural language queries and identify security threats, while graph databases provide contextual intelligence on endpoint traffic relationships.
“With highly accurate asset discovery, agentless microsegmentation and secure access, Airgap offers a wealth of intelligence to combat evolving threats,” Airgap CEO Ritesh Agrawal told VentureBeat. “What customers need now is an easy way to harness that power without any programming. And that’s the beauty of ThreatGPT — the sheer data-mining intelligence of AI coupled with an easy, natural language interface. It’s a game-changer for security teams.”
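The gap-finding step itself can be expressed simply. The sketch below is a simplified illustration rather than any vendor’s implementation: it compares observed east-west flows (for example, exported from NetFlow or an agentless traffic sensor) against a declared segmentation policy and reports flows that shouldn’t exist. The segment names and flow records are invented for the example.

```python
# Declared policy: which source segments may talk to which destination segments.
ALLOWED = {
    ("web", "app"),
    ("app", "db"),
}

# Observed flows, e.g. exported from NetFlow or a traffic sensor (assumed data source).
observed_flows = [
    {"src_segment": "web", "dst_segment": "app", "port": 443},
    {"src_segment": "web", "dst_segment": "db", "port": 5432},    # policy gap
    {"src_segment": "guest", "dst_segment": "db", "port": 5432},  # policy gap
]

def find_gaps(flows: list[dict]) -> list[dict]:
    """Return observed flows that the segmentation policy does not allow."""
    return [f for f in flows if (f["src_segment"], f["dst_segment"]) not in ALLOWED]

for gap in find_gaps(observed_flows):
    print(f"Unexpected flow: {gap['src_segment']} -> {gap['dst_segment']} on port {gap['port']}")
```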
Guarding against generative AI-based supply chain attacks
Security is often tested right before deployment, at the end of the software development lifecycle (SDLC). In an era of emerging gen AI threats, security must be pervasive throughout the SDLC, with continuous testing and verification. API security must also be a priority, and API testing and security monitoring should be automated in all DevOps pipelines.
While not foolproof against new gen AI threats, these practices significantly raise the barrier and enable quick threat detection. Integrating security across the SDLC and improving API defenses will help enterprises thwart AI-powered threats.
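One example of the kind of automated API check that can run in a DevOps pipeline, assuming the service publishes an OpenAPI document: fail the build if any operation is exposed without a security requirement. Real API security testing goes much further, with fuzzing, authentication-abuse tests and schema validation, but even this simple gate catches obvious gaps before deployment.

```python
import sys

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def unauthenticated_operations(spec: dict) -> list[str]:
    """List OpenAPI operations that declare no security requirement at any level."""
    global_security = spec.get("security", [])
    gaps = []
    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method in HTTP_METHODS and not op.get("security", global_security):
                gaps.append(f"{method.upper()} {path}")
    return gaps

# In a pipeline this spec would be loaded from the build's generated openapi.json.
example_spec = {
    "security": [{"apiKeyAuth": []}],
    "paths": {
        "/orders": {"get": {}},                       # inherits global security: fine
        "/admin/export": {"post": {"security": []}},  # explicitly unauthenticated: flagged
    },
}

gaps = unauthenticated_operations(example_spec)
if gaps:
    print("Unauthenticated API operations:\n  " + "\n  ".join(gaps))
    sys.exit(1)  # fail the pipeline stage
```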
Taking a zero-trust approach to every generative AI app, platform, tool and endpoint
A zero-trust approach to every interaction with AI tools, apps and platforms and the endpoints they rely on is a must-have in any CISO’s playbook. Continuous monitoring and dynamic access controls must be in place to provide the granular visibility needed to enforce least privilege access and always-on verification of users, devices and the data they’re using, both at rest and in transit.
CISOs are most worried about how gen AI will bring new attack vectors they’re unprepared to protect against. For enterprise LLMs, protecting against query attacks, prompt injection, model manipulation and data poisoning is a high priority.
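The always-on verification step can be pictured as a policy decision made on every request. In the sketch below, the roles, device-posture checks and data classifications are invented for illustration: a request to a gen AI tool is allowed only when the device is compliant and the combination of role, data class and tool is explicitly permitted.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str
    device_compliant: bool      # e.g. disk encrypted, EDR agent healthy
    data_classification: str    # classification of the data in the prompt
    tool: str

# Illustrative least-privilege policy: which roles may send which data classes to which tools.
POLICY = {
    ("analyst", "internal", "approved_llm"),
    ("engineer", "public", "approved_llm"),
}

def allow(req: Request) -> bool:
    """Grant access only when device posture and least-privilege policy both pass."""
    if not req.device_compliant:
        return False
    return (req.user_role, req.data_classification, req.tool) in POLICY

print(allow(Request("analyst", True, "internal", "approved_llm")))    # True
print(allow(Request("analyst", True, "restricted", "approved_llm")))  # False: data class not permitted
```

A real deployment would pull these signals from identity, device management and data classification systems and log every decision for audit.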
Preparing for generative AI attacks with zero trust
CISOs, CIOs and their teams are facing a challenging problem today. Do gen AI tools like ChatGPT get free rein in their organizations to deliver greater productivity, or are they reined in and controlled, and if so, by how much? Samsung’s failure to protect its IP is still fresh in the minds of many board members.
One thing everyone agrees on, from the board level to SOC teams, is that gen AI-based attacks are increasing. Yet no board wants to jump into capital expense budgeting, especially given inflation and rising interest rates. The answer many are arriving at is accelerating zero-trust initiatives. While an effective zero-trust framework won’t stop gen AI attacks completely, it can help reduce their blast radius and establish a first line of defense in protecting identities and privileged access credentials.
Author: Louis Columbus
Source: Venturebeat
Reviewed By: Editorial Team