AI & RoboticsNews

How FraudGPT presages the future of weaponized AI

FraudGPT, a new subscription-based generative AI tool for crafting malicious cyberattacks, signals a new era of attack tradecraft. Discovered by Netenrich’s threat research team in July 2023 circulating on the dark web’s Telegram channels, it has the potential to democratize weaponized generative AI at scale.

Designed to automate everything from writing malicious code and creating undetectable malware to writing convincing phishing emails, FraudGPT puts advanced attack methods in the hands of inexperienced attackers. 

Leading cybersecurity vendors including CrowdStrike, IBM Security, Ivanti, Palo Alto Networks and Zscaler have warned that attackers, including state-sponsored cyberterrorist units, began weaponizing generative AI even before ChatGPT was released in late November 2022.

VentureBeat recently interviewed Sven Krasser, chief scientist and senior vice president at CrowdStrike, about how attackers are speeding up efforts to weaponize LLMs and generative AI. Krasser noted that cybercriminals are adopting LLM technology for phishing and malware, but that “while this increases the speed and the volume of attacks that an adversary can mount, it does not significantly change the quality of attacks.”   

Krasser says that the weaponization of AI illustrates why “cloud-based security that correlates signals from across the globe using AI is also an effective defense against these new threats. Succinctly put: Generative AI is not pushing the bar any higher when it comes to these malicious techniques, but it is raising the average and making it easier for less skilled adversaries to be more effective.”

FraudGPT, a cyberattacker’s starter kit, capitalizes on proven attack tools, such as custom hacking guides, vulnerability mining and zero-day exploits. None of the tools in FraudGPT requires advanced technical expertise.

For $200 a month or $1,700 a year, FraudGPT provides subscribers a baseline level of tradecraft a beginning attacker would otherwise have to create. Capabilities include:

FraudGPT signals the start of a new, more dangerous and democratized era of weaponized generative AI tools and apps. The current iteration doesn’t reflect the advanced tradecraft that nation-state attack teams and large-scale operations like the North Korean Army’s elite Reconnaissance General Bureau’s cyberwarfare arm, Department 121, are creating and using. But what FraudGPT and the like lack in generative AI depth, they more than make up for in ability to train the next generation of attackers.

With its subscription model, in months FraudGPT could have more users than the most advanced nation-state cyberattack armies, including the likes of Department 121, which alone has approximately 6,800 cyberwarriors, according to the New York Times — 1,700 hackers in seven different units and 5,100 technical support personnel. 

While FraudGPT may not pose as imminent a threat as the larger, more sophisticated nation-state groups, its accessibility to novice attackers will translate into an exponential increase in intrusion and breach attempts, starting with the softest targets, such as in education, healthcare and manufacturing. 

As Netenrich principal threat hunter John Bambenek told VentureBeat, FraudGPT has probably been built by taking open-source AI models and removing ethical constraints that prevent misuse. While it is likely still in an early stage of development, Bambenek warns that its appearance underscores the need for continuous innovation in AI-powered defenses to counter hostile use of AI.

Given the proliferating number of generative AI-based chatbots and LLMs, red-teaming exercises are essential for understanding these technologies’ weaknesses and erecting guardrails to try to prevent them from being used to create cyberattack tools. Microsoft recently introduced a guide for customers building applications using Azure OpenAI models that provides a framework for getting started with red-teaming.  

This past week DEF CON hosted the first public generative AI red team event, partnering with AI Village, Humane Intelligence and SeedAI. Models provided by Anthropic, Cohere, Google, Hugging Face, Meta, Nvidia, OpenAI and Stability were tested on an evaluation platform developed by Scale AI. Rumman Chowdhury, cofounder of the nonprofit Humane Intelligence and co-organizer of this Generative Red Team Challenge, wrote in a recent Washington Post article on red-teaming AI chatbots and LLMs that “every time I’ve done this, I’ve seen something I didn’t expect to see, learned something I didn’t know.” 

It is crucial to red-team chatbots and get ahead of risks to ensure these nascent technologies evolve ethically instead of going rogue. “Professional red teams are trained to find weaknesses and exploit loopholes in computer systems. But with AI chatbots and image generators, the potential harms to society go beyond security flaws,” said Chowdhury.

Generative AI-based cyberattack tools are driving cybersecurity vendors and the enterprises they serve to pick up the pace and stay competitive in the arms race. As FraudGPT increases the number of cyberattackers and accelerates their development, one sure result is that identities will be even more under siege

Generative AI poses a real threat to identity-based security. It has already proven effective in impersonating CEOs with deep-fake technology and orchestrating social engineering attacks to harvest privileged access credentials using pretexting. Here are five ways FraudGPT is presaging the future of weaponized AI: 

FraudGPT demonstrates generative AI’s ability to support convincing pretexting scenarios that can mislead victims into compromising their identities and access privileges and their corporate networks. For example, attackers ask ChatGPT to write science fiction stories about how a successful social engineering or phishing strategy worked, tricking the LLMs into providing attack guidance. 

VentureBeat has learned that cybercrime gangs and nation-states routinely query ChatGPT and other LLMs in foreign languages such that the model doesn’t reject the context of a potential attack scenario as effectively as it would in English. There are groups on the dark web devoted to prompt engineering that teaches attackers how to side-step guardrails in LLMs to create social engineering attacks and supporting emails.

While it is a challenge to spot these attacks, cybersecurity leaders in AI, machine learning and generative AI stand the best chance of keeping their customers at parity in the arms race. Leading vendors with deep AI, ML and generative AI expertise include ArticWolf, Cisco, CrowdStrike, CyberArk, Cybereason, Ivanti, SentinelOne, Microsoft, McAfee, Palo Alto Networks, Sophos and VMWare Carbon Black.

FraudGPT has proven capable of generating malicious scripts and code tailored to a specific victim’s network, endpoints and broader IT environment. Attackers just starting out can get up to speed quickly on the latest threatcraft using generative AI-based systems like FraudGPT to learn and then deploy attack scenarios. That’s why organizations must go all-in on cyber-hygiene, including protecting endpoints.

AI-generated malware can evade longstanding cybersecurity systems not designed to identify and stop this threat. Malware-free intrusion accounts for 71% of all detections indexed by CrowdStrike’s Threat Graph, further reflecting attackers’ growing sophistication even before the widespread adoption of generative AI. Recent new product and service announcements across the industry show what a high priority battling malware is. Amazon Web Services, Bitdefender, Cisco, CrowdStrike, Google, IBM, Ivanti, Microsoft and Palo Alto Networks have released AI-based platform enhancements to identify malware attack patterns and thus reduce false positives.

Generative AI will shrink the time it takes to complete manual research to find new vulnerabilities, hunt for and harvest compromised credentials, learn new hacking tools and master the skills needed to launch sophisticated cybercrime campaigns. Attackers at all skill levels will use it to discover unprotected endpoints, attack unprotected threat surfaces and launch attack campaigns based on insights gained from simple prompts. 

Along with identities, endpoints will see more attacks. CISOs tell VentureBeat that self-healing endpoints are table stakes, especially in mixed IT and operational technology (OT) environments that rely on IoT sensors. In a recent series of interviews, CISOs told VentureBeat that self-healing endpoints are also core to their consolidation strategies and essential for improving cyber-resiliency. Leading self-healing endpoint vendors with enterprise customers include Absolute SoftwareCiscoCrowdStrike, Cybereason, ESETIvantiMalwarebytesMicrosoft Defender 365Sophos and Trend Micro.  

Weaponized generative AI is still in its infancy, and FraudGPT is its baby steps. More advanced — and lethal — tools are coming. These will use generative AI to evade endpoint detection and response systems and create malware variants that can avoid static signature detection. 

Of the five factors signaling the future of weaponized AI, attackers’ ability to use generative AI to out-innovate cybersecurity vendors and enterprises is the most persistent strategic threat. That’s why interpreting behaviors, identifying anomalies based on real-time telemetry data across all cloud instances and monitoring every endpoint are table stakes.

Cybersecurity vendors must prioritize unifying endpoints and identities to protect endpoint attack surfaces. Using AI to secure identities and endpoints is essential. Many CISOs are heading toward combining an offense-driven strategy with tech consolidation to gain a more real-time, unified view of all threat surfaces while making tech stacks more efficient. Ninety-six percent of CISOs plan to consolidate their security platforms, with 63% saying extended detection and response (XDR) is their top choice for a solution.

Leading vendors providing XDR platforms include CrowdStrike, MicrosoftPalo Alto NetworksTehtris and Trend Micro. Meanwhile, EDR vendors are accelerating their product roadmaps to deliver new XDR releases to stay competitive in the growing market.

FraudGPT and future weaponized generative AI apps and tools will be designed to reduce detection and attribution to the point of anonymity. Because no hard coding is involved, security teams will struggle to attribute AI-driven attacks to a specific threat group or campaign based on forensic artifacts or evidence. More anonymity and less detection will translate into longer dwell times and allow attackers to execute “low and slow” attacks that typify advanced persistent threat (APT) attacks on high-value targets. Weaponized generative AI will make that available to every attacker eventually. 

SecOps and the security teams supporting them need to consider how they can use AI and ML to identify subtle indicators of an attack flow driven by generative AI, even if the content appears legitimate. Leading vendors who can help protect against this threat include Blackberry Security (Cylance), CrowdStrike, Darktrace, Deep Instinct, Ivanti, SentinelOne, Sift and Vectra.

FraudGPT signals the start of a new era of weaponized generative AI, where the basic tools of cyberattack are available to any attacker at any level of expertise and knowledge. With thousands of potential subscribers, including nation-states, FraudGPT’s greatest threat is how quickly it will expand the global base of attackers looking to prey on unprotected soft targets in education, health care, government and manufacturing.

With CISOs being asked to get more done with less, and many focusing on consolidating their tech stacks for greater efficacy and visibility, it’s time to think about how those dynamics can drive greater cyber-resilience. It’s time to go on the offensive with generative AI and keep pace in an entirely new, faster-moving arms race.

Head over to our on-demand library to view sessions from VB Transform 2023. Register Here


FraudGPT, a new subscription-based generative AI tool for crafting malicious cyberattacks, signals a new era of attack tradecraft. Discovered by Netenrich’s threat research team in July 2023 circulating on the dark web’s Telegram channels, it has the potential to democratize weaponized generative AI at scale.

Designed to automate everything from writing malicious code and creating undetectable malware to writing convincing phishing emails, FraudGPT puts advanced attack methods in the hands of inexperienced attackers. 

Weaponized AI apps and tools are dark-web best sellers   

Leading cybersecurity vendors including CrowdStrike, IBM Security, Ivanti, Palo Alto Networks and Zscaler have warned that attackers, including state-sponsored cyberterrorist units, began weaponizing generative AI even before ChatGPT was released in late November 2022.

VentureBeat recently interviewed Sven Krasser, chief scientist and senior vice president at CrowdStrike, about how attackers are speeding up efforts to weaponize LLMs and generative AI. Krasser noted that cybercriminals are adopting LLM technology for phishing and malware, but that “while this increases the speed and the volume of attacks that an adversary can mount, it does not significantly change the quality of attacks.”   

Event

VB Transform 2023 On-Demand

Did you miss a session from VB Transform 2023? Register to access the on-demand library for all of our featured sessions.

 


Register Now

Krasser says that the weaponization of AI illustrates why “cloud-based security that correlates signals from across the globe using AI is also an effective defense against these new threats. Succinctly put: Generative AI is not pushing the bar any higher when it comes to these malicious techniques, but it is raising the average and making it easier for less skilled adversaries to be more effective.”

Defining FraudGPT and weaponized AI

FraudGPT, a cyberattacker’s starter kit, capitalizes on proven attack tools, such as custom hacking guides, vulnerability mining and zero-day exploits. None of the tools in FraudGPT requires advanced technical expertise.

For $200 a month or $1,700 a year, FraudGPT provides subscribers a baseline level of tradecraft a beginning attacker would otherwise have to create. Capabilities include:

  • Writing phishing emails and social engineering content
  • Creating exploits, malware and hacking tools
  • Discovering vulnerabilities, compromised credentials and cardable sites
  • Providing advice on hacking techniques and cybercrime
FraudGPT
Original advertisement for FraudGPT offers video proof of its effectiveness, an overview of its features, and the claim of over 3,000 subscriptions sold as of July 2023. Source: Netenrich blog, FraudGPT: The Villain Avatar of ChatGPT

FraudGPT signals the start of a new, more dangerous and democratized era of weaponized generative AI tools and apps. The current iteration doesn’t reflect the advanced tradecraft that nation-state attack teams and large-scale operations like the North Korean Army’s elite Reconnaissance General Bureau’s cyberwarfare arm, Department 121, are creating and using. But what FraudGPT and the like lack in generative AI depth, they more than make up for in ability to train the next generation of attackers.

With its subscription model, in months FraudGPT could have more users than the most advanced nation-state cyberattack armies, including the likes of Department 121, which alone has approximately 6,800 cyberwarriors, according to the New York Times — 1,700 hackers in seven different units and 5,100 technical support personnel. 

While FraudGPT may not pose as imminent a threat as the larger, more sophisticated nation-state groups, its accessibility to novice attackers will translate into an exponential increase in intrusion and breach attempts, starting with the softest targets, such as in education, healthcare and manufacturing. 

As Netenrich principal threat hunter John Bambenek told VentureBeat, FraudGPT has probably been built by taking open-source AI models and removing ethical constraints that prevent misuse. While it is likely still in an early stage of development, Bambenek warns that its appearance underscores the need for continuous innovation in AI-powered defenses to counter hostile use of AI.

Weaponized generative AI driving a rapid rise in red-teaming 

Given the proliferating number of generative AI-based chatbots and LLMs, red-teaming exercises are essential for understanding these technologies’ weaknesses and erecting guardrails to try to prevent them from being used to create cyberattack tools. Microsoft recently introduced a guide for customers building applications using Azure OpenAI models that provides a framework for getting started with red-teaming.  

This past week DEF CON hosted the first public generative AI red team event, partnering with AI Village, Humane Intelligence and SeedAI. Models provided by Anthropic, Cohere, Google, Hugging Face, Meta, Nvidia, OpenAI and Stability were tested on an evaluation platform developed by Scale AI. Rumman Chowdhury, cofounder of the nonprofit Humane Intelligence and co-organizer of this Generative Red Team Challenge, wrote in a recent Washington Post article on red-teaming AI chatbots and LLMs that “every time I’ve done this, I’ve seen something I didn’t expect to see, learned something I didn’t know.” 

It is crucial to red-team chatbots and get ahead of risks to ensure these nascent technologies evolve ethically instead of going rogue. “Professional red teams are trained to find weaknesses and exploit loopholes in computer systems. But with AI chatbots and image generators, the potential harms to society go beyond security flaws,” said Chowdhury.

Five ways FraudGPT presages the future of weaponized AI

Generative AI-based cyberattack tools are driving cybersecurity vendors and the enterprises they serve to pick up the pace and stay competitive in the arms race. As FraudGPT increases the number of cyberattackers and accelerates their development, one sure result is that identities will be even more under siege

Generative AI poses a real threat to identity-based security. It has already proven effective in impersonating CEOs with deep-fake technology and orchestrating social engineering attacks to harvest privileged access credentials using pretexting. Here are five ways FraudGPT is presaging the future of weaponized AI: 

1. Automated social engineering and phishing attacks

FraudGPT demonstrates generative AI’s ability to support convincing pretexting scenarios that can mislead victims into compromising their identities and access privileges and their corporate networks. For example, attackers ask ChatGPT to write science fiction stories about how a successful social engineering or phishing strategy worked, tricking the LLMs into providing attack guidance. 

VentureBeat has learned that cybercrime gangs and nation-states routinely query ChatGPT and other LLMs in foreign languages such that the model doesn’t reject the context of a potential attack scenario as effectively as it would in English. There are groups on the dark web devoted to prompt engineering that teaches attackers how to side-step guardrails in LLMs to create social engineering attacks and supporting emails.

FraudGPT
An example of how FraudGPT can be used for planning a business email compromise (BEC) phishing attack. Source: Netenrich blog, FraudGPT: The Villain Avatar of ChatGPT

While it is a challenge to spot these attacks, cybersecurity leaders in AI, machine learning and generative AI stand the best chance of keeping their customers at parity in the arms race. Leading vendors with deep AI, ML and generative AI expertise include ArticWolf, Cisco, CrowdStrike, CyberArk, Cybereason, Ivanti, SentinelOne, Microsoft, McAfee, Palo Alto Networks, Sophos and VMWare Carbon Black.

2. AI-generated malware and exploits

FraudGPT has proven capable of generating malicious scripts and code tailored to a specific victim’s network, endpoints and broader IT environment. Attackers just starting out can get up to speed quickly on the latest threatcraft using generative AI-based systems like FraudGPT to learn and then deploy attack scenarios. That’s why organizations must go all-in on cyber-hygiene, including protecting endpoints.

AI-generated malware can evade longstanding cybersecurity systems not designed to identify and stop this threat. Malware-free intrusion accounts for 71% of all detections indexed by CrowdStrike’s Threat Graph, further reflecting attackers’ growing sophistication even before the widespread adoption of generative AI. Recent new product and service announcements across the industry show what a high priority battling malware is. Amazon Web Services, Bitdefender, Cisco, CrowdStrike, Google, IBM, Ivanti, Microsoft and Palo Alto Networks have released AI-based platform enhancements to identify malware attack patterns and thus reduce false positives.

3. Automated discovery of cybercrime resources

Generative AI will shrink the time it takes to complete manual research to find new vulnerabilities, hunt for and harvest compromised credentials, learn new hacking tools and master the skills needed to launch sophisticated cybercrime campaigns. Attackers at all skill levels will use it to discover unprotected endpoints, attack unprotected threat surfaces and launch attack campaigns based on insights gained from simple prompts. 

Along with identities, endpoints will see more attacks. CISOs tell VentureBeat that self-healing endpoints are table stakes, especially in mixed IT and operational technology (OT) environments that rely on IoT sensors. In a recent series of interviews, CISOs told VentureBeat that self-healing endpoints are also core to their consolidation strategies and essential for improving cyber-resiliency. Leading self-healing endpoint vendors with enterprise customers include Absolute SoftwareCiscoCrowdStrike, Cybereason, ESETIvantiMalwarebytesMicrosoft Defender 365Sophos and Trend Micro.  

4. AI-driven evasion of defenses is just starting, and we haven’t seen anything yet

Weaponized generative AI is still in its infancy, and FraudGPT is its baby steps. More advanced — and lethal — tools are coming. These will use generative AI to evade endpoint detection and response systems and create malware variants that can avoid static signature detection. 

Of the five factors signaling the future of weaponized AI, attackers’ ability to use generative AI to out-innovate cybersecurity vendors and enterprises is the most persistent strategic threat. That’s why interpreting behaviors, identifying anomalies based on real-time telemetry data across all cloud instances and monitoring every endpoint are table stakes.

Cybersecurity vendors must prioritize unifying endpoints and identities to protect endpoint attack surfaces. Using AI to secure identities and endpoints is essential. Many CISOs are heading toward combining an offense-driven strategy with tech consolidation to gain a more real-time, unified view of all threat surfaces while making tech stacks more efficient. Ninety-six percent of CISOs plan to consolidate their security platforms, with 63% saying extended detection and response (XDR) is their top choice for a solution.

Leading vendors providing XDR platforms include CrowdStrike, MicrosoftPalo Alto NetworksTehtris and Trend Micro. Meanwhile, EDR vendors are accelerating their product roadmaps to deliver new XDR releases to stay competitive in the growing market.

5. Difficulty of detection and attribution

FraudGPT and future weaponized generative AI apps and tools will be designed to reduce detection and attribution to the point of anonymity. Because no hard coding is involved, security teams will struggle to attribute AI-driven attacks to a specific threat group or campaign based on forensic artifacts or evidence. More anonymity and less detection will translate into longer dwell times and allow attackers to execute “low and slow” attacks that typify advanced persistent threat (APT) attacks on high-value targets. Weaponized generative AI will make that available to every attacker eventually. 

SecOps and the security teams supporting them need to consider how they can use AI and ML to identify subtle indicators of an attack flow driven by generative AI, even if the content appears legitimate. Leading vendors who can help protect against this threat include Blackberry Security (Cylance), CrowdStrike, Darktrace, Deep Instinct, Ivanti, SentinelOne, Sift and Vectra.

Welcome to the new AI arms race 

FraudGPT signals the start of a new era of weaponized generative AI, where the basic tools of cyberattack are available to any attacker at any level of expertise and knowledge. With thousands of potential subscribers, including nation-states, FraudGPT’s greatest threat is how quickly it will expand the global base of attackers looking to prey on unprotected soft targets in education, health care, government and manufacturing.

With CISOs being asked to get more done with less, and many focusing on consolidating their tech stacks for greater efficacy and visibility, it’s time to think about how those dynamics can drive greater cyber-resilience. It’s time to go on the offensive with generative AI and keep pace in an entirely new, faster-moving arms race.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.


Author: Louis Columbus
Source: Venturebeat

Related posts
AI & RoboticsNews

DeepSeek’s first reasoning model R1-Lite-Preview turns heads, beating OpenAI o1 performance

AI & RoboticsNews

Snowflake beats Databricks to integrating Claude 3.5 directly

AI & RoboticsNews

OpenScholar: The open-source A.I. that’s outperforming GPT-4o in scientific research

DefenseNews

US Army fires Precision Strike Missile in salvo shot for first time

Sign up for our Newsletter and
stay informed!