New study: Threat actors harness generative AI to amplify and refine email attacks

A study by email security platform Abnormal Security has revealed that cybercriminals are increasingly using generative AI, including ChatGPT, to craft highly authentic-looking and persuasive email attacks.

The company recently performed a comprehensive analysis to assess how likely it was that novel email attacks intercepted by its platform had been generated with AI. The investigation found that threat actors now leverage generative AI tools to craft email attacks that are becoming progressively more realistic and convincing.

Security leaders have expressed ongoing concerns about the impact of AI-generated email attacks since the emergence of ChatGPT. Abnormal Security’s analysis found that AI is now being used to create new attack methods, including credential phishing, more advanced versions of the traditional business email compromise (BEC) scheme, and vendor fraud.

According to the company, email recipients have traditionally relied on identifying typos and grammatical errors to detect phishing attacks. However, generative AI can help create flawlessly written emails that closely resemble legitimate communication. As a result, it becomes increasingly challenging for employees to distinguish between authentic and fraudulent messages.



Cybercriminals writing unique content

Business email compromise (BEC) actors often use templates to write and launch their email attacks, Dan Shiebler, head of ML at Abnormal Security, told VentureBeat.

“Because of this, many traditional BEC attacks feature common or recurring content that can be detected by email security technology based on pre-set policies,” he said. “But with generative AI tools like ChatGPT, cybercriminals are writing a greater variety of unique content, based on slight differences in their generative AI prompts. This makes detection based on known attack indicator matches much more difficult while also allowing them to scale the volume of their attacks.”
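As a rough illustration of why that matters, the sketch below (with invented indicator phrases, not Abnormal’s actual policies) shows how matching against known template text catches a recycled BEC message but misses an AI-reworded variant with the same intent.

```python
# Illustrative sketch only: a static "known indicator" policy check.
# The phrases below are invented for this example, not real detection rules.
import re

KNOWN_BEC_INDICATORS = [
    r"kindly update the wire instructions",
    r"are you available\? i need a quick favou?r",
    r"purchase .* gift cards and send me the codes",
]

def matches_known_indicator(email_body: str) -> bool:
    """Return True if the email body contains any known template phrase."""
    body = email_body.lower()
    return any(re.search(pattern, body) for pattern in KNOWN_BEC_INDICATORS)

template_attack = "Kindly update the wire instructions for today's payment."
rewritten_attack = "Please route today's payment to the revised account details below."

print(matches_known_indicator(template_attack))   # True: recycled template text is caught
print(matches_known_indicator(rewritten_attack))  # False: same intent, novel wording slips through
```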

Abnormal’s research further revealed that threat actors go beyond traditional BEC attacks and leverage tools similar to ChatGPT to impersonate vendors. These vendor email compromise (VEC) attacks exploit the existing trust between vendors and customers, making them a highly effective social engineering technique.

Interactions with vendors typically involve discussions about invoices and payments, which adds a layer of complexity to identifying attacks that imitate these exchanges. The absence of conspicuous red flags such as typos compounds the challenge further.

“While we are still doing full analysis to understand the extent of AI-generated email attacks, Abnormal has seen a definite increase in the number of attacks that have AI indicators as a percentage of all attacks, particularly over the past few weeks,” Shiebler told VentureBeat.

Creating undetectable phishing attacks through generative AI

According to Shiebler, generative AI poses a significant threat in email attacks because it enables threat actors to craft highly sophisticated content. This raises the likelihood of successfully deceiving targets into clicking malicious links or complying with their instructions. For instance, using AI to compose email attacks eliminates the typographical and grammatical errors that are commonly associated with traditional BEC attacks and often used to identify them.

“It can also be used to create greater personalization,” Shiebler explained. “Imagine if threat actors were to input snippets of their victim’s email history or LinkedIn profile content within their ChatGPT queries. Emails will begin to show the typical context, language and tone that the victim expects, making BEC emails even more deceptive.”

The company noted that a decade ago, cybercriminals sought refuge in newly created domains. However, security tools quickly detected and blocked these malicious activities. In response, threat actors adjusted their tactics and turned to free webmail accounts such as Gmail and Outlook. Because these domains are also widely used for legitimate business communication, the attacks often evaded traditional security measures.

Generative AI follows a similar path, as employees now rely on platforms like ChatGPT and Google Bard for routine business communications. Consequently, it becomes impractical to indiscriminately block all AI-generated emails.

One such attack intercepted by Abnormal involved an email purportedly sent by “Meta for Business,” notifying the recipient that their Facebook Page had violated community standards and had been unpublished.

To rectify the situation, the email urged the recipient to click on a provided link to file an appeal. Unbeknownst to them, this link directed them to a phishing page designed to steal their Facebook credentials. Notably, the email displayed flawless grammar and successfully imitated the language typically associated with Meta for Business.

The company also highlighted the substantial challenge these meticulously crafted emails pose for human detection: Abnormal found that individuals are more susceptible to falling for attacks when the emails contain no grammatical errors or typos.

“AI-generated email attacks can mimic legitimate communications from both individuals and brands,” Shiebler added. “They’re written professionally, with a sense of formality that would be expected around a business matter, and in some cases they are signed by a named sender from a legitimate organization.”

Measures for detecting AI-generated text 

Shiebler advocates using AI itself as the most effective way to identify AI-generated emails.

Abnormal’s platform utilizes open-source large language models (LLMs) to evaluate the probability of each word based on its context. This enables the classification of emails that consistently align with AI-generated language. Two external AI detection tools, OpenAI Detector and GPTZero, are employed to validate these findings.

“We use a specialized prediction engine to analyze how likely an AI system will select each word in an email given the context to the left of that email,” said Shiebler. “If the words in the email have consistently high likelihood (meaning each word is highly aligned with what an AI model would say, more so than in human text), then we classify the email as possibly written by AI.”
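A minimal sketch of this kind of likelihood scoring is shown below, using GPT-2 from Hugging Face as a stand-in for the open-source models Abnormal employs. It is not the company’s engine, and the cutoff value is purely illustrative, not a published figure.

```python
# Sketch of per-token likelihood scoring with an open-source LM (GPT-2 here).
# This is not Abnormal's engine; the 0.25 cutoff is an arbitrary illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_token_likelihood(text: str) -> float:
    """Average probability the model assigns each token, given the text to its left."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits
    # Probability of each actual token under the model, conditioned on its left context.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    token_log_probs = log_probs.gather(2, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_log_probs.exp().mean().item()

email_body = (
    "Dear customer, we have detected unusual activity on your account. "
    "Please verify your details at the link below to avoid interruption of service."
)
score = mean_token_likelihood(email_body)
print("possibly AI-written" if score > 0.25 else "likely human-written", round(score, 3))
```

Text that the model finds consistently predictable scores higher, which is the signal Shiebler describes; human-written text tends to produce more low-probability tokens.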

However, the company acknowledges that this approach is not foolproof. Certain non-AI-generated emails, such as template-based marketing or sales outreach emails, may contain word sequences similar to AI-generated ones. Additionally, emails featuring common phrases, such as excerpts from the Bible or the Constitution, could result in false AI classifications.

“Not all AI-generated emails can be blocked, as there are many legitimate use cases where real employees use AI to create email content,” Shiebler added. “As such, the fact that an email has AI indicators must be used alongside many other signals to indicate malicious intent.”

Differentiating between legitimate and malicious content

To address this issue, Shiebler advises organizations to adopt modern solutions that detect contemporary threats, including highly sophisticated AI-generated attacks that closely resemble legitimate emails. He said that when adopting such solutions, it is important to ensure they can differentiate between legitimate AI-generated emails and those with malicious intent.

“Instead of looking for known indicators of compromise, which constantly change, solutions that use AI to baseline normal behavior across the email environment — including typical user-specific communication patterns, styles and relationships — will be able to then detect anomalies that may indicate a potential attack, no matter if it was created by a human or by AI,” he explained.
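The snippet below is a toy sketch of that idea, using made-up data and a single signal (a payment-related email from a sender the recipient has never corresponded with). Production systems of the kind Shiebler describes model far richer behavioral baselines, including style, timing, and relationships.

```python
# Toy sketch of behavioral baselining: learn who each user normally hears from,
# then flag payment-themed mail from an unfamiliar sender. All data is made up.
from collections import defaultdict

historical_mail = [
    ("alice@example.com", "vendor@supplier.com", "Invoice 4821 attached"),
    ("alice@example.com", "vendor@supplier.com", "Updated delivery schedule"),
    ("alice@example.com", "bob@example.com", "Meeting notes"),
]

# Baseline: the set of senders each recipient normally communicates with.
baseline = defaultdict(set)
for recipient, sender, _subject in historical_mail:
    baseline[recipient].add(sender)

PAYMENT_TERMS = ("invoice", "wire", "payment", "bank details")

def is_anomalous(recipient: str, sender: str, subject: str) -> bool:
    """Flag payment-related mail from a sender outside the recipient's baseline."""
    unfamiliar = sender not in baseline[recipient]
    payment_related = any(term in subject.lower() for term in PAYMENT_TERMS)
    return unfamiliar and payment_related

# A lookalike vendor domain ("suppIier" with a capital I) asking to change bank details.
print(is_anomalous("alice@example.com", "vendor@suppIier.com", "Updated bank details for payment"))  # True
print(is_anomalous("alice@example.com", "vendor@supplier.com", "Invoice 4822 attached"))             # False
```

Because the check keys on behavior rather than wording, it flags the lookalike-domain request regardless of whether a human or an AI wrote the text.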

He also advises organizations to maintain good cybersecurity practices, which include conducting ongoing security awareness training to ensure employees remain vigilant against BEC risks.

Furthermore, he said, implementing strategies such as password management and multi-factor authentication (MFA) will enable organizations to mitigate potential damage in the event of a successful attack.



Author: Victor Dey
Source: VentureBeat
