AI & RoboticsNews

How generative AI is creating new classes of security threats

generative AI

The promised AI revolution has arrived. OpenAI’s ChatGPT set a new record for the fastest-growing user base and the wave of generative AI has extended to other platforms, creating a massive shift in the technology world.

It’s also dramatically changing the threat landscape — and we’re starting to see some of these risks come to fruition.

Attackers are using AI to improve phishing and fraud. Meta’s 65-billion parameter language model got leaked, which will undoubtedly lead to new and improved phishing attacks. We see new prompt injection attacks on a daily basis.

Users are often putting business-sensitive data into AI/ML-based services, leaving security teams scrambling to support and control the use of these services. For example, Samsung engineers put proprietary code into ChatGPT to get help debugging it, leaking sensitive data. A survey by Fishbowl showed that 68% of people who are using ChatGPT for work aren’t telling their bosses about it.

Misuse of AI is increasingly on the minds of consumers, businesses, and even the government. The White House announced new investments in AI research and forthcoming public assessments and policies. The AI revolution is moving fast and has created four major classes of issues.

Attackers will likely adopt and engineer AI faster than defenders, giving them a clear advantage.  They will be able to launch sophisticated attacks powered by AI/ML at an incredible scale at low cost.

Social engineering attacks will be first to benefit from synthetic text, voice and images. Many of these attacks that require some manual effort — like phishing attempts that impersonate IRS or real estate agents prompting victims to wire money — will become automated.

Attackers will be able to use these technologies to create better malicious code and launch new, more effective attacks at scale. For example, they will be able to rapidly generate polymorphic code for malware that evades detection from signature-based systems.

One of AI’s pioneers, Geoffrey Hinton, made the news recently as he told the New York Times he regrets what he helped build because “It is hard to see how you can prevent the bad actors from using it for bad things.”

We’ve seen how quickly misinformation can spread thanks to social media. A University of Chicago Pearson Institute/AP-NORC Poll shows 91% of adults across the political spectrum believe misinformation is a problem and nearly half are worried they’ve spread it. Put a machine behind it, and social trust can erode cheaper and faster.

The current AI/ML systems based on large language models (LLMs) are inherently limited in their knowledge, and when they don’t know how to answer, they make things up. This is often referred to as “hallucinating,” an unintended consequence of this emerging technology. When we search for legitimate answers, a lack of accuracy is a huge problem.

This will betray human trust and create dramatic mistakes that have dramatic consequences. A mayor in Australia, for instance, says he may sue OpenAI for defamation after ChatGPT wrongly identified him as being jailed for bribery when he was actually the whistleblower in a case.

Over the next decade, we will see a new generation of attacks on AI/ML systems.

Attackers will influence the classifiers that systems use to bias models and control outputs. They’ll create malicious models that will be indistinguishable from the real models, which could cause real harm depending on how they’re used. Prompt injection attacks will become more common, too. Just a day after Microsoft introduced Bing Chat, a Stanford University student convinced the model to reveal its internal directives.

Attackers will kick off an arms race with adversarial ML tools that trick AI systems in various ways, poison the data they use or extract sensitive data from the model.

As more of our software code is generated by AI systems, attackers may be able to take advantage of inherent vulnerabilities that these systems inadvertently introduced to compromise applications at scale.

The costs of building and operating large-scale models will create monopolies and barriers to entry that will lead to externalities we may not be able to predict yet.

In the end, this will impact citizens and consumers in a negative way. Misinformation will become rampant, while social engineering attacks at scale will affect consumers who will have no means to protect themselves.

The federal government’s announcement that governance is forthcoming serves as a good start, but there’s so much ground to make up to get in front of this AI race.

The nonprofit Future of Life Institute published an open letter calling for a pause in AI innovation. It got plenty of press coverage, with the likes of Elon Musk joining the crowd of concerned parties, but hitting the pause button simply isn’t viable. Even Musk knows this — he has seemingly changed course and started his own AI company to compete.

It was always disingenuous to suggest innovation should be stifled. Attackers certainly won’t honor that request. We need more innovation and more action so that we can ensure that AI is used responsibly and ethically.

The silver lining is that this also creates opportunities for innovative approaches to security that use AI. We will see improvements in threat hunting and behavioral analytics, but these innovations will take time and need investment. Any new technology creates a paradigm shift, and things always get worse before they get better. We’ve gotten a taste of the dystopian possibilities when AI is used by the wrong people, but we must act now so that security professionals can develop strategies and react as large-scale issues arise.

At this point, we’re woefully unprepared for AI’s future.

Aakash Shah is CTO and cofounder at oak9.

Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More


The promised AI revolution has arrived. OpenAI’s ChatGPT set a new record for the fastest-growing user base and the wave of generative AI has extended to other platforms, creating a massive shift in the technology world.

It’s also dramatically changing the threat landscape — and we’re starting to see some of these risks come to fruition.

Attackers are using AI to improve phishing and fraud. Meta’s 65-billion parameter language model got leaked, which will undoubtedly lead to new and improved phishing attacks. We see new prompt injection attacks on a daily basis.

Users are often putting business-sensitive data into AI/ML-based services, leaving security teams scrambling to support and control the use of these services. For example, Samsung engineers put proprietary code into ChatGPT to get help debugging it, leaking sensitive data. A survey by Fishbowl showed that 68% of people who are using ChatGPT for work aren’t telling their bosses about it.

Event

Transform 2023

Join us in San Francisco on July 11-12, where top executives will share how they have integrated and optimized AI investments for success and avoided common pitfalls.

Register Now

Misuse of AI is increasingly on the minds of consumers, businesses, and even the government. The White House announced new investments in AI research and forthcoming public assessments and policies. The AI revolution is moving fast and has created four major classes of issues.

Asymmetry in the attacker-defender dynamic

Attackers will likely adopt and engineer AI faster than defenders, giving them a clear advantage.  They will be able to launch sophisticated attacks powered by AI/ML at an incredible scale at low cost.

Social engineering attacks will be first to benefit from synthetic text, voice and images. Many of these attacks that require some manual effort — like phishing attempts that impersonate IRS or real estate agents prompting victims to wire money — will become automated.

Attackers will be able to use these technologies to create better malicious code and launch new, more effective attacks at scale. For example, they will be able to rapidly generate polymorphic code for malware that evades detection from signature-based systems.

One of AI’s pioneers, Geoffrey Hinton, made the news recently as he told the New York Times he regrets what he helped build because “It is hard to see how you can prevent the bad actors from using it for bad things.”

Security and AI: Further erosion of social trust

We’ve seen how quickly misinformation can spread thanks to social media. A University of Chicago Pearson Institute/AP-NORC Poll shows 91% of adults across the political spectrum believe misinformation is a problem and nearly half are worried they’ve spread it. Put a machine behind it, and social trust can erode cheaper and faster.

The current AI/ML systems based on large language models (LLMs) are inherently limited in their knowledge, and when they don’t know how to answer, they make things up. This is often referred to as “hallucinating,” an unintended consequence of this emerging technology. When we search for legitimate answers, a lack of accuracy is a huge problem.

This will betray human trust and create dramatic mistakes that have dramatic consequences. A mayor in Australia, for instance, says he may sue OpenAI for defamation after ChatGPT wrongly identified him as being jailed for bribery when he was actually the whistleblower in a case.

New attacks

Over the next decade, we will see a new generation of attacks on AI/ML systems.

Attackers will influence the classifiers that systems use to bias models and control outputs. They’ll create malicious models that will be indistinguishable from the real models, which could cause real harm depending on how they’re used. Prompt injection attacks will become more common, too. Just a day after Microsoft introduced Bing Chat, a Stanford University student convinced the model to reveal its internal directives.

Attackers will kick off an arms race with adversarial ML tools that trick AI systems in various ways, poison the data they use or extract sensitive data from the model.

As more of our software code is generated by AI systems, attackers may be able to take advantage of inherent vulnerabilities that these systems inadvertently introduced to compromise applications at scale.

Externalities of scale

The costs of building and operating large-scale models will create monopolies and barriers to entry that will lead to externalities we may not be able to predict yet.

In the end, this will impact citizens and consumers in a negative way. Misinformation will become rampant, while social engineering attacks at scale will affect consumers who will have no means to protect themselves.

The federal government’s announcement that governance is forthcoming serves as a good start, but there’s so much ground to make up to get in front of this AI race.

AI and security: What comes next

The nonprofit Future of Life Institute published an open letter calling for a pause in AI innovation. It got plenty of press coverage, with the likes of Elon Musk joining the crowd of concerned parties, but hitting the pause button simply isn’t viable. Even Musk knows this — he has seemingly changed course and started his own AI company to compete.

It was always disingenuous to suggest innovation should be stifled. Attackers certainly won’t honor that request. We need more innovation and more action so that we can ensure that AI is used responsibly and ethically.

The silver lining is that this also creates opportunities for innovative approaches to security that use AI. We will see improvements in threat hunting and behavioral analytics, but these innovations will take time and need investment. Any new technology creates a paradigm shift, and things always get worse before they get better. We’ve gotten a taste of the dystopian possibilities when AI is used by the wrong people, but we must act now so that security professionals can develop strategies and react as large-scale issues arise.

At this point, we’re woefully unprepared for AI’s future.

Aakash Shah is CTO and cofounder at oak9.

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read More From DataDecisionMakers


Author: Aakash Shah, oak9
Source: Venturebeat

Related posts
AI & RoboticsNews

DeepSeek’s first reasoning model R1-Lite-Preview turns heads, beating OpenAI o1 performance

AI & RoboticsNews

Snowflake beats Databricks to integrating Claude 3.5 directly

AI & RoboticsNews

OpenScholar: The open-source A.I. that’s outperforming GPT-4o in scientific research

DefenseNews

US Army fires Precision Strike Missile in salvo shot for first time

Sign up for our Newsletter and
stay informed!