Real-world AI threats in cybersecurity aren’t science fiction

For some, fears of AI conjure images of robot overlords and self-aware malware, the stuff of science fiction. Sentient AI taking over the world is not among the many threats we will deal with in the coming years. But AI that empowers cybercriminals is a very serious reality, even as some espouse the benefits of AI in cybersecurity.

Over the past decade, advances in technology have reduced the time criminals need to modify malware samples or identify vulnerabilities. Those tools have become readily available, leading to an increase in the development and distribution of “regular” threats, like adware, trojans, and ransomware.

As a result, we’re going to see more — and more sophisticated — AI-empowered threats. The question is, will security controls that currently protect networks scale to match the flood of attacks?

Microsoft and Google are just two of the companies developing application fuzzing tools (essentially automated vulnerability discovery) that use machine learning to find bugs in software before criminals do. But it's not a reach to assume that an AI-empowered system could identify when and how one of its malware variants is being detected, then push that information to another system that pumps out new versions of the malware, modified to stay undetected.
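To make the fuzzing idea concrete, here is a minimal, hypothetical sketch in Python of a mutation-based fuzzing loop: mutate a seed input at random, feed it to the code under test, and record any input that triggers a crash. The toy_parser target and the mutation strategy are illustrative assumptions; real fuzzers, including the ML-assisted ones mentioned above, are far more sophisticated.

```python
import random

def mutate(data: bytes, max_flips: int = 4) -> bytes:
    """Randomly overwrite a few bytes of the seed input."""
    buf = bytearray(data)
    for _ in range(random.randint(1, max_flips)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 10_000):
    """Run the target against mutated inputs and collect crashing cases."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)                # the code under test
        except Exception as exc:          # any unhandled error is a finding
            crashes.append((sample, repr(exc)))
    return crashes

def toy_parser(data: bytes):
    """A deliberately buggy example target."""
    if b"\x00" in data:
        raise ValueError("parser cannot handle null bytes")

if __name__ == "__main__":
    findings = fuzz(toy_parser, seed=b"hello fuzzing world")
    print(f"{len(findings)} crashing inputs found")
```

Production tools replace the blind random mutation above with coverage feedback or learned models of which inputs are most likely to expose bugs.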

This isn’t science fiction; it’s how malware authors operate today. And while the initial development of the malware executable is usually done manually, you can automate a system that quickly identifies how to modify the malware to best evade detection. The result is a malware family that appears unstoppable. For every malware variant that gets detected, another is quickly deployed to replace it — and modified to evade previous detections.

AI against users, not just systems

The softest targets for AI-empowered attacks are not necessarily vulnerable systems, but rather the human users behind those systems. For example, AI technology that can scrape personally identifiable information (PII) and gather social media details about a potential victim could help criminals craft social engineering lures that are more detailed and more convincing than anything we typically see from human attackers.

Data scrapers are software that navigates to web pages and collects the relevant data hosted there. That data is then stored in a database, where it can be cataloged, organized, and analyzed by humans (or human-instructed software) to meet the data collectors' needs. It's a common tactic used by everyone from intelligence analysts to advertisers. Attackers can use tools to automatically associate pieces of that data (email addresses, phone numbers, names, etc.) into a profile of a potential target. With that profile, they can use AI to craft specialized emails that increase the chance of a user becoming infected or falling victim to an attack.
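As an illustration of what such a scraper looks like in practice, here is a minimal, hypothetical sketch in Python that fetches a single public page, extracts its visible text, and stores it in a local SQLite database for later cataloging. The URL and database name are placeholders, and a real scraper would add crawling, rate limiting, and handling of robots.txt.

```python
import sqlite3
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self._skip = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def scrape(url: str, db_path: str = "scraped.db"):
    """Fetch one page and store its text for later analysis."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    parser = TextExtractor()
    parser.feed(html)
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT, content TEXT)")
    con.execute("INSERT INTO pages VALUES (?, ?)", (url, "\n".join(parser.chunks)))
    con.commit()
    con.close()

if __name__ == "__main__":
    scrape("https://example.com")  # placeholder URL
```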

Malicious email campaigns are dominated by two techniques: phishing and spear phishing. “Phishing” is when an attacker plans an infection campaign using email subject lures that anyone could fall for, like a bank statement or a package delivery notice. “Spear phishing” involves collecting data on a target and crafting a more personalized email, maximizing the likelihood that the target will interact with the message.

Spear phishing has been primarily used against governments and businesses in recent years. Most consumer email attacks didn’t employ spear phishing in the past because acquiring sufficient data on any given target was so time-intensive, and the potential payoff from such attacks on average individuals was not lucrative enough. This will change as AI tools that can scrape data dumps from breaches, non-private social media accounts, and any other publicly available information make spear phishing much easier. That means most of the phishing emails deployed in the coming years will be spear phishing, virtually guaranteeing that this kind of attack will be more effective than ever before.

Still, using AI-empowered data collection systems to craft attacks is not foolproof. There are ways of mitigating spear phishing attacks. For example, smart email filters or savvy employees may recognize and isolate the email before it can infect a network.
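For a sense of how such a filter might work, here is a minimal, hypothetical sketch in Python of a rule-based scorer that flags a message when a few simple signals add up. The signals, weights, and threshold are illustrative assumptions, not a description of any real product.

```python
import re

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Score a message on a few simple phishing signals."""
    score = 0
    sender_domain = sender.rsplit("@", 1)[-1].lower()

    # Signal 1: urgency language in the subject line.
    if any(word in subject.lower() for word in URGENCY_WORDS):
        score += 2

    # Signal 2: links pointing somewhere other than the sender's domain.
    for host in re.findall(r"https?://([^/\s]+)", body):
        if sender_domain not in host.lower():
            score += 3

    # Signal 3: credential-harvesting phrasing in the body.
    if re.search(r"(confirm|update).{0,20}(account|password)", body, re.I):
        score += 2

    return score

if __name__ == "__main__":
    score = phishing_score(
        sender="support@bank.example",
        subject="URGENT: verify your account",
        body="Please confirm your password at https://bank-login.example.net/reset",
    )
    print("flag for review" if score >= 4 else "deliver")
```

Real filters typically layer statistical models and sender-reputation data on top of rules like these, which is exactly why better-personalized spear phishing is such a concern.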

However, these improved collection tools could also uncover personal information about a target, like an account on an extramarital dating service or old social media posts that make the subject look bad. A human attacker could use this information to blackmail the target into handing over access or credentials, or even into manually installing backdoor malware.

Automated harassment

Data theft, blackmail, and a flood of undetected malware aside, trolls and stalkers will also benefit from this technology.

Cybercriminals (or even just angry, self-righteous users, like those who think doxing or disrupting services will make the world a better place) can use AI tech to launch harassment campaigns that result in disruption or denial of services, ruined reputations, or just the kind of old-fashioned harassment people encounter on the internet every day. The victims of this form of attack could be businesses, private individuals, or public figures, and attacks might take the form of revenge porn, bullying, or fake social media accounts used to spread lies.

Tactics could also include phone calls using voice over IP (VoIP) services and extend to friends, loved ones, and employers. This kind of harassment isn't a new approach; what's new is automating what victims already experience. Trolls and stalkers often spend a lot of time gathering information to use against their targets and conducting harassment efforts. If that entire process could be automated, it would create a hell-on-earth scenario for their victims.

“Hacktivists” and others could also wage this kind of attack against business rivals, governments, and political opponents. Combine that with how easy it is to hide your identity online, and we could see a huge increase in targeted harassment campaigns that are unrelenting and likely untraceable.

Easy access to AI platforms

Malicious developers are experimenting with AI technology to find new attack methods and supercharge existing ones. At the same time, universities, independent developers, and organizations around the globe are making AI technology more accessible to anyone who needs it. So once AI tech is used to empower an attack campaign, similar follow-on attacks are all but guaranteed.

Look no further than Hidden Tear, an open source ransomware project created for “educational” purposes by Turkish researcher Utku Sen. Novice ransomware developers used the code Sen released as the framework for numerous new ransomware families, like FSociety ransomware, for years afterward.

The CryptoLocker ransomware family, which appeared in October 2013, was the first of its kind to use professional-grade encryption and marked a turning point in ransomware development. Before that, many ransomware families were poorly programmed, making it possible to create tools to decrypt the files of infected victims.

Unfortunately, CryptoLocker kicked off a trend that other ransomware developers have copied. Today, files encrypted by most modern ransomware can't be recovered because the malware uses asymmetric encryption, which requires different keys to encrypt and decrypt data. Security researchers can create decryptors for modern ransomware families only if the encryption is implemented so poorly that it doesn't work as it should, or if they are able to obtain the keys from a breached command-and-control server.
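To make the asymmetric-encryption point concrete, here is a minimal, hypothetical demonstration in Python using the third-party cryptography package (an assumption; it is not part of the standard library): data encrypted with a public key can only be recovered with the matching private key, which in a well-implemented ransomware scheme only the attacker holds.

```python
# Requires: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a keypair: the public key encrypts, only the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"a small piece of data", oaep)

# Anyone holding only the public key and the ciphertext is stuck; without the
# private key there is no practical way to reverse the encryption, which is
# why decryptors exist only when the implementation is flawed or keys leak.
print(private_key.decrypt(ciphertext, oaep) == b"a small piece of data")
```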

All it takes is one criminal who understands a new technology well enough to shape it into an attack tool and then shares it. From there, copycat malware authors will be able to build off that initial model and evolve it to become more specialized and capable.

In the age of AI, we are making the same mistakes we’ve made dozens of times before — developing and releasing technologies that can easily fall out of our control without first securing our existing infrastructure. Unfortunately, the consequences of such errors are only going to escalate.


Author: Adam Kujawa, Malwarebytes.
Source: Venturebeat
