
The persistent humanity in AI and cybersecurity

This article is part of a VB special issue. Read the full series here: AI and Security.


Even as AI technology transforms some aspects of cybersecurity, the intersection of the two remains profoundly human. Although that’s perhaps counterintuitive, it is front and center in all parts of the cybersecurity triad: the bad actors who seek to do harm, the gullible soft targets, and the good actors who fight back.

Even without the looming specter of AI, the cybersecurity battlefield is often opaque to average users and the technologically savvy alike. Adding a layer of AI, which comprises numerous technologies that can feel unexplainable to most people, may seem doubly intractable — as well as impersonal. That’s because although the cybersecurity fight is sometimes deeply personal, it’s rarely waged in person.

But it is waged by people. It’s attackers at their computers in one place launching attacks on people in another place, with those attacks ideally being thwarted by defenders at their computers in yet another place. Understanding that dynamic frames how we can understand the roles of people in cybersecurity and why even the advent of AI doesn’t fundamentally change it.

Irreplaceable humans

In a way, AI’s impact on the field of cybersecurity is no different from its impact on other disciplines, in that people often grossly overestimate what AI can do. They don’t understand that AI often works best when it has a narrow application, like anomaly detection, versus a broader one, like engineering a solution to a threat.
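To make that narrowness concrete, here is a minimal, purely illustrative sketch of a narrow anomaly-detection task, using scikit-learn's IsolationForest on made-up login data. The features and numbers are invented for this example and are not drawn from any product mentioned in this article.

```python
# Illustrative sketch only: a narrow anomaly-detection task.
# The feature set and all numbers are invented for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend each row is a login event: [hour_of_day, megabytes_transferred]
normal_logins = np.column_stack([
    rng.normal(13, 2, size=500),   # mostly business hours
    rng.normal(20, 5, size=500),   # modest data transfer
])
suspicious = np.array([[3.0, 400.0]])  # a 3 a.m. login moving 400 MB

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

print(model.predict(suspicious))         # -1 means "anomaly"
print(model.predict(normal_logins[:3]))  # mostly 1, i.e., "normal"
```

A model like this can flag the odd 3 a.m. login, but it cannot decide what the flag means or what to do about it; that part still falls to a person.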

Unlike humans, AI lacks ingenuity. It is not creative. It is not clever. It often fails to take into account context and memory, leaving it unable to interpret like a human brain does.

In an interview with VentureBeat, LogicHub CEO and cofounder Kumar Saurabh illustrated the need for human analysts with a sort of John Henry test for automated threat detection. “A couple of years ago, we did an experiment,” he said. This involved pulling together a certain amount of data — a trivial amount for an AI model to sift through, but a reasonably large amount for a human analyst — to see how teams using automated systems would fare against humans in threat detection.

“I’ve given the data to about 40 teams so far. Not a single team has been able to pick that [threat] up in an automated way,” he said. “In some ways, we know the answer that it doesn’t take much to bypass machine-driven threat detection. How about we give it to really sophisticated analysts?” he asked. According to Saurabh, within one to three hours, 25% of the human security professionals had cracked it. What’s more, they were able to explain to him how they had figured it out.

The twist: The experiment involved a relatively tiny amount of data, and it still took hours for skilled analysts to find the threat. “At that speed, you’d need 5,000 security analysts [to get through a real-world amount of data],” Saurabh said. Real-world environments can generate literally billions of data points daily.

“Clearly, that doesn’t work either,” he said. “And this is where the intersection of AI threat detection comes in. We need to take the machine[s] and make them as intelligent as those security analysts who have 10 years, 15 years of experience in threat detection.” He argued that although there’s been progress toward that goal, it’s a problem that hasn’t been solved very well — and likely won’t be for decades.

That’s because what AI can do in cybersecurity right now is narrow. Measured against artificial general intelligence (AGI), the holy grail of thinking machines that does not yet exist, our current AI tools are laughably far from approaching what a skilled security professional can do. “All people have general purpose intelligence,” said Saurabh. “[But] even if you teach an AI to drive, it can’t make coffee.”

Dr. Ricardo Morla, professor at the University of Porto, told VentureBeat that one way to understand the collaboration between humans and machines is in terms of cognitive resources. “As cars get smarter, the human ends up releasing cognitive resources required … to switch on the lights when it’s dark, [control] the clutch on an uphill start, or … actually [drive] the car, and using these resources for other tasks,” he said.

But, he added, “We are not at the point where the human in a security operations center or the human behind a massive botnet can just go home and leave it to the machine to get the job done.” He pointed to tasks like intrusion detection and automated vulnerability scanning that require security pros to supervise “if not during the actual learning and inference, definitely while reviewing results, choosing relevant learning data scenarios and models, and assessing robustness of the model against attacks through adversarial learning.” He also suggested that humans are needed “to oversee performance and effectiveness and to design attack goals and defense priorities.”

There are some security-related tasks for which AI is better suited. Caleb Fenton is head of innovation for SentinelOne, a company that specializes in using AI and machine learning for endpoint detection. He believes that AI has helped software makers develop their tools faster. “Programmers don’t have to write really complicated functions anymore that might take … many months of iteration and trying,” he said. “Now, the algorithm writes the function for them. And all you need is data and labels.”

He said that using AI to detect threats has been a “net win” for both static (i.e., looking at files) and behavioral (i.e., how programs behave) approaches to threat detection. But he allows that a tool’s a tool, and that “it’s only as good as the person using it.”

Steve Kommrusch is a Ph.D. candidate at Colorado State University who is presently focused on machine learning but has already spent 28 years as a computer engineer at companies such as HP and AMD. He echoed Fenton’s assertions. “The AI can help identify risky software coding styles, and this can allow larger amounts of safe code to be written quickly. Certain tasks — perhaps initial bug triage or simple bug fixing — might be done by AI instead of humans,” he said. “But deciding which problems need solving, architecting data structure access, [and] developing well-parallelizable algorithms will still need humans for quite a while.”

For the foreseeable future, then, the question is not whether machines will replace humans in cybersecurity, but how effectively they can augment what human security professionals can do.

This idea of augmentation versus replacement spans many of the industries that AI touches. But it’s notable that it appears to hold true in the complex field of cybersecurity.

Saurabh sees it as simply a specialization of labor — people spending more time doing things only people can do.

“For a different class of problems, you have to pick the right tools,” he said. “If you have a nail, you use a hammer. If you have a screw, you’re going to use a screwdriver. AI is not this homogeneous thing. One technique is patently AI, and another technique is patently not AI, right? There are a lot of different kinds of techniques, and many times it depends on what problem you’re trying to solve.”

Humans are still the weakest link

Ironically, even as human defenders remain crucial to the cybersecurity battle, they make persistently soft targets. It doesn’t matter how hidden a door is or how thick it is or how many locks it has; the easiest way to break in is to get someone with the keys to unlock it for you.

And the holders of keys are people — who can be tricked and manipulated, are sometimes ignorant, often make mistakes, and suffer lapses in judgment. If we open a malicious file by accident or foolishly hand over our sensitive login or financial information to a criminal, the cybersecurity defender’s task becomes difficult or nearly impossible.

People will continue to be primary targets, not just because we are often easy marks, but because our metaphorical (and sometimes literal) keys unlock so much. “The human still has control over most goodies — bank accounts, valuable information, and resource-rich systems,” Morla said.

It’s not all bad news, though. Fenton agreed that people are the weakest link and always have been, but he also believes that the cybersecurity industry is getting better at protecting us from ourselves. “I think we’re mitigating that more and more,” he said. “Even if the user does something wrong and they run malware, if it behaves badly, we kill it.”

“We may see a rise of malicious AI-to-human interactions with human targets, with text-to-speech and intelligent call center-like AI tools getting appropriated by attackers,” said Morla.

Notably, Kommrusch brought up a similar scenario. “Sadly, I do think that AI chatbots and robocalls have improved and will continue to improve. One could imagine [attacks involving] scammers cold-calling lots of folks with a nefarious AI chatbot that would hand off to a human attacker after the first 20-30 seconds of ‘hook,’” he said.

Both researchers point out that such attacks would need to be extremely convincing to work. “The AI would have to be good enough not only to avoid being detected as an AI in an intrinsic Turing test that the human target would apply — but also to mislead the human target into trusting the AI to the point of having the human provide the goodies (access credentials, etc.) to the AI,” said Morla.

Kommrusch, similarly, said that those sorts of systems could feel less “human” to a cautious target, but he warned that automation could significantly increase the number of attacks. Thus, even if the per-attempt success rate were low, the attacks could still be worth the minimal effort attackers put into them.

Morla suggests that one way to reduce the effectiveness of these kinds of attacks is simply to educate people. When people know what a suspicious email looks like, they’re far less likely to open a poisoned attachment or click a bad link.

In addition to education, people can use tools to help stay safe. “What would be beneficial to users would be an automated security quality grader based on AI that could allow users to assess security risk when adding an application to their phone or laptop,” said Kommrusch.

And some advice from the pre-AI cybersecurity days still applies, such as using two-step verification for sensitive data like bank accounts and employing off-the-shelf security products. “For example, there will be applications (like McAfee) which add AI to their protections; the end user can download the app to get a quality AI defense,” Kommrusch said.

AI versus AI

None of the above is to say that the targets are always human. “There will be cases where access control mechanisms are implemented using AI and where the AI may become a target,” Morla said. He listed examples, such as efficiently finding malicious samples that look benign to a human but force the AI to misclassify them; poisoning the data set used for learning and thus preventing the AI from learning adequately; and reverse engineering the AI to extract its models, or watermarking the AI for copyright.

“So while the human may still be the weakest link, bringing AI into cybersecurity adds another weak link to the chain.”
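Of the attack classes Morla lists, data-set poisoning is perhaps the easiest to picture. The sketch below is a toy, purely illustrative experiment: an attacker relabels part of the training data so that a simple classifier learns to wave malicious samples through. All of the data is synthetic.

```python
# Toy, purely illustrative data-poisoning sketch. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Two synthetic clusters standing in for benign (0) and malicious (1) samples
X = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(2, 1, (500, 5))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clean_model = LogisticRegression().fit(X_train, y_train)

# The "poisoning": relabel most malicious training samples as benign
poisoned_y = y_train.copy()
malicious_idx = np.where(poisoned_y == 1)[0]
flipped = rng.choice(malicious_idx, size=int(0.6 * len(malicious_idx)), replace=False)
poisoned_y[flipped] = 0
poisoned_model = LogisticRegression().fit(X_train, poisoned_y)

# Compare how many truly malicious test samples each model still catches
mal = y_test == 1
print("clean model catches:   ", clean_model.score(X_test[mal], y_test[mal]))
print("poisoned model catches:", poisoned_model.score(X_test[mal], y_test[mal]))
```

In practice an attacker rarely gets this much control over a vendor's training data, but the sketch shows why the integrity of that data matters.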

Fenton mentioned some of these AI-fooling adversarial techniques, too, like changing a few pixels in an image to trip up a machine learning model. To a human analyst, the altered picture would clearly be of, say, a panda, but the model may think it’s a school bus. He said some people have adapted that technique to binary files, altering them slightly to make malicious files look benign. The trick may be effective, but he said it doesn’t actually pose a threat yet, because none of the altered files are executable; such an attack is only theoretical at this point.
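The pixel-changing trick Fenton describes is what researchers call an adversarial example. A minimal, purely illustrative version of the idea is the fast gradient sign method applied to a toy logistic-regression "detector"; the weights and input below are made up, and no real files or images are involved.

```python
# Toy, purely illustrative adversarial-example (FGSM) sketch.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny pretend classifier: score above 0.5 means "malicious"
w = np.array([1.2, -0.8, 0.5, 2.0])   # made-up weights
b = -0.3
x = np.array([0.9, 0.1, 0.4, 0.7])    # a sample the model currently flags

p = sigmoid(w @ x + b)

# Gradient of the cross-entropy loss (true label: malicious) with respect to x
grad_x = (p - 1.0) * w

# Fast gradient sign method: nudge every feature slightly in the direction
# that increases the loss, i.e., pushes the score away from "malicious"
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("original score: ", round(float(p), 3))
print("perturbed score:", round(float(sigmoid(w @ x_adv + b)), 3))
```

Scaled up to thousands of pixels or file features, many small nudges like this can flip a model's decision outright.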

And such attacks could remain theoretical for a while, because there may not be sufficient impetus for attackers to innovate. “This will sound weird, but I’m hoping that we’ll start seeing some new attacks soon, because that [will mean] we’re putting a lot of pressure on malware authors,” Fenton said. The lack of innovation from the bad actors’ side, in other words, may indicate that what they’ve been doing all along is still sufficiently lucrative. “It would be kind of a shame if I have this AI approach, and we’re getting better at detecting malware, but you don’t see any new attacks. It means we’re not really affecting the bottom line,” he added.

Still, it’s reassuring if security companies are prepared for the next wave of innovative attacks, whenever they may come.

Simple motivations

This speaks to an often overlooked aspect of cybersecurity, which is that attackers are primarily motivated by the same thing all thieves are: money.

“Attackers will usually come up with the cheapest, dumbest, most boring solution to a problem that works. Because they’re thinking cost/benefit analysis. They’re not trying to be clever,” Fenton said. That’s key to understanding the cybersecurity world, because it helps show how narrow the scope of it is. Fenton calls it a goal-oriented market, both for attackers and defenders. And for attackers, the goal is largely financial.

“I’ve been consistently disappointed with how attackers actually behave,” he said. “I dream up all these elaborate scenarios that we could look for and find really cool malware. And almost all the time, [the attackers] just pivot slightly, they change one little thing, and they keep going, and it’s successful for them. That’s all they really care about.”

That means they’re likely to use AI to ramp up their attacks only when and if the cost/benefit ratio works for them, perhaps by using off-the-shelf attacks. Those cheap and easy tools are likely on their way and will proliferate. “AI techniques for attackers will get shared, allowing novice attackers to use sophisticated AI algorithms for their attacks,” Kommrusch warned. But even so, most of the use cases will likely be somewhat unimpressive tasks like crafting more convincing phishing emails.

People versus people

People are always at both ends of the attacker-victim dyad. There is no software that becomes sentient, turns itself into malware, and then chooses to make an attack; it’s always a person who sets out to accomplish some task at the expense of others. And although it’s true that a cyberattack is about compromising or capturing systems rather than people per se, the reason any target is lucrative is that there are humans at the end of it who will cough up ransom money or inadvertently open a breach into a system that has value for the attacker.

In the end, even as AI may enhance some aspects of cyberattacks and some aspects of cyberdefense, it’s all still so profoundly human. It’s just that the tools and attack vectors change. But there is still a person who attacks, a person who is a target, and a person who defends. Same as it ever was.

Read More: VentureBeat's Special Issue on AI and Security


Author: Seth Colaner.
Source: VentureBeat
