
Is AI cybersecurity’s salvation or its greatest threat?

This article is part of a VB special issue. Read the full series here: AI and Security.


If you’re uncertain whether AI is the best or worst thing to ever happen to cybersecurity, you’re in the same boat as experts watching the dawn of this new era with a mix of excitement and terror.

AI’s potential to automate security on a broader scale offers a welcome advantage in the short term. Yet unleashing a technology designed to eventually take humans out of the equation as much as possible naturally gives the industry some pause. There is an undercurrent of fear about the consequences if things run amok or attackers learn to make better use of the technology.

“Everything you invent to defend yourself can also eventually be used against you,” said Geert van der Linden, an executive vice president of cybersecurity for Capgemini. “This time does feel different, because more and more, we are losing control as human beings.”

In VentureBeat’s second quarterly special issue, we explore this algorithmic angst across multiple stories, looking at how important humans remain in the age of AI-powered security, how deepfakes and deep media are creating a new security battleground, how the cybersecurity skills gap compounds the problem, how surveillance powered by AI cameras is on the rise, how AI-powered ransomware is rearing its head, and more.

Each evolution of computing in recent decades has brought new security threats and new tools to fight them. From networked PCs to cloud computing to mobile, the trend is always toward more data stored in ways that introduce unfamiliar vulnerabilities, a broader attack surface, and richer targets that attract increasingly well-funded bad actors.

The AI security era is coming into focus quickly, and the design of these security tools, the rules that govern them, and the way they’re deployed carry increasingly high stakes. The race is on to determine whether AI will help keep people and businesses secure in an increasingly connected world or push us into the digital abyss.

Financial incentives

In a hair-raising prediction last year, Juniper Research forecast that the annual cost of data breaches will increase from $3 trillion in 2019 to $5 trillion in 2024. This will be due to a mix of fines for regulatory violations, lost business, and recovery costs. But it will also be driven by a new variable: AI.

“Cybercrime is increasingly sophisticated; the report anticipates that cybercriminals will use AI, which will learn the behavior of security systems in a similar way to how cybersecurity firms currently employ the technology to detect abnormal behavior,” reads Juniper’s report. “The research also highlights that the evolution of deepfakes and other AI-based techniques is also likely to play a part in social media cybercrime in the future.”

Given that every business is now a digital business to some extent, spending on infrastructure defense is exploding. Research firm Cybersecurity Ventures notes that the global cybersecurity market was worth $3.5 billion in 2004 but grew to $120 billion in 2017. It projects that spending will average $200 billion annually over the next five years. Tech giant Microsoft alone spends $1 billion each year on cybersecurity.

With projections of a 1.8 million-person shortfall in the cybersecurity workforce by 2022, part of this spending reflects the growing cost of recruiting talent. AI boosters believe the technology will reduce costs by requiring fewer humans while still keeping systems safe.

“When we’re running security operation centers, we’re pushing as hard as we can to use AI and automation,” said Dave Burg, EY Americas’ cybersecurity leader. “The goal is to take a practice that would normally maybe take an hour and cut it down to two minutes, just by having the machine do a lot of the work and decision-making.”

AI to the rescue

In the short term, companies are bubbling with optimism that AI can help them turn the tide against the mounting cybersecurity threat.

In a report on AI and cybersecurity last summer, Capgemini reported that 69% of enterprise executives surveyed felt AI would be essential for responding to cyberthreats. Telecom led all other industries, with 80% of executives counting on AI to shore up defenses. Utilities executives were at the low end, with only 59% sharing that opinion.

This overall bullishness has triggered a wave of investment in AI cybersecurity, both to bulk up defenses and to pursue a potentially lucrative new market.

Early last year, Comcast made a surprise move when it announced the acquisition of BluVector, a spinoff of defense contractor Northrop Grumman that uses artificial intelligence and machine learning to detect and analyze increasingly sophisticated cyberattacks. The telecommunications giant said it wanted to use the technology internally, but also continue developing it as a service it could sell to others.

Subsequently, Comcast launched Xfinity xFi Advanced Security, which automatically provides security for all the devices in a customer’s home that are connected to its network. It created the service in partnership with Cujo AI, a startup based in El Segundo, California, that developed a platform to spot unusual patterns on home networks and send Comcast customers instant alerts.

Cujo AI founder Einaras von Gravrock said the rapid adoption of connected devices in the home and the broader internet of things (IoT) has created too many vulnerabilities to be tracked manually or blocked effectively by conventional firewall software. His startup turned to AI and machine learning as the only option to fight such a battle at scale.

Von Gravrock argued that spending on such technology is less a cost than a necessity. If a company like Comcast wants customers to adopt a growing range of services, including those arriving with the advent of 5G networks, it must be able to convince them those services are safe.

“When we see the immediate future, all operators will have to protect your personal network in some way, shape, or form,” von Gravrock said.

Capgemini’s aforementioned report found that overall, 51% of enterprises said they were heavily using some kind of AI for detection, 34% for prediction, and 18% to manage responses. Detection may sound like a modest start, but it’s already paying big dividends, particularly in areas like fraud detection.

Paris-based Shift has developed algorithms that focus narrowly on weeding out insurance fraud. Shift’s service can spot suspicious patterns in the data insurance companies process, such as contracts, reports, photos, and even videos. With more than 70 clients, Shift has amassed a huge amount of data that has allowed it to rapidly fine-tune its AI. The intended result is more efficiency for insurance companies and a better experience for customers, whose claims are processed faster.

The startup has grown quickly after raising $10 million in 2016, $28 million in 2017, and $60 million last year. Cofounder and CEO Jeremy Jawish said the key was adopting a narrow focus in terms of what it wanted to do with AI.

“We are very focused on one problem,” Jawish said. “We are just dealing with insurance. We don’t do general AI. That allows us to build up the data we need to become more intelligent.”

The dark side

While this all sounds potentially utopian, a dystopian twist is gathering momentum. Security experts predict that 2020 could be the year hackers really begin to unleash attacks that leverage AI and machine learning.

“The bad [actors] are really, really smart,” said Burg of EY Americas. “And there are a lot of powerful AI algorithms that happen to be open source. And they can be used for good, and they can also be used for bad. And this is one of the reasons why I think this space is going to get increasingly dangerous. Incredibly powerful tools are being used to basically do the inverse of what the defenders [are] trying to do on the offensive side.”

In an experiment back in 2016, cybersecurity company ZeroFox created an AI algorithm called SNAP_R that was capable of posting 6.75 spear-phishing tweets per minute and reaching 800 people. Of those, 275 recipients clicked on the malicious link in the tweet. These results far outstripped the performance of a human, who could generate only 1.075 tweets per minute, reaching just 125 people and convincing only 49 individuals to click.

Likewise, digital marketing firm Fractl demonstrated how AI could unleash a tidal wave of fake news and disinformation. Using publicly available AI tools, it created a website that included 30 highly polished blog posts, as well as an AI-generated headshot for the posts’ nonexistent author.

And then there is the rampant use of deepfakes, which employ AI to match images and sound to create videos that in some cases are almost impossible to identify as fake. Adam Kujawa, the director of Malwarebytes Labs, said he’s been shocked at how quickly deepfakes have evolved. “I didn’t expect it to be so easy,” he said. “Some of it is very alarming.”

In a 2019 report, Malwarebytes listed a number of ways it expects bad actors to start using AI this year. That includes incorporating AI into malware itself. In this scenario, the malware uses AI to adapt in real time when it senses detection programs at work. Such AI malware will likely be able to target users more precisely, fool automated detection systems, and threaten even larger stashes of personal and financial information.

“I should be more excited about AI and security, but then I look at this space and look at how malware is being built,” Kujawa said. “The cat is out of the bag. Pandora’s box has been opened. I think this technology is going to become the norm for attacks. It’s so easy to get your hands on and so easy to play with this.”

Researchers in computer vision are already struggling to thwart attacks designed to disrupt the quality of their machine learning systems. It turns out that these learning systems remain remarkably easy to fool using “adversarial attacks”: outside parties can probe how a machine learning system works and then feed it subtly altered inputs that confuse the system and cause it to misidentify images.
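
To make the idea concrete, here is a minimal sketch of the best-known adversarial technique, the fast gradient sign method, written against a hypothetical PyTorch image classifier. The model, input tensors, and epsilon value are placeholders for illustration, not code from any company or researcher quoted in this article.

import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Fast gradient sign method: `model` is any differentiable classifier,
    # `image` a tensor of shape (1, C, H, W) with values in [0, 1], and
    # `label` the true class index as a tensor of shape (1,).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    model.zero_grad()
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the loss;
    # the change is imperceptible to a human but can flip the prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

Retraining on examples like these (so-called adversarial training) raises the bar for attackers, but it is not a general fix.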

Worse still, leading researchers acknowledge that there is no reliable way to stop mischief makers from wreaking havoc on these systems.

“Can we defend against these attacks?” asked Nicolas Papernot, an AI researcher at Google Brain, during a presentation in Paris last year. “Unfortunately, the answer is no.”

Offense playing defense

In response to possible misuse of AI, the cybersecurity industry is doing what it has always done during such technology transitions: trying to stay one step ahead of malicious players.

Back in 2018, BlackBerry acquired cybersecurity startup Cylance for $1.4 billion. Cylance had developed an endpoint protection platform that used AI to look for weaknesses in networks and shut them down if necessary. Last summer, BlackBerry created a new business unit led by its CTO that focuses on cybersecurity research and development (R&D). The resulting BlackBerry Labs has a dedicated team of 120 researchers. Cylance was a cornerstone of the lab, and the company said machine learning would be among the primary areas of focus.

Following that announcement, in August the company introduced BlackBerry Intelligent Security, a cloud-based service that uses AI to automatically adapt security protocols for employees’ smartphones or laptops based on location and patterns of usage. The system can also be used for IoT devices or, eventually, autonomous vehicles. By instantly assessing a wide range of factors to adjust the level of security, the system is designed to keep a device just safe enough without always imposing the maximum security settings an employee might be tempted to circumvent.

“Otherwise, you’re left with this situation where you have to impose the most onerous security measures, or you have to sacrifice security,” said Frank Cotter, senior vice president of product management at BlackBerry. “That was the intent behind Cylance and BlackBerry Labs, to get ahead of the malicious actors.”
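
As a rough illustration of what this kind of risk-adaptive policy can look like, the toy sketch below maps a few contextual signals to a required security posture. The signals, weights, and posture names are invented for this article and are not BlackBerry’s implementation.

from dataclasses import dataclass

@dataclass
class DeviceContext:
    known_location: bool  # device is at the office or on the employee's usual home network
    typical_hours: bool   # activity falls within the user's normal working pattern
    new_device: bool      # hardware not previously seen for this account

def required_posture(ctx: DeviceContext) -> str:
    # Hand-weighted risk score; a production system would learn these weights.
    risk = (
        (0 if ctx.known_location else 2)
        + (0 if ctx.typical_hours else 1)
        + (3 if ctx.new_device else 0)
    )
    if risk == 0:
        return "standard"        # cached credentials are enough
    if risk <= 2:
        return "reauthenticate"  # prompt for a password or push approval
    return "step-up"             # require MFA and block the most sensitive apps

For example, required_posture(DeviceContext(known_location=False, typical_hours=True, new_device=True)) comes back as "step-up" and would trigger an MFA prompt, while a familiar laptop at the office sails through with no extra friction.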

San Diego-based MixMode is also looking down the road, trying to build AI-based security tools that learn from the limitations of existing services. According to MixMode CTO Igor Mezic, existing systems may have some AI or machine learning capability, but they still depend on rules that limit what they can detect and how they can learn, and they still require human intervention.

“We’ve all seen phishing emails, and they’re getting way more sophisticated,” Mezic said. “So even as a human, when I look at these emails and try to figure out whether this is real or not, it’s very difficult. So, it would be difficult for any rule-based system to discover, right? These AI methodologies on the attack side have already developed to the place where you need human intelligence to figure out whether it’s real. And that’s the scary part.”

AI systems that still include some rules also tend to throw off a lot of false positives, leaving security teams overwhelmed and eliminating any initial advantages that came with automation, Mezic said. MixMode, which has raised about $13 million in venture capital, is developing what it describes as “third-wave AI.”

In this case, the goal is to make AI security more adaptive on its own rather than relying on rules that need to be constantly revised to tell it what to look for. MixMode’s platform monitors all nodes on a network to continually evaluate typical behavior. When it spots a slight deviation, it analyzes the potential security risk and rates it from high to low before deciding whether to send up an alert. The MixMode system is always updating its baseline of behavior so no humans have to fine-tune the rules.
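
In spirit, that kind of rules-free baselining resembles the toy sketch below, which keeps a rolling statistical baseline for a single per-node metric and rates how far each new observation deviates from it. The window size, warm-up period, and thresholds are invented for illustration, and real platforms model far richer behavior than one number.

import math
from collections import deque

class BaselineMonitor:
    # Toy anomaly scorer: track a rolling baseline of one per-node metric
    # (say, outbound bytes per minute) and rate deviations from it.

    def __init__(self, window: int = 1440):
        self.history = deque(maxlen=window)  # roughly a day of minute samples

    def observe(self, value: float) -> str:
        severity = "low"
        if len(self.history) > 30:  # wait for some history before scoring
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = math.sqrt(var) or 1.0  # guard against a zero-variance baseline
            z = abs(value - mean) / std
            severity = "high" if z > 6 else "medium" if z > 3 else "low"
        self.history.append(value)  # the baseline keeps updating itself
        return severity

Because the baseline is re-estimated continuously, there is no rule file for humans to maintain, which is the property Mezic is describing, though it also means a patient attacker can try to drift the baseline slowly in their favor.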

“Your own AI system needs to be very cognizant that an external AI system might be trying to spoof it or even learn how it operates,” Mezic said. “How can you write a rule for that? That’s the key technical issue. The AI system must learn to recognize whether there are any changes on the system that look like they’re being made by another AI system. Our system is designed to account for that. I think we are a step ahead. So let’s try to make sure that we keep being a step ahead.”

Yet this type of “unsupervised AI” starts to cross a frontier that makes some observers nervous. It will eventually be used not just in business and consumer networks, but also in vehicles, factories, and cities. As it takes on predictive duties and makes decisions about how to respond, such AI will balance factors like loss of life against financial costs.

Humans will have to carefully weigh whether they are ready to cede such power to algorithms, even though those algorithms promise massive efficiencies and increased defensive power. On the other hand, if malicious actors are mastering these tools, will the rest of society even have a choice?

“I think we have to make sure that as we use the technology to do a variety of different things … we also are mindful that we need to govern the use of the technology and realize that there will likely be unforeseen consequences,” said Burg of EY Americas. “You really need to think through the impact and the consequences, and not just be a naive believer that the technology alone is the answer.”

Read More: VentureBeat's Special Issue on AI and Security


Author: Chris O’Brien.
Source: VentureBeat
