
McAfee CTO: How AI is changing both cybersecurity and cyberattacks

Artificial intelligence is sweeping through almost every industry, layering new intelligence onto the software used for tasks like delivering better cybersecurity. McAfee, one of the big players in the industry, is adding AI capabilities to its own suite of tools that protect users from increasingly automated attacks.

A whole wave of startups — like Israel’s Deep Instinct — have received funding in the past few years to incorporate the latest AI into security solutions for enterprises and consumers. But there isn’t yet a holy grail for defenders working to use AI to stop cyberattacks, according to McAfee chief technology officer Steve Grobman.

Grobman has spoken at length about the pros and cons of AI in cybersecurity, where a human element is still necessary to uncover the latest attacks.

One of the challenges of using AI to improve cybersecurity is that it’s a two-way street, a game of cat and mouse. If security researchers use AI to catch hackers or prevent cyberattacks, the attackers can also use AI to hide or come up with more effective automated attacks.

Grobman is particularly concerned about the ability to use improved computing power and AI to create better deepfakes, which make real people appear to say and do things they haven’t. I interviewed Grobman about his views for our AI in security special issue.

Here’s an edited transcript of our interview.

Above: Steve Grobman, CTO of McAfee, believes in cybersecurity based on human-AI teams.

Image Credit: McAfee

VentureBeat: I did a call with Nvidia about their tracking of AI. They said they’re aware of between 12,000 and 15,000 AI startups right now. Unfortunately, they didn’t have a list of security-focused AI startups. But it seems like a crowded field. I wanted to probe a bit more into that from your point of view. What’s important, and how do we separate some of the reality from the hype that has created and funded so many AI security startups?

Steve Grobman: The barrier to entry for using sophisticated AI has come way down, meaning that almost every cybersecurity company working with data is going to consider and likely use AI in one form or another. With that said, I think that hype and buzz around AI makes it so that it’s one of the areas that companies will generally call out, especially if they’re a startup or new company, where they don’t have other elements to base their technology or reputation [on] yet. It’s a very easy thing to do in 2019 and 2020, to say, “We’re using sophisticated AI capabilities for cybersecurity defense.”

If you look at McAfee as an example, we’re using AI across our product line. We’re using it for classification on the back end. We’re using it for detection of unknown malicious activity and unknown malicious software on endpoints. We’re using a combination of what we call human-machine teaming, security operators working with AI to do investigations and understand threats. We have to be ready for AI to be used by everyone, including the adversaries.

VentureBeat: We’ve always talked about that cat-and-mouse game that happens when either side, the cyberattackers or the defenders, turns up the pressure. You have that technology race: If you use AI, they’ll use AI. As a reality check on that front, have you seen that happen, where attackers are using AI?

Grobman: We can speculate that they are. It’s a bit difficult to know definitively whether certain types of attacks have been guided with AI. We see the results of what comes out of an event, as opposed to seeing the way it was put together. For example, one of the ways an adversary can use AI is to optimize which victims they focus on. If you think about AI as being good for classification problems, having a bad actor identify the most vulnerable victims, or the victims that will yield the highest return on investment — that’s a problem that AI is well-suited for.

Part of the challenge is we don’t necessarily see how they select the victims. We just see those victims being targeted. We can surmise that because they chose wisely, they likely did some of that analysis with AI. But it’s difficult to assert that definitively.

The other area that AI is emerging [in] is … the creation of content. One thing we’ve worried about in security is AI being used to automate customized phishing emails, so you basically have spear phishing at scale. You have a customized note with a much higher probability that a victim will fall for it, and that’s crafted using AI. Again, it’s difficult to look at the phishing emails and know if they were generated definitively by a human, or with help from AI-based algorithms. We clearly see lots going on in the research space here. There’s lots of work going on in autogenerating text and audio. Clearly, deepfakes are something we see a lot of interest in from an information warfare perspective.

Above: “I didn’t say that.” Grobman did a demo of deepfakes at RSA in 2019.

Image Credit: RSA

VentureBeat: That’s related to things like facial recognition security, right?

Grobman: There are elements related to facial recognition. For example, we’ve done some research where we look at — could you generate an image that looks like somebody that’s very different [from] what a facial recognition system was trained on, and so fool the system into thinking that it’s that actual person that the system is looking for? But I also think there’s the information warfare side of it, which is more about convincing people that something happened — somebody said or did something that didn’t actually happen. Especially as we move closer to the 2020 election cycle, we need to recognize that deepfakes used for information warfare are one of the things we should be concerned about.

VentureBeat: Is that something for Facebook or Twitter to work on, or is there a reason for McAfee to pay attention to that kind of fraud?

Grobman: There are a few reasons McAfee is looking at it. Number one, we’re trying to understand the state of the art in detection technology, so that if a video does emerge, we have the ability to provide the best assessment for whether we believe it’s been tampered with, generated through a deepfake process, or has other issues. There’s potential for other types of organizations, beyond social media, to have forensic capability. For example, the news media. If someone gives you a video, you would want to be able to understand the likelihood of whether it’s authentic or manipulated or fake.

We see this all the time with fake accounts. Someone will create an account called “AP Newsx,” or slightly modify a Twitter handle and steal images from the real account. Most people, at a glance, think that’s the AP posting a video. An organization’s reputation is one thing that lends credibility to a piece of content, and that’s why reputable organizations need tools and technology to help determine what they should treat as ground truth versus what they should be more suspicious of.
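As a rough illustration of the kind of frame-level forensic detection Grobman describes, here is a minimal, hypothetical sketch: a small PyTorch classifier that scores individual face crops as real or manipulated and averages those scores into a video-level assessment. The architecture, input format, and training data here are illustrative assumptions, not a description of McAfee’s actual tooling.

# Minimal sketch of a frame-level deepfake detector, assuming a labeled dataset of
# real vs. manipulated face crops is available for training (hypothetical setup).
import torch
import torch.nn as nn
from torchvision import models

class FrameDetector(nn.Module):
    """Binary classifier: does a single face crop look real or manipulated?"""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18()  # generic CNN feature extractor
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, frames):  # frames: (N, 3, 224, 224)
        return self.backbone(frames)

def score_video(model, frames):
    """Aggregate per-frame 'manipulated' probabilities into one video-level score."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frames), dim=1)[:, 1]  # P(manipulated) per frame
    return probs.mean().item()

if __name__ == "__main__":
    model = FrameDetector()
    # Stand-in for face crops extracted from a video; a real pipeline would run a
    # face detector and per-frame preprocessing first (random tensors used here).
    frames = torch.rand(16, 3, 224, 224)
    print(f"Video-level manipulation score: {score_video(model, frames):.3f}")

In practice, a single video-level score like this would be one signal among many; forensic systems typically combine frame-level cues with temporal and audio analysis.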

VentureBeat: It’s almost like you’re getting ready for a day when deepfakes are used in some kind of breach because we’re getting used to the idea of virtual people. I went to a Virtual Beings Summit earlier this year, and it was all about creating artificial people that seem like they’re real. That includes things like virtual influencers that put on concerts in Japan. But using these for deception purposes is where it comes back to you …

Grobman: That’s the interesting point. The same technology can be used for good and for evil objectives. If you can make a person look and sound authentic, you can think about good uses for that. Someone in late stages of Parkinson’s disease or another disorder that challenges their ability to speak — if you can provide them with technology that allows them to communicate with their loved ones, even in the late stages of a debilitating disease, that’s clearly a positive use of this technology.

The flip side is having a CEO [appear to] make statements that their product is being recalled, or that earnings are at one level when they’re actually at a very different level, and making stock prices move on that information. That opens the avenue for all kinds of financial crimes, where instead of having to steal data, criminals can manipulate markets through misinformation.


Above: The Virtual Beings Summit drew hundreds to Fort Mason in San Francisco.

Image Credit: Dean Takahashi

VentureBeat: You’ve identified something called “model hacking,” attacks on machine learning systems themselves?

Grobman: We’re doing a lot of work on adversarial AI techniques and defenses. We’re getting ready for criminals to use techniques that make AI models less effective. Some of the research we’re doing aims to understand how those adversarial techniques work, but we’re also working on mitigations to make our models more robust and less susceptible to some of those capabilities. That’s a very active area of focus.
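To give a concrete sense of one common mitigation in this space, here is a minimal, hypothetical sketch of adversarial training: the model is trained on a mix of clean inputs and deliberately perturbed ones, so that small attacker-crafted changes are less likely to flip its predictions. The toy classifier, feature count, and FGSM-style perturbation below are illustrative assumptions, not McAfee’s internal techniques.

# A minimal sketch of one common "model hacking" mitigation: adversarial training.
# The attack (FGSM) and the toy model here are generic illustrations only.
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.05):
    """Craft an adversarial version of x by nudging each input feature
    in the direction that most increases the model's loss (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y):
    """Train on both clean and adversarial examples so the model stays
    accurate even when inputs are deliberately perturbed."""
    x_adv = fgsm_perturb(model, loss_fn, x, y)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy detector: 32 numeric features -> benign/malicious (synthetic data).
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.rand(128, 32), torch.randint(0, 2, (128,))
    for step in range(5):
        print(f"step {step}: loss {adversarial_training_step(model, loss_fn, optimizer, x, y):.3f}")

The design choice here is the standard trade-off in adversarial training: adding perturbed examples to every batch costs extra compute and can slightly reduce accuracy on clean data, in exchange for making the model harder to fool with small input manipulations.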


Author: Dean Takahashi.
Source: VentureBeat
