Crippling AI cyberattacks are inevitable: 4 ways companies can prepare



When Eric Horvitz, Microsoft’s chief scientific officer, testified on May 3 before the U.S. Senate Armed Services Committee’s Subcommittee on Cybersecurity, he emphasized that organizations are certain to face new challenges as cybersecurity attacks increase in sophistication — including through the use of AI. 

While AI is improving the ability to detect cybersecurity threats, he explained, threat actors are also upping the ante.

“While there is scarce information to date on the active use of AI in cyberattacks, it is widely accepted that AI technologies can be used to scale cyberattacks via various forms of probing and automation … referred to as offensive AI,” he said. 

However, it’s not just the military that needs to stay ahead of threat actors using AI to scale up their attacks and evade detection. As enterprise companies battle a growing number of major security breaches, they need to prepare for increasingly sophisticated AI-driven cybercrimes, experts say. 

Attackers want to make a great leap forward with AI 

“We haven’t seen the ‘big bang’ yet, where ‘Terminator’ cyber AI comes on and wreaks havoc everywhere, but attackers are preparing that battlefield,” Max Heinemeyer, VP of cyber innovation at AI cybersecurity firm Darktrace, told VentureBeat. What we are currently seeing, he added, is “a big driver in cybersecurity – when attackers want to make a great leap forward, with a mindset-shifting attack that will be hugely disruptive.” 

For example, there have been non-AI-driven attacks, such as the 2017 WannaCry ransomware attack, that used what were considered novel cyber weapons, he explained, while today there is malware used in the Ukraine-Russia war that has rarely been seen before. “This kind of mindset-shifting attack is where we would expect to see AI,” he said. 

So far, at least publicly, the use of AI in the Ukraine-Russia war remains limited to Russia’s use of deepfakes and Ukraine’s use of Clearview AI’s controversial facial recognition software. But security pros are gearing up for a fight: a Darktrace survey last year found that a growing number of IT security leaders are concerned about the potential use of artificial intelligence by cybercriminals. Sixty percent of respondents said human responses are failing to keep up with the pace of cyberattacks, while nearly all (96%) have begun to protect their companies against AI-based threats – mostly related to email, advanced spear phishing and impersonation. 

“There have been very few actual research detections of real-world machine learning or AI attacks, but the bad guys are definitely already using AI,” said Corey Nachreiner, CSO of WatchGuard, which provides enterprise-grade security products to mid-market customers. 

Threat actors are already using machine learning to assist in social engineering attacks, he said. With access to large datasets of leaked passwords, they can learn patterns in how people construct passwords and use those patterns to make password-guessing attacks more effective.
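Neither Nachreiner nor the article names a specific technique, but the underlying idea can be illustrated with a toy character-bigram model: trained on a corpus of leaked passwords, it scores how “typical” a new candidate is – the same statistics attackers would exploit and defenders use to estimate guessability. This is a minimal sketch under those assumptions; the function names are invented and nothing here reflects any vendor’s tooling.

from collections import defaultdict
import math

def train_bigram_model(passwords):
    # Count character-bigram frequencies across a leaked-password corpus.
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        padded = "^" + pw + "$"  # start/end markers
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    # Convert counts to log-probabilities with add-one smoothing.
    model = {}
    for a, nexts in counts.items():
        total = sum(nexts.values()) + len(nexts)
        model[a] = {b: math.log((n + 1) / total) for b, n in nexts.items()}
    return model

def guessability(model, candidate):
    # Higher (less negative) means more "typical", i.e., easier to guess.
    padded = "^" + candidate + "$"
    return sum(model.get(a, {}).get(b, math.log(1e-6))  # unseen-bigram penalty
               for a, b in zip(padded, padded[1:]))

model = train_bigram_model(["password1", "letmein", "qwerty123", "dragon2020"])
print(guessability(model, "password2"))   # scores high: matches corpus patterns
print(guessability(model, "xK9#vQ2!zr"))  # scores low: atypical, harder to guess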

Machine-learning algorithms will also drive a larger volume of spear-phishing attacks – highly targeted, non-generic fraudulent emails – than in the past, he said. “Unfortunately, it’s harder to train users against clicking on spear-phishing messages,” he added. 

What enterprises really need to worry about

According to Seth Siegel, North American leader of artificial intelligence consulting at Infosys, security professionals may not think about threat actors using AI explicitly, but they are seeing more, faster attacks and can sense an increased use of AI on the horizon. 

“I think they see it’s getting fast and furious out there,” he told VentureBeat. “The threat landscape is really aggressive compared to last year, compared to three years ago, and it’s getting worse.” 

However, he cautioned, organizations should be worried about far more than spear phishing attacks. “The question really should be, how can companies deal with one of the biggest AI risks, which is the introduction of bad data into your machine learning models?” he said.

These efforts will come not from individual attackers, but from sophisticated nation-state hackers and criminal gangs.

“This is where the problem is – they use the most available technology, the fastest technology, the cutting-edge technology, because they need to get past not just defenses – they’re overwhelming departments that frankly aren’t equipped to handle this level of bad acting,” he said. “Basically, you can’t bring a human tool to an AI fight.” 
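To make the bad-data risk Siegel describes concrete, here is a minimal sketch of one simple poisoning technique, label flipping, in which an attacker with write access to a training pipeline relabels a fraction of one class and measurably degrades the resulting model. The data is synthetic and scikit-learn is assumed; this illustrates the failure mode, not any real incident.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Two overlapping classes of synthetic 5-feature samples.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (500, 5)), rng.normal(1.0, 1.0, (500, 5))])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poisoning: relabel 40% of class-1 training samples as class 0,
# as an attacker with write access to the pipeline might.
y_bad = y_tr.copy()
ones = np.where(y_bad == 1)[0]
flip = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
y_bad[flip] = 0
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print("clean test accuracy:   ", clean.score(X_te, y_te))
print("poisoned test accuracy:", poisoned.score(X_te, y_te))  # noticeably lower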

4 ways to prepare for the future of AI cyberattacks

Experts say security pros should take several essential steps to prepare for the future of AI cyberattacks: 

Provide continued security awareness training.

The problem with spear phishing, said Nachreiner, is that since the emails are customized to look like true business messages, they are much harder to block. “You have to have security awareness training, so users know to expect and be skeptical of these emails, even if they seem to come in a business context,” he said. 

Use AI-driven tools.

The infosec organization should embrace AI as a fundamental part of its security strategy, said Heinemeyer. “They shouldn’t wait to use AI or consider it just a cherry on top – they should anticipate and implement AI themselves,” he explained. “I don’t think they realize how necessary it is at the moment – but once threat actors start using more furious automation, and maybe more destructive attacks are launched against the West, then you really want to have AI.” 

Think beyond individual bad actors.

Companies need to refocus their perspective away from the individual bad actor, said Siegel. “They should think more about nation-state level hacking, around criminal gang hacking, and be able to have defensive postures and also understand that it’s just something they now need to deal with on an everyday basis.” 

Have a proactive strategy.

Organizations also need to make sure they are on top of their security postures, said Siegel. “When patches are deployed, you have to treat them with a level of criticality they deserve,” he explained, “and you need to audit your data and models to make sure you don’t introduce malicious information into the models.”

Siegel added that his organization embeds cybersecurity professionals onto data science teams and also trains data scientists in cybersecurity techniques. 
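As a minimal illustration of the data audit Siegel describes, the hypothetical check below compares each incoming training batch against a trusted baseline and holds statistical outliers for human review before retraining. The function name and threshold are invented for this sketch; production pipelines would layer stronger statistical and provenance checks on top.

import numpy as np

def audit_training_batch(baseline, batch, z_threshold=4.0):
    # Flag rows whose features deviate sharply from a trusted baseline:
    # a crude first-pass screen for poisoned or corrupted samples.
    mu = baseline.mean(axis=0)
    sigma = baseline.std(axis=0) + 1e-9          # avoid division by zero
    z = np.abs((batch - mu) / sigma)             # per-feature z-scores
    return np.where(z.max(axis=1) > z_threshold)[0]

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, size=(1000, 5))      # trusted historical data
batch = rng.normal(0, 1, size=(50, 5))           # incoming training batch
batch[7] = [25, 0, 0, 0, 0]                      # injected outlier
print(audit_training_batch(baseline, batch))     # -> [7], held for review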

The future of offensive AI

According to Nachreiner, more “adversarial” machine learning is coming down the pike.

“This gets into how we use machine learning to defend – people are going to use that against us,” he said. 

For example, one of the ways organizations use AI and machine learning today is to catch malware proactively, since malware now changes so rapidly that signature-based detection no longer catches it reliably. In the future, however, those ML models will themselves be vulnerable to attack by threat actors. 
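Nachreiner doesn’t name a specific technique, but the classic evasion idea can be sketched against a toy linear “detector” trained on synthetic file features: because the model is linear, nudging a sample against its weight vector steadily lowers the malicious score until the verdict flips. This is an illustrative sketch under those assumptions, not an attack on any real product.

import numpy as np
from sklearn.linear_model import LogisticRegression

# A toy "detector": logistic regression over 20 synthetic file features.
rng = np.random.default_rng(1)
X_benign = rng.normal(0.0, 1.0, size=(500, 20))
X_malware = rng.normal(1.5, 1.0, size=(500, 20))
clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_benign, X_malware]), np.array([0] * 500 + [1] * 500))

# Evasion: for a linear model, moving a sample against the weight vector
# steadily lowers its malicious score until the verdict flips.
sample = X_malware[0].copy()
w = clf.coef_[0]
step = 0.25 * w / np.linalg.norm(w)
while clf.predict([sample])[0] == 1:
    sample -= step                               # small nudge per iteration

print("original verdict: ", clf.predict([X_malware[0]])[0])  # 1 = malware
print("perturbed verdict:", clf.predict([sample])[0])        # 0 = benign
print("total perturbation:", np.linalg.norm(sample - X_malware[0]))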

The AI-driven threat landscape will continue to get worse, said Heinemeyer, and increasing geopolitical tensions will contribute to the trend. He cited a recent Georgetown University study of how China interweaves its AI research universities with nation-state-sponsored hacking. “It tells a lot about how closely the Chinese, like other governments, work with academics and universities and AI research to harness it for potential cyber operations for hacking.” 

“As I think about this study and other things happening, I think my outlook on the threats a year from now will be bleaker than today,” he admitted. However, he pointed out that the defensive outlook will also improve as more organizations adopt AI. “We’ll still be stuck in this cat-and-mouse game,” he said.



Author: Sharon Goldman
Source: VentureBeat
