AI threat detection that ‘understands you’ critical to thwarting attacks

In today’s complicated cybersecurity landscape, detection is just one part of the puzzle. 

With threat actors exploiting everything from open-source code to AI tools to multi-factor authentication (MFA), security must be adaptive and continuous across an organization’s entire digital ecosystem. 

AI threat detection — or AI that “understands you” — is a critical tool that can help organizations protect themselves, said Toby Lewis, head of threat analysis at cybersecurity platform Darktrace.

As he explained, the technology applies algorithmic models that build a baseline of an organization’s “normal.” It can then identify threats — whether novel or known — and make “intelligent micro-decisions” about potentially suspicious activity.
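To make that idea concrete, the sketch below is a minimal, hypothetical baseline model in Python. It is not Darktrace’s algorithm: the device name, the traffic metric (megabytes sent per hour) and the z-score threshold are assumptions chosen for clarity. It simply shows how a learned picture of a device’s “normal” can flag a sharp deviation without any prior signature of the attack.

```python
# Toy baseline-of-"normal" anomaly detector (illustrative only, not Darktrace's method).
# Each device's outbound traffic is tracked; a large z-score deviation gets flagged.
from statistics import mean, stdev

class DeviceBaseline:
    def __init__(self, threshold: float = 3.0):
        self.history: dict[str, list[float]] = {}  # device -> observed MB sent per hour
        self.threshold = threshold                 # z-score cutoff for "suspicious"

    def observe(self, device: str, mb_sent: float) -> bool:
        """Record an observation; return True if it deviates sharply from the baseline."""
        samples = self.history.setdefault(device, [])
        suspicious = False
        if len(samples) >= 10:                     # only judge once some history exists
            mu, sigma = mean(samples), stdev(samples)
            if sigma > 0 and abs(mb_sent - mu) / sigma > self.threshold:
                suspicious = True                  # the "micro-decision": flag, don't block
        samples.append(mb_sent)
        return suspicious

baseline = DeviceBaseline()
for hour_mb in [5, 6, 5, 7, 6, 5, 6, 7, 5, 6]:     # a quiet learning period
    baseline.observe("laptop-42", hour_mb)
print(baseline.observe("laptop-42", 900))          # exfiltration-sized burst -> True
```

A production system would model many more signals (connections, logins, peer-group behavior) and weigh them together, but the underlying pattern is the same: learn “normal” first, then score deviations.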


“Cyber-attacks have become too fast, too frequent and too sophisticated,” said Lewis. “It’s not possible for a security team to be everywhere, at all times and in real time at scale.”

Protecting ‘sprawling’ digital landscapes

As Lewis pointed out, “there’s no question” that complexity and operational risk go hand in hand as it becomes more difficult to manage and protect the “sprawling digital landscapes” of modern organizations.

Attackers are following data to the cloud and SaaS applications, as well as to a distributed infrastructure of endpoints — from mobile phones and IoT sensors to remotely used computers. Acquisitions with vast new digital assets and the integration of suppliers and partners also put today’s organizations at risk, said Lewis.

Meanwhile, cyber threats are not only more frequent; barriers to entry for would-be bad actors also continue to fall. Of particular concern is the growing commercial availability of offensive cyber tools that produce increasing volumes of low-sophistication attacks “bedeviling” CISOs and security teams.

“We’re seeing cyber-crime commoditized as-a-service, giving threat actors packaged programs and tools that make it easier to set themselves up in business,” said Lewis.

Also of concern is the recent release of ChatGPT — an AI-powered content-creation tool — by OpenAI. ChatGPT could be used to write code for malware and other malicious purposes, Lewis explained.

“Cyber crime actors are continuing to improve their ROI, which will mean constant evolution of tactics in ways that we may not be able to predict,” he said.

AI heavy lifting

This is where AI threat detection can come in. AI “heavy lifting” is crucial to protect organizations against attacks, said Lewis. AI’s always-on, continuously learning capability allows the technology to scale and cover the enormous volume of data, devices and other digital assets under an organization’s purview, regardless of where they are located. 

Typically, Lewis noted, AI models have focused on existing signature-based approaches. However, signatures of known attacks quickly become outdated as attackers rapidly shift tactics. Relying on historical data and past behavior is less effective when it comes to newer threats or “significant deviations in tradecraft by known attackers.”
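The gap Lewis describes can be seen in miniature below. This hypothetical Python snippet compares an exact-match signature lookup against a trivially mutated payload; the payload bytes and the use of MD5 are invented for illustration, not drawn from any real malware feed.

```python
# Why static signatures lag behind novel tradecraft (toy example).
import hashlib

def md5(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

original = b"malicious-payload-v1"
variant = b"malicious-payload-v1 "            # a one-byte change, same behavior

known_bad_hashes = {md5(original)}            # the "signature database" of past attacks

print(md5(original) in known_bad_hashes)      # True:  the known sample is caught
print(md5(variant) in known_bad_hashes)       # False: the novel variant slips through
```

A behavioral model of the kind described above would instead ask whether the host running either payload starts acting unlike itself, which does not depend on how the payload is packaged.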

“Organizations are far too complex for any team of security and IT professionals to have eyes on all data flows and assets,” said Lewis. Ultimately, the sophistication and speed of AI “outstrips human capacity.”

Identifying attacks in real time

Darktrace applies self-learning AI that is “continuously learning an organization, from moment to moment, detecting subtle patterns that reveal deviations from the norm,” said Lewis. 

This “makes it possible to identify attacks in real time, before attackers can do harm,” he said. 

For example, he pointed to the recent widespread Hafnium attacks that exploited Microsoft Exchange. This series of new, unattributed campaigns was identified and disrupted by Darktrace across a number of its customers’ environments.

The company’s AI detected unusual activity and anomalies of which, at the time, there was no public knowledge. It was able to stop an attack leveraging a zero-day or a freshly released n-day vulnerability weeks before attribution, Lewis explained.

Otherwise, he pointed out, many organizations were unprepared and vulnerable to the threat until Microsoft disclosed the attacks a few months later. 

As another example, in March 2020 Darktrace detected and stopped several attempts to exploit the Zoho ManageEngine vulnerability, two weeks before the attack was publicly discussed and then attributed to the Chinese threat actor APT41.

“This is where AI works best — autonomously detecting, investigating, and responding to advanced and never-before-seen threats based on a bespoke understanding of the organization being targeted,” said Lewis. 

He pointed out that “these ‘known unknowns,’ which are difficult or impossible to pre-define in an unpredictable threat environment, are the new norm in cyber.”

Using AI to fight AI

Darktrace started out in 2013 using Bayesian inference models to establish normal behavioral patterns and detect deviations from them. Today, the company has more than 100 patents and patents pending originating from its AI Research Center in the UK and its R&D center in The Hague.
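As a rough intuition for that Bayesian starting point, the hypothetical sketch below keeps a Beta posterior over how often a device contacts never-before-seen destinations and flags a day that departs sharply from the learned rate. The numbers and the tenfold alerting rule are assumptions for illustration, not a description of Darktrace’s production mathematics.

```python
# Toy Bayesian baseline: Beta posterior over the rate of connections to novel hosts.
class BetaRate:
    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha, self.beta = alpha, beta            # Beta(1, 1) is a uniform prior

    def update(self, novel: int, total: int) -> None:
        """Fold in one day: `novel` new destinations out of `total` connections."""
        self.alpha += novel
        self.beta += total - novel

    @property
    def expected_rate(self) -> float:
        return self.alpha / (self.alpha + self.beta)   # posterior mean

rate = BetaRate()
for _ in range(30):                                    # a quiet month: ~0.5% novel hosts
    rate.update(novel=1, total=200)
print(round(rate.expected_rate, 4))                    # roughly 0.005

todays_rate = 150 / 200                                # today: 150 of 200 connections are new
print(todays_rate > 10 * rate.expected_rate)           # True -> worth investigating
```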

Lewis explained that Darktrace’s teams of mathematicians and other multidisciplinary experts are constantly seeking ways to solve cyber challenges with AI and mathematics.

For example, some of its most recent research has looked at how graph theory can be used to continuously map out cross-domain, realistic and risk-assessed attack paths across a digital ecosystem. 
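As a loose illustration of that line of research, the following sketch runs a breadth-first search over a small, hypothetical asset graph to find the fewest-hop path from an internet-facing system to a critical database. The asset names and edges are invented, and a realistic model would weight each hop by exploitability and business risk rather than treating all moves as equal.

```python
# Toy attack-path search over a hypothetical asset graph (not Darktrace's research code).
from collections import deque

lateral_moves = {                                      # asset -> assets reachable from it
    "internet": ["vpn-gateway"],
    "vpn-gateway": ["laptop-42", "file-server"],
    "laptop-42": ["file-server"],
    "file-server": ["domain-controller"],
    "domain-controller": ["crown-jewel-db"],
}

def shortest_attack_path(start: str, target: str) -> list[str] | None:
    """Breadth-first search for the fewest-hop path an attacker could take."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in lateral_moves.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_attack_path("internet", "crown-jewel-db"))
# ['internet', 'vpn-gateway', 'file-server', 'domain-controller', 'crown-jewel-db']
```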

Also, its researchers have tested offensive AI prototypes against its technology. 

“We might call this a war of algorithms,” said Lewis. Or, simply put, fighting AI with AI. 

As he put it: “As we start to see attackers weaponizing AI for nefarious purposes, it will be more critical that security teams use AI to fight AI-generated attacks.”

Author: Taryn Plumb
Source: VentureBeat
