AI-powered intelligent security makes the hybrid enterprise possible

At A.S. Watson Group, a global health and beauty retailer, shifting employees into the home meant that many of the company’s critical cybersecurity tools were no longer effective. Vulnerability scanning and automatic updates for endpoint protection, for instance, were only configured to work on an internal corporate network.

And so, in the midst of the pandemic, the company shifted gears and added a new vendor to fill the gap in security coverage created by having remote workers outside the corporate perimeter. The vendor that A.S. Watson selected was Vectra, which specializes in offering artificial intelligence (AI) for threat detection and response across the customer’s environments — regardless of where their users might be located geographically. 

For a global company with a complex hybrid environment like A.S. Watson, an AI-driven security approach was the only way to get the job done. And Vectra’s tool deployed rapidly, since it didn’t require installing an agent on endpoints such as laptops and desktop PCs.

“You simply hook a sensor into your network and almost instantly you see every active host in your network,” said Arjan Hurkmans, a cybersecurity manager at A.S. Watson, in an email. “The AI does an excellent job in figuring out what behavior is legit or not. We currently monitor 50,000 unique IP addresses and only a handful of detections need manual investigation from a SOC [security operations center] analyst.”

In that way, “our 24/7 SOC doesn’t waste time on detections that do not matter and can act swiftly to anything that needs to be investigated,” Hurkmans said. “We want to see a cyberthreat as fast as possible and keep our business going.”
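
That triage pattern, raising machine-scored detections across tens of thousands of hosts but escalating only the few that matter to a human analyst, can be sketched in a few lines. The fields, weights and threshold below are hypothetical placeholders, not Vectra’s actual scoring logic:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    host: str           # internal IP or hostname
    behavior: str       # e.g. "hidden-tunnel", "brute-force"
    confidence: float   # model confidence, 0.0 - 1.0
    severity: float     # estimated impact, 0.0 - 1.0

def needs_analyst(d: Detection, threshold: float = 0.7) -> bool:
    """Only detections whose combined score clears the threshold are
    escalated to a SOC analyst; everything else is logged."""
    return d.confidence * d.severity >= threshold

detections = [
    Detection("10.0.4.12", "hidden-tunnel", 0.95, 0.9),
    Detection("10.0.7.33", "port-scan", 0.40, 0.3),
]
escalate = [d for d in detections if needs_analyst(d)]
print(f"{len(escalate)} of {len(detections)} detections need manual review")
```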

Why AI is an essential technology

For countless companies around the world, being able to keep business going with a remote or hybrid workforce during the pandemic has been essential. And while the cybersecurity challenges of having workers in the home have been massive, the use of advanced AI, machine learning (ML) and deep learning technologies in many security tools has been among the key factors in making this all possible. 

Put another way, intelligent security is having its moment.

AI for security has been an “enabling foundation” that has allowed remote work to function at scale during the pandemic, said Mark Driver, a research vice president at Gartner. Because of all the differences between working at home and working in an office, determining what constitutes normal behavior in a remote-work setting, from a security perspective, is monumentally more difficult.

“You end up with significant levels of false positives, which can slow your security systems to a halt,” Driver said. “You have to have a way to cut through that — reduce those false positives, find the signal in that noise — without overly restricting the remote worker.”

And while AI for security is not a silver bullet, what it’s very good at is analyzing and watching the behavior in an environment and quickly adapting to the changes. 

In this case, the AI “understands if there are changes happening, because more employees are accessing [corporate resources] remotely,” Driver said. “It learns to adapt to those changes and can reduce those false positives — understanding what is normal but an outlier, and what is potentially a dangerous attack.”
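
The general technique Driver is describing, a behavioral baseline that keeps re-learning what “normal” means so that a lasting shift (such as a whole workforce suddenly working from home) stops generating alerts, is often built from rolling statistics. A toy sketch of the idea, not any particular vendor’s algorithm, assuming a single numeric signal such as daily upload volume:

```python
class AdaptiveBaseline:
    """Toy behavioral baseline: an exponentially weighted mean/variance
    that re-learns 'normal' as usage patterns shift (e.g. office -> home)."""

    def __init__(self, alpha: float = 0.1, z_threshold: float = 3.0):
        self.alpha = alpha              # how quickly the baseline adapts
        self.z_threshold = z_threshold  # how far out an observation must be to alert
        self.mean = None
        self.var = 1.0

    def observe(self, value: float) -> bool:
        """Return True if the observation looks anomalous, then fold it into
        the baseline so a sustained change eventually stops alerting."""
        if self.mean is None:
            self.mean = value
            return False
        z = abs(value - self.mean) / (self.var ** 0.5 + 1e-9)
        anomalous = z > self.z_threshold
        diff = value - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomalous

baseline = AdaptiveBaseline()
for mb_uploaded in [5, 6, 4, 5, 7, 6, 400, 5, 6]:   # hypothetical daily upload volumes
    if baseline.observe(mb_uploaded):
        print(f"flag for review: {mb_uploaded} MB uploaded")
```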

Thus, while intelligent security hasn’t gotten as much attention during the pandemic as has the role of collaboration tools and cloud software, it’s ultimately played a similarly essential part in making the past two years possible for businesses. And as many workforces settle into a permanently hybrid approach, the use of AI/ML in security tools will only become more crucial.

The new cybersecurity perimeter

Even before the pandemic began, 69% of companies felt they could not effectively defend against cyberattacks without AI capabilities, according to Capgemini research from 2019. It’s a fair assumption that the number is much higher now — if not verging on unanimity — amid the experience of trying to securely enable remote workers during the pandemic.

According to numerous findings, threat actors have seized on the shift of workers into the home to escalate attacks including phishing and social engineering — leading to malware deployment, such as ransomware, and data theft. “These bad actors out there have recognized that it’s no longer about attacking the perimeter in the corporation. What they’re doing now is they’re attacking the human — the person,” said Patrick Harr, CEO of AI-powered security vendor SlashNext.

In other words, with the shift to remote work, “the users are now the new perimeter of security,” Harr said.

Email phishing attacks have surged as high as 220% above normal at points during the pandemic, according to F5 — while the total number of ransomware attacks more than doubled in 2021, SonicWall reports. Data leaks related to ransomware jumped 82% last year, CrowdStrike data shows, and 79% of IT teams report an increase in endpoint-related breaches, according to HP Wolf Security. 

Attackers quickly pinpointed the shift to remote work as an ideal scenario for their purposes: reduced security protections, increased email communication, and general stress and confusion. Attackers targeting the remote workforce “know they’re distracted. They know they’re busy,” Harr said.

Threat actors also embraced new ways to target workers: Maybe the workers themselves won’t click on a phishing email — but maybe their kid who uses the same computer will.

“The threat surface for companies has expanded, because it’s moved into the house — and you have no control over what’s going on in that household,” said Chuck Everette, director of cybersecurity advocacy at Deep Instinct, which offers deep learning technologies for protecting endpoints. “Cyber criminals go after the weakest link in an organization’s defenses — frequently, untrained individuals.”

Autonomous security 

To combat these tactics, customers have turned to intelligent security companies such as Deep Instinct in order to head off malicious cyberthreats before they can even reach their remote workers.

The company’s deep learning algorithm is “fully autonomous,” trained on huge sets of raw data samples, and ultimately capable of predicting known and unknown attacks before they take place, Everette said. The technology can do this because it “thinks like a human brain,” he said.
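
Deep Instinct doesn’t publish its model internals, but the general pattern Everette describes, scoring a file statically from its raw bytes before it ever executes, can be illustrated with a deliberately tiny stand-in. The byte-histogram features, hard-coded weights and 0.5 threshold below are placeholders for illustration only; a production model would be a deep network trained offline on large labeled corpora:

```python
import math

def byte_histogram(data: bytes) -> list[float]:
    """Normalized 256-bin histogram of raw bytes: a crude stand-in for
    the learned representations a real deep model would extract."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = max(len(data), 1)
    return [c / total for c in counts]

def score(features: list[float], weights: list[float], bias: float) -> float:
    """One linear layer plus a sigmoid: a stand-in for a trained network."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder "model": real weights would come from offline training, not be hard-coded.
weights = [0.01] * 256
bias = -1.0

sample = open(__file__, "rb").read()          # any file works for the demo
verdict = score(byte_histogram(sample), weights, bias)
print(f"malicious probability (toy model): {verdict:.2f}")
if verdict > 0.5:
    print("block before execution")
```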

At financial services firm Equity Trustees, Deep Instinct and its deep learning approach have proven invaluable amid the shift of workers into the home, according to the company’s chief technology officer, Phing Lee. The Deep Instinct technology has brought the ability to actively detect and stop new threats from even entering the environment — across every device used by employees — including sophisticated advanced persistent threats and previously unknown, zero-day attacks, Lee said.

Meanwhile, false positives — which previously ate up 40% of the company’s SOC resources — have been “dramatically minimized,” he said. For endpoint protection alerts, false positives have been reduced by 95% for the company using Deep Instinct’s solution.

Because the deep learning technology prevents threats from executing, “our security team can dedicate more time to understand where the threats originate from, analyze those threats and take steps to improve our overall security posture,” Lee said in an email.

Deep learning also comes into play as one of the AI/ML technologies behind Ivanti’s Neurons solution suite. Ivanti Neurons can be used for securing endpoints with capabilities including anomaly detection and self-healing for issues such as vulnerabilities and configuration drift, said Ivanti president Nayaki Nayyar.

The Ivanti Neurons technology can also automatically discover all of a customer’s assets, and deliver intelligence about risks from unpatched devices — both of which have proven extremely useful for businesses with distributed workforces, Nayyar said.
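
Ivanti hasn’t detailed how Neurons implements this, but the “self-healing” idea, detecting a device that has drifted from a known-good configuration and putting it back automatically, can be sketched simply. The baseline settings and remediation step below are hypothetical; a real agent would call OS or MDM APIs rather than edit a dictionary:

```python
# Hypothetical known-good endpoint configuration.
BASELINE = {
    "firewall_enabled": True,
    "disk_encryption": True,
    "auto_update": True,
    "rdp_enabled": False,
}

def detect_drift(current: dict) -> dict:
    """Return the settings that differ from the approved baseline."""
    return {k: v for k, v in current.items() if BASELINE.get(k) != v}

def self_heal(current: dict) -> dict:
    """'Self-healing' in miniature: reset drifted settings to the baseline."""
    drift = detect_drift(current)
    for key in drift:
        current[key] = BASELINE[key]
    return drift

device = {"firewall_enabled": True, "disk_encryption": False,
          "auto_update": True, "rdp_enabled": True}
fixed = self_heal(device)
print("remediated:", sorted(fixed))
```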

At SouthStar Bank, deploying Ivanti Neurons for these use cases has enabled the bank’s IT staff to more easily handle many tasks around securing its remote workers, according to SouthStar Bank IT specialist Jesse Miller. “Without these technologies, it would’ve been extremely difficult to make this all happen,” he said.

Behavior-based security

Another frontier for intelligent security involves using AI/ML technology to assess user behavior, providing new avenues for improving security controls. 

Darktrace, a provider of self-learning AI for security, has been a pioneer in terms of behavior-based security approaches using artificial intelligence. In January, spurred by the need to better protect distributed workforces, the company for the first time unveiled capabilities for autonomous response on endpoint devices.

Using AI/ML, the tool assesses the behavior of users on endpoints and learns what’s normal for them, and what represents a deviation. It then prevents the anomalous activity from taking place, while allowing any normal behavior to continue. 

All of this is done autonomously on the endpoint, and is tailored to the exact context of the user and device, said Max Heinemeyer, director of threat hunting at Darktrace. For instance, the tool could curtail one specific type of activity that has been deemed abnormal, and do so for just a limited amount of time to give the security team a chance to catch up, Heinemeyer said.

This avoids the major pitfalls of many security technologies — which either block too much, and interrupt productivity, or don’t block enough, he said. Instead, the technology is “actually responding in real time, based on the context and situation,” he said. “It’s behavioral containment.”
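
Heinemeyer’s “behavioral containment” boils down to blocking one specific activity, on one specific device, for a bounded window, while leaving everything else alone. A minimal sketch of that policy shape, with made-up activity names and timings:

```python
import time

class Containment:
    """Temporarily block one activity type on one host, and nothing more."""

    def __init__(self):
        self._blocks = {}   # (host, activity) -> expiry timestamp

    def contain(self, host: str, activity: str, seconds: int = 600):
        """Block just this activity, giving the security team time to investigate."""
        self._blocks[(host, activity)] = time.time() + seconds

    def allowed(self, host: str, activity: str) -> bool:
        expiry = self._blocks.get((host, activity))
        if expiry is None or time.time() > expiry:
            return True          # normal behavior continues untouched
        return False

policy = Containment()
policy.contain("laptop-042", "smb-file-transfer", seconds=900)
print(policy.allowed("laptop-042", "smb-file-transfer"))   # False: contained
print(policy.allowed("laptop-042", "web-browsing"))        # True: unaffected
```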

At Groupement Hospitalier Territorial de Dordogne in France, several weeks after deploying Darktrace technology in mid-2021, the hospital system was struck with a ransomware attack, said CISO Vincent Genot. But Darktrace’s autonomous response capability intervened to block the attack before it could cause any interruption to operations. The hospital system was able to “continue working, continue being connected to the internet and continue to care for patients even while under attack,” Genot said in an email.

On a more day-to-day level, Darktrace’s AI-driven technology is “clever enough” to know the difference between unusual behavior that is harmless, such as an employee working from a café, and a malicious attack, he said.

And with a greater number of employees working remotely, the fact that the vendor’s AI can now defend endpoint devices is a “huge game-changer,” Genot said. As one recent example, Darktrace’s AI spotted that an employee had connected their laptop to a potentially insecure Wi-Fi network. “The AI flagged this immediately, allowing us to act before attackers could compromise our organization,” Genot said.

Importantly, because Darktrace offers security for cloud, network, software-as-a-service and email, in addition to endpoint, “the contextual awareness the algorithms gain from other parts of our digital estate is beneficial in stopping endpoint attacks,” Genot said. “Darktrace is our AI-powered eyes looking across the entire digital business.”

How AI/ML prevents phishing exploits

When it comes to email security, AI/ML has been used for years for automatically quarantining malicious emails. But some vendors, including Darktrace, aim to offer enhanced email security by using AI/ML for analyzing user behavior in email applications — another way that behavior is being factored in for improving cybersecurity.

Doing so can unearth and address additional security risks, ranging from unintended errors to insider threats, said Kevin Lynch, CEO at security consultancy Optiv. By correlating and learning the behavior of workers, “you start to see the behavioral tendencies of certain participants in your environment versus others,” Lynch said. 

Such systems can then add more rules for workers who need them — and fewer rules for those who don’t. For companies that want to address the critical human element of email security, a static policy set will not do the job, according to Lynch. But for assessing the evolution of behaviors and actions, he said, “machine learning is perfect for that.”
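
Lynch’s point, more rules for the workers who need them and fewer for those who don’t, implies a per-user risk score driving the policy rather than a single static rule set. A simplified sketch with illustrative signals and tiers, not Optiv’s or any vendor’s actual logic:

```python
def risk_score(history: dict) -> float:
    """Combine a few behavioral signals into a rough 0-1 risk score."""
    return min(1.0, 0.3 * history.get("phish_clicks", 0)
                    + 0.2 * history.get("external_forwards", 0)
                    + 0.1 * history.get("policy_violations", 0))

def controls_for(score: float) -> list[str]:
    """Tighter controls only where the behavior warrants them."""
    controls = ["standard-filtering"]
    if score >= 0.3:
        controls.append("link-rewriting")
    if score >= 0.6:
        controls += ["attachment-sandboxing", "send-delay-warnings"]
    return controls

users = {"alice": {"phish_clicks": 0},
         "bob": {"phish_clicks": 2, "external_forwards": 1}}
for user, history in users.items():
    print(user, controls_for(risk_score(history)))
```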

This capability proved itself in a major way when the workforce shifted into the home environment, and email behaviors suddenly changed, said Josh Yavor, chief information security officer at email security vendor Tessian. 

Rather than depending on a set of rules to govern email security, Tessian’s product uses machine learning to evaluate behavior and adapt to changes — without customers needing to do very much, Yavor said. The technology can “establish new meanings of what normal looks like” automatically, he said. “This type of technological approach has the value of being flexible in situations exactly like this.”

Like many companies, Waverton Investment Management saw a rise in phishing campaigns and an increase in the number of impersonation attempts following the shift to remote work, said Mudassar Ulhaq, chief information officer at the investment management firm.

“The challenge with remote working is that your people are in isolated environments,” Ulhaq said in an email. “They can’t check with their colleagues if the email is legitimate, and you’re ultimately asking them to make the call, alone, on whether an email is safe or not.”

Tessian’s use of ML to understand what normal user behavior looks like has thus been key to detecting and preventing threats during the pandemic, he said. 

“Manually fine-tuning your security tools to every single individual’s risk profile is incredibly time-consuming,” Ulhaq said. But with Tessian’s ML-driven solution, Waverton has been able to implement a security strategy that is “tailored to protect every employee, without burdening the security team.”

Evolving tactics

AI/ML technology can enable email security tools to automatically adapt to the ways that attackers are evolving, as well, said Aaron Higbee, chief technology officer at email security vendor Cofense. “We don’t know how an attacker will evolve their tactic inside of an email to bypass automated filtering technology,” Higbee said. “We just know that they will.”

To help counter these changing tactics, Cofense deploys ML-driven computer vision technology that analyzes the visual appearance of both the email body and any web pages linked from the email. And if something looks off, the email is prevented from reaching the user.

Thus, while the text that a phishing email uses will inevitably keep changing in order to sneak past traditional filters, Cofense spots the visual clues that point toward malicious intent, Higbee said.
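
Cofense hasn’t spelled out its models, but the underlying idea, judging a message or linked page by how it looks rather than what its text says, can be illustrated with a perceptual hash: reduce a rendered page to a tiny grayscale grid, hash it, and compare the result against known brand pages. The 4x4 grids below stand in for real screenshots:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Perceptual hash of a small grayscale image: one bit per pixel,
    set when the pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Tiny stand-ins for rendered login pages (real systems hash full screenshots).
legit_login = [[200, 200, 50, 50], [200, 200, 50, 50],
               [30, 30, 230, 230], [30, 30, 230, 230]]
suspect_page = [[198, 201, 55, 48], [202, 197, 52, 51],
                [28, 33, 228, 231], [31, 29, 232, 229]]

distance = hamming(average_hash(legit_login), average_hash(suspect_page))
if distance <= 2:
    print("page looks like a known brand's login - check the serving domain")
```

A page that is visually near-identical to a known brand’s login but served from an unrelated domain is a strong phishing signal, which is the kind of cue text-only filters miss.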

At managed IT services firm Rader, the Cofense solution has excelled at identifying impostor emails and purging them from user inboxes during the pandemic, said Rader chief information security officer Tim Fournet. During a time when verifying the authenticity of emails has become more challenging for workers, Cofense’s AI/ML capabilities “have increased the reliability of our email filters,” Fournet said. 

Beyond email, a growing number of phishing attacks are now taking place in other messaging avenues — including mobile SMS, Facebook Messenger and increasingly even LinkedIn, said SlashNext’s Harr.

His company’s solution uses AI/ML technologies including computer vision and natural language processing — trained on massive quantities of data — to understand the behavior and the intent in messages, and detect phishing attempts with high accuracy. Crucially, the solution works across email, browser and mobile, including personal communication channels.
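
SlashNext hasn’t published how its models work, but the intent-detection idea, looking at what a message is pressuring the recipient to do regardless of the channel it arrives on, can be illustrated with a deliberately crude scorer over urgency and call-to-action cues. A trained NLP model would replace the keyword lists below:

```python
import re

URGENCY = ("urgent", "immediately", "verify your account",
           "password expires", "suspended", "act now")
ASKS = (r"https?://\S+", r"\bclick\b", r"\blog ?in\b", r"\bconfirm\b")

def phishing_intent(message: str) -> float:
    """Rough 0-1 intent score: urgency language combined with a call to act."""
    text = message.lower()
    urgency = sum(cue in text for cue in URGENCY)
    asks = sum(bool(re.search(p, text)) for p in ASKS)
    return min(1.0, 0.2 * urgency + 0.2 * asks)

sms = "URGENT: your account is suspended. Verify your account now: http://x.example/verify"
print(f"intent score: {phishing_intent(sms):.1f}")   # well above a 0.5 alert threshold
```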

With workers in the home and often working on mobile devices, “it’s important to be able to protect the personal channels as well as the corporate channels,” Harr said. 

AI’s crucial role

Ultimately, protecting personal as well as corporate channels is just one of the many things that businesses now need to rethink as they look to secure their workers and their corporate data in the age of remote and hybrid work. And to make it all happen, intelligent security should play a pivotal role.

In the pre-pandemic world, said Oliver Tavakoli, chief technology officer at Vectra, users would sit behind a firewall, and access applications and data that were also behind a firewall.

Now, “you sit at home, connect through the internet, and access another thing that is outside of a firewall connected to the internet,” Tavakoli said. “In that world, you just open yourself up to much more threat.”

To effectively counter this increased cyberthreat, the key for security teams is to get “very good at separating the signal from the noise,” he said. “And you can’t do that without AI and ML.”



Author: Kyle Alspach
Source: VentureBeat
