As new technologies continually reshape the cybersecurity landscape, malicious actors are exploiting new ways to plot shrewder and more successful attacks. According to a report by IBM, the global average cost of a data breach is $4.35 million, and the United States holds the title for the highest data breach cost at $9.44 million, more than double the global average.
In the same study, IBM found that organizations using artificial intelligence (AI) and automation had a 74-day shorter breach life cycle and saved an average of $3 million more than those without. As the global market for AI cybersecurity technologies is predicted to grow at a compound growth rate of 23.6% through 2027, AI in cybersecurity can be considered a welcome ally, aiding data-driven organizations in deciphering the incessant torrent of incoming threats.
AI technologies like machine learning (ML) and natural language processing provide rapid real-time insights for analyzing potential cyberthreats. Furthermore, using algorithms to create behavioral models can aid in predicting cyber assaults as newer data is collected. Together, these technologies are assisting businesses in improving their security defenses by enhancing the speed and accuracy of their cybersecurity response, allowing them to comply with security best practices.
Can AI and cybersecurity go hand-in-hand?
As more businesses embrace digital transformation, cyberattacks have proliferated in equal measure. With hackers conducting increasingly complex attacks on business networks, AI and ML can help protect against these sophisticated threats. Indeed, these technologies are becoming commonplace tools for cybersecurity professionals in their continuous war against malicious actors.
AI algorithms can also automate many tedious and time-consuming tasks in cybersecurity, freeing up human analysts to focus on more complex and vital tasks. This can improve the overall efficiency and effectiveness of security operations. In addition, ML algorithms can automatically detect and evaluate security issues. Some can even respond to threats automatically. Many modern security tools, like threat intelligence, anomaly detection and fraud detection, already utilize ML.
Dick O’Brien, principal intelligence analyst with Symantec’s threat hunter team, said that AI today plays a significant part in cybersecurity and is fundamental in addressing key security challenges.
“We are seeing attackers deploy legitimate software for nefarious purposes or ‘living off the land’ — using tools already on the target’s network for their own purposes,” said O’Brien. “Identifying malicious files is no longer enough. Instead, we now need to be able to identify malicious patterns of behavior, and that’s where AI comes into its own.”
He said that manually inspecting and adjusting policy for every organization doesn’t scale, and that policies that conform to the lowest common denominator leave organizations at risk.
“Using AI for adaptive security allows organizations to adapt and mold cybersecurity policies specific to each organization,” he said. “We believe that an AI-based behavior detection technology should be a key component in the stack of any organization with a mature security posture.”
Similarly, there are several ML algorithms that can aid in creating a baseline for real-time threat detection and analysis:
- Regression: Detects correlations between different datasets to understand how they relate. Regression can, for example, predict the next operating-system call and flag an abnormality by comparing the forecast with the actual call.
- Clustering: Identifies similarities between datasets and groups them based on their common features. Clustering works directly on new data without relying on historical examples.
- Classification: Classification algorithms learn from historical observations and apply what they learn to new, unseen data. The method involves taking artifacts and assigning each one of several labels: for instance, classifying a file as legitimate software, adware, ransomware or spyware (a minimal sketch follows after this list).
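To make the classification approach above concrete, here is a minimal sketch in Python using scikit-learn. The features (file size, byte entropy, imported-API count), the training values and the label set are hypothetical placeholders rather than a real detection pipeline; the point is only that a model trained on labeled historical observations can assign one of several labels to a new, unseen file.

```python
# Minimal, hypothetical sketch -- not a production malware detector.
# Features per file: [file_size_kb, byte_entropy, imported_api_count] (illustrative).
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [120, 4.1, 35],   # labeled "legitimate" in this toy dataset
    [300, 7.6, 12],   # labeled "ransomware" (packed, high entropy)
    [80,  5.9, 48],   # labeled "adware"
    [150, 7.2, 9],    # labeled "spyware"
]
y_train = ["legitimate", "ransomware", "adware", "spyware"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Apply the trained model to a new, unseen file's features
new_file = [[290, 7.5, 11]]
print(clf.predict(new_file))        # predicted label, e.g. ['ransomware']
print(clf.predict_proba(new_file))  # per-label confidence scores
```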
“Today’s attack surface and sophistication have grown to a point where AI is now essential to deal with the massive amount of data, IT complexity, and the workforce shortage facing security teams. However, for AI to succeed in security, it must also be explainable, unbiased and trusted for this defense — empowering security analysts to operate the SOC more efficiently,” said Sridhar Muppidi, IBM Fellow and CTO at IBM Security.
Muppidi said that incorporating AI could help companies detect and counter such sophisticated, targeted attacks rather than relying solely on traditional one-factor or two-factor authentication.
“AI-based behavioral biometrics can help validate the user based on techniques like keystrokes, time spent on a page, user navigation or mouse movement. AI can help companies evolve from static user validation to more dynamic risk-based authentication mechanisms to address fast-growing online fraud,” he said.
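As an illustration of the risk-based approach Muppidi describes, here is a minimal sketch that assumes a per-user baseline of behavioral features has already been collected. The feature names, numbers and threshold are fabricated; a production behavioral-biometrics engine would use far richer models than a simple z-score.

```python
# Minimal sketch, not a real biometric engine: score a session against a stored
# per-user baseline. All profile values, session values and thresholds are made up.

# Hypothetical stored profile: (mean, standard deviation) per behavioral feature
profile = {
    "avg_keystroke_interval_ms": (145.0, 20.0),
    "seconds_on_login_page":     (8.0, 3.0),
    "mouse_speed_px_per_s":      (620.0, 110.0),
}

def risk_score(session: dict) -> float:
    """Average absolute z-score across features; higher means less typical."""
    deviations = [
        abs(session[feature] - mean) / std
        for feature, (mean, std) in profile.items()
    ]
    return sum(deviations) / len(deviations)

# A session that looks nothing like this user's usual behavior
session = {
    "avg_keystroke_interval_ms": 60.0,
    "seconds_on_login_page": 1.5,
    "mouse_speed_px_per_s": 1900.0,
}

score = risk_score(session)
# Arbitrary illustrative threshold: step up authentication instead of hard-blocking
print("step up to MFA" if score > 3.0 else "allow", f"(risk={score:.1f})")
```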
Security challenges in traditional security architectures
Conventionally, security tools rely only on signatures or attack indicators to identify threats. While this technique can quickly identify previously discovered threats, signature-based tools cannot detect threats that have yet to be found. Conventional vulnerability management techniques respond to incidents only after hackers have already exploited a vulnerability, and organizations struggle to manage and prioritize the large number of new vulnerabilities they encounter daily.
Because most organizations lack a precise naming convention for applications and workloads, security teams spend much of their time determining which set of workloads belongs to a given application. AI can enhance network security by learning network traffic patterns and recommending security policies and functional workload groupings.
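One hedged sketch of that workload-grouping idea: if per-workload traffic features are already being collected, an unsupervised clustering algorithm can group workloads that communicate in similar ways into candidate applications. The workload names, features and values below are invented for illustration.

```python
# Minimal sketch, assuming per-workload network-traffic features already exist.
# Groups workloads with similar communication patterns into candidate applications.
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

workloads = ["vm-01", "vm-02", "vm-03", "vm-04", "vm-05", "vm-06"]
# Hypothetical features: [avg_connections_per_min, share_of_traffic_on_443, share_on_5432]
features = [
    [200, 0.95, 0.00],  # web-tier-like traffic
    [210, 0.93, 0.00],
    [40,  0.10, 0.85],  # database-tier-like traffic
    [35,  0.05, 0.90],
    [500, 0.98, 0.00],
    [45,  0.08, 0.88],
]

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Each group is a candidate "application" that can share a security policy
for name, group in zip(workloads, labels):
    print(f"{name} -> group {group}")
```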
Allie Mellen, senior analyst at Forrester, says the biggest challenge with existing security technologies is that they fail to prioritize the analyst experience.
“Security technologies do not effectively address the typical security analyst workflow from detection to investigation to response, which makes them difficult to use and puts security analysts at a disadvantage,” said Mellen. “In particular, security technologies are not built to enable investigation – they focus strongly on detection, which leaves analysts spending incredible amounts of time on an investigation that could be more useful in other areas.”
“Traditional cybersecurity systems often rely only on signature- and reputation-based methods,” said Adrien Gendre, chief tech and product officer and cofounder at Vade. “Modern hackers have become more sophisticated and can get around traditional filters in several ways, such as display name spoofing and obfuscating URLs. With AI, a trend spotted in one part of the world can be flagged and mitigated before it ever makes its way to another part of the world by analyzing patterns, trends and anomalies.”
AI security is revolutionizing threat detection and response
One of the most effective applications of ML in cybersecurity is sophisticated pattern detection. Cyberattackers frequently hide within networks and avoid discovery by encrypting their communications, using stolen passwords, and deleting or changing records. However, an ML program that detects anomalous activity can catch them in the act. Furthermore, because ML is far quicker than a human security analyst at spotting data patterns, it can detect movements that traditional methodologies miss.
For example, by continually analyzing network data for variations, an ML model can detect suspicious trends in email transmission frequency that may indicate an account is being used for an outbound attack. Furthermore, ML can dynamically adjust to change by consuming fresh data and responding to evolving circumstances.
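A minimal sketch of that kind of frequency-based anomaly detection, assuming hourly outbound-email counts per account are available; the counts and threshold are made up, and a real system would use a more robust model than a trailing z-score.

```python
# Minimal sketch, assuming hourly counts of outbound email per account are logged.
# Flags hours whose volume deviates sharply from a trailing baseline -- a possible
# sign of a compromised account sending spam or phishing. Counts are fabricated.
import statistics

hourly_sent = [12, 9, 15, 11, 14, 10, 13, 12, 11, 480]  # last hour spikes

WINDOW = 8            # hours of trailing baseline
THRESHOLD_SIGMAS = 3.0

for hour in range(WINDOW, len(hourly_sent)):
    baseline = hourly_sent[hour - WINDOW:hour]
    mean = statistics.mean(baseline)
    std = statistics.pstdev(baseline) or 1.0  # avoid division by zero
    z = (hourly_sent[hour] - mean) / std
    if z > THRESHOLD_SIGMAS:
        print(f"hour {hour}: {hourly_sent[hour]} emails (z={z:.1f}) -> investigate")
```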
Ed Bowen, cyber and strategic risk managing director at Deloitte, believes that AI works in conjunction with fundamental good cyberhygiene, such as network segmentation, to isolate point-of-sale details and PII.
“AI can help augment network monitoring of each segment for signs of lateral movement and advanced persistent threats,” said Bowen. “In addition, AI-driven reinforcement learning can be used as a ‘red team’ to probe networks for vulnerabilities that can be reinforced to reduce the chances of a breach.”
Bowen also said that AI-driven behavioral analytics could prove to be highly useful in identity management.
“Maintaining data on user behavior and then using pattern recognition to identify high-risk activities on the network create effective signals in threat detection. Organizations can also use deep learning to identify the anomalous activity as adversaries scan network assets seeking vulnerabilities,” he said. “But, the cyber platform(s) architecture must be well designed and maintained so AI can effectively be applied.”
Likewise, Katherine Wood, senior data scientist at Signifyd, said that AI-based behavioral analytics and anomaly detection technologies could be an effective solution to match the speed and scale at which automated fraud operates.
“When a fraudster gains access to an account or identifies viable stolen financials, bots can also be used to mass-purchase valuable products at an incredibly rapid pace. Using AI-based detection and fraud protection, organizations today can rapidly mitigate such threats,” said Wood. “The most advanced fraud protection solutions now rely on ML to process thousands of signals in a transaction to instantly detect and block fraudulent orders and automated attacks. In addition, AI’s broad visibility enables security models to detect sudden changes in behavior that might indicate account takeover, an unusual spike in failed login attempts that heralds an automated credential stuffing attack, or impossibly fast browsing and purchasing that indicates bot activity.”
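To illustrate just one of the signals Wood mentions, here is a hedged sketch of spotting a credential-stuffing burst: many distinct usernames failing to log in from a single source within a short window. The event stream, IP addresses and thresholds are fabricated, and real fraud platforms combine thousands of such signals in ML models rather than a single rule.

```python
# Minimal sketch, assuming a stream of failed-login events is available.
# Many distinct usernames failing from one source in a short window is a classic
# credential-stuffing signal. Timestamps, IPs and usernames are fabricated.
from collections import defaultdict

failed_logins = [
    # (timestamp_seconds, source_ip, username)
    (0,  "203.0.113.7",  "alice"),
    (2,  "203.0.113.7",  "bob"),
    (3,  "203.0.113.7",  "carol"),
    (5,  "203.0.113.7",  "dave"),
    (7,  "203.0.113.7",  "erin"),
    (40, "198.51.100.4", "alice"),  # a single user mistyping a password
]

WINDOW_SECONDS = 60
DISTINCT_USER_THRESHOLD = 5

by_ip = defaultdict(list)
for ts, ip, user in failed_logins:
    by_ip[ip].append((ts, user))

for ip, events in by_ip.items():
    window_start = events[0][0]
    users = {user for ts, user in events if ts - window_start <= WINDOW_SECONDS}
    if len(users) >= DISTINCT_USER_THRESHOLD:
        print(f"{ip}: {len(users)} accounts failed within {WINDOW_SECONDS}s -> likely credential stuffing")
```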
But David Movshovitz, cofounder and CTO of RevealSecurity, has a differing opinion. According to him, user and entity behavioral analytics have failed due to the vast dissimilarities between applications. Therefore, models have been developed only for limited application layer scenarios, such as in the financial sector (credit card, anti-money laundering, etc.).
“Rule-based detection solutions such as anomaly detection are notoriously problematic because they generate numerous false positives and false negatives, and they don’t scale across the many applications,” said Movshovitz.
He further explained that the security market adopted statistical analysis to augment rule-based solutions in an attempt to provide more accurate detection for the infrastructure and access layers. However, these failed to deliver the dramatically increased accuracy and reduced false-positive alerts that were promised, due to a fundamentally mistaken assumption that statistical quantities, such as the average daily number of activities, can characterize user behavior.
“This mistaken assumption is built into behavioral analytics and anomaly detection technologies, which characterize a user by an average of activities. But, in reality, people don’t have ‘average behaviors,’ and it is thus futile to try and characterize human behavior with quantities such as ‘average,’ ‘standard deviation,’ or ‘median’ of a single activity,” Movshovitz told VentureBeat.
He also said that detecting these breaches usually consists of manually sifting through tons of log data from multiple sources when there is a suspicion. “This makes application detection and response a massive pain point for enterprises, particularly with their core business applications. Today, CISOs should focus instead on learning users’ multiple typical activity profiles.”
Commenting on this point, Forrester’s Mellen said that validating detection efficacy could be one way to tackle such AI pitfalls and reduce false positives.
“One of the interesting ways ML is used in security tools today, which is not often discussed, is in the validation of detection efficacy. We often associate ML with detecting an attack instead of validating if that detection is accurate,” said Mellen. “Validation of detection efficacy can not only help reduce false positives, but also be used to evaluate analyst performance, which, when used in aggregate, can help security teams understand how certain log sources or processes are working well and supporting analyst experience.”
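As a simple illustration of the bookkeeping that detection-efficacy validation builds on, the sketch below tracks per-rule precision from analyst dispositions of alerts. The rule names and records are fabricated examples; the ML-driven validation Mellen describes would layer on top of exactly this kind of feedback.

```python
# Minimal sketch of the bookkeeping behind detection-efficacy validation:
# per-rule precision computed from analyst dispositions. Rule names and records
# are fabricated examples.
from collections import Counter

dispositions = [
    ("rule:impossible-travel",   "true_positive"),
    ("rule:impossible-travel",   "false_positive"),
    ("rule:impossible-travel",   "false_positive"),
    ("rule:oauth-consent-abuse", "true_positive"),
    ("rule:oauth-consent-abuse", "true_positive"),
]

counts = Counter(dispositions)
for rule in sorted({rule for rule, _ in dispositions}):
    tp = counts[(rule, "true_positive")]
    fp = counts[(rule, "false_positive")]
    total = tp + fp
    print(f"{rule}: precision {tp / total:.0%} over {total} alerts")
```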
What to expect from AI-based security in 2023
Deloitte’s Bowen predicts that AI will drive vastly improved detection efficacy and human resource optimization. However, he also says that organizations that fail to use AI will become soft targets for adversaries leveraging this technology.
“Threats that can’t be detected on traditional stacks today will be detected using these new tools, platforms and architectures. When possible, we will see more AI/ML models being pushed to the edge to prevent, detect and respond autonomously,” he said. “Identity management will be improved with better compliance, resulting in a better protective posture for AI-driven cybersecurity organizations. We’ll see higher levels of negative impact to those organizations that are late using AI as part of their comprehensive stack.”
“The current applications of AI in cybersecurity are focused on what we call ‘narrow AI’ — training the models on a specific set of data to produce predefined results,” IBM’s Muppidi added. “In the future, and even as soon as 2023, we see great potential for using ‘broad AI’ models in cybersecurity — training a large foundation model on a comprehensive dataset to detect new and elusive threats faster.”
“As cybercriminals constantly evolve their tactics, these broad AI applications would unlock more predictive and proactive security use cases, allowing us to stay ahead of attackers vs. adapting to existing techniques.”
Author: Victor Dey
Source: Venturebeat