
AI, automation, and the cybersecurity skills gap

This article is part of a VB special issue. Read the full series here: AI and Security.


The cybersecurity skills shortage is well documented, but the gap seems to be widening. The 2019 Cybersecurity Workforce study produced by nonprofit (ISC)² looked at the cybersecurity workforce in 11 markets. The report found that while 2.8 million people currently work in cybersecurity roles, an additional 4 million are needed — a third more than the previous year — due to a “global surge in hiring demand.”

As companies battle a growing array of external and internal threats, artificial intelligence (AI), machine learning (ML), and automation are playing increasingly large roles in plugging that workforce gap. But to what degree can machines support and enhance cybersecurity teams, and do they — or will they — negate the need for human personnel?

These questions permeate most industries, but the stakes are particularly high in cybersecurity: the cost of cybercrime to companies, governments, and individuals is rising precipitously. Studies indicate that the impact of cyberattacks could hit a heady $6 trillion by 2021. And the costs are not only financial. As companies harness and harvest data from billions of individuals, countless high-profile data breaches have made privacy a top concern. Reputations — and in some cases people’s lives — are on the line.

Against that backdrop, the market for software to protect against cyberattacks is also growing. The global cybersecurity market was reportedly worth $133 billion in 2018, and that could double by 2024. The current value of the AI-focused cybersecurity market, specifically, is pegged at around $9 billion, and it could reach $38 billion over the next six years.

We checked in with key people from across the technology spectrum to see how the cybersecurity industry is addressing the talent shortage and the role AI, ML, and automation can play in these efforts.

The ‘click fraud czar’

“I think the concern around the cybersecurity skills gap and workforce shortfall is a temporary artifact of large companies scrambling to try to recruit more people to perform the same types of ‘commodity’ cybersecurity activities — for example, monitoring security logs and patching vulnerabilities,” said Shuman Ghosemajumder, a former Googler who most recently served as chief technology officer at cybersecurity unicorn Shape Security.

Ghosemajumder compares this to “undifferentiated heavy lifting,” a term coined by Amazon’s Jeff Bezos to describe the time-consuming IT tasks companies carry out that are important but don’t contribute much to the broader mission. Bezos was referring to situations like developers spending 70% of their time working on servers and hosting, something Amazon sought to address with Amazon Web Services (AWS).

Similar patterns could emerge in the cybersecurity realm, according to Ghosemajumder.

“Any time companies are engaged in ‘undifferentiated heavy lifting,’ that points to the need for a more consolidated, services-based approach,” he said. “The industry has been moving in that direction, and that helps significantly with the workforce shortfall — companies won’t need to have such large cybersecurity teams over time, and they won’t be competing for the exact same skills against one another.”

Above: Shuman Ghosemajumder

Ghosemajumder was dubbed the “click fraud czar” during a seven-year stint at Google that ended in 2010. He developed automated techniques and systems to combat automated (and human-assisted) “click fraud,” in which bad actors fraudulently click pay-per-click (PPC) ads to inflate site revenue or drain advertisers’ budgets. Manually reviewing billions of transactions on a daily basis would be impossible, which is why automated tools are so important. It’s not about combating a workforce shortfall per se; it’s about scaling security to a level that would be impossible with humans alone.

Ghosemajumder said the most notable evolution he witnessed with regard to AI and ML was in offline “non-real-time” detection.

“We would zoom out and analyze the traffic of an AdSense site, or thousands of AdSense sites, over a longer time period, and anomalies and patterns would emerge [that] indicated attempts to create click fraud or impression fraud,” he continued. “AI and ML were first hugely beneficial, and then became absolutely essential in finding that activity at scale so that our teams could determine and take appropriate action in a timely fashion. And even taking appropriate action was a fully automated process most of the time.”
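Stripped down, that kind of offline analysis amounts to comparing each publisher’s current traffic against its own history and flagging outliers. The sketch below is purely illustrative: the site names, numbers, and z-score threshold are invented, and this is not Google’s actual pipeline.

```python
# Purely illustrative offline anomaly detection on click logs.
# Site names, numbers, and the z-score threshold are invented.
from statistics import mean, stdev

# Daily click-through rates (clicks / impressions) per publisher site.
daily_ctr = {
    "site-a": [0.011, 0.012, 0.010, 0.011, 0.012],
    "site-b": [0.010, 0.011, 0.009, 0.011, 0.052],  # sudden spike on the last day
}

def is_anomalous(history, today, z_threshold=3.0):
    """Flag a site when today's CTR sits far outside its own historical distribution."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > z_threshold

for site, series in daily_ctr.items():
    history, today = series[:-1], series[-1]
    if is_anomalous(history, today):
        print(f"{site}: CTR spike flagged for review")  # prints for site-b only
```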

In 2012, Ghosemajumder joined Shape Security, which reached a $1 billion valuation late last year and was gearing up for an IPO. Instead, networking giant F5 came along last month and bought the company for $1 billion, with Ghosemajumder now serving as F5’s global head of AI.

Shape Security focuses on helping big businesses (e.g., banks) prevent various types of fraud — such as “imitation attacks,” where bots attempt to access people’s accounts through credential stuffing. The term, coined by Shape Security cofounder Sumit Agarwal, refers to attempts to log into someone’s account using large lists of stolen usernames and passwords.

This is another example of how automation is increasingly being used to combat automation. Many cyberattacks center on automated techniques that prod online systems until they find a way in. For example, an attacker may have an arsenal of stolen credit card details, but it would take too long to test each one manually. Instead, the attacker works out the check once and then scripts a bot to repeat it against the remaining card details until they have discovered which ones are usable.

Just as it’s relatively easy to carry out large-scale cyberattacks through imitation and automation, Shape Security uses automation to detect such attacks. Working across websites, mobile apps, and any API endpoint, Shape Security taps historical data, machine learning, and artificial intelligence to figure out whether a “user” is real, employing signals such as keystrokes, mouse movements, and system configuration details. If the software detects what it believes to be a bot logging into an account, it blocks the attempt.
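Conceptually, that verdict comes from folding many weak signals into a single score. The sketch below is a deliberately simplified, rules-based stand-in: the features, weights, and threshold are hypothetical, and Shape’s production systems rely on machine learning rather than hand-tuned rules.

```python
# Illustrative scoring of a login attempt from behavioral and configuration
# signals. Features, weights, and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    keystroke_interval_stdev_ms: float  # humans show natural timing variance
    mouse_path_points: int              # scripted clients often report none
    reports_headless_browser: bool      # e.g., automation frameworks
    js_challenge_solved: bool           # did the client execute our JavaScript?

def bot_score(a: LoginAttempt) -> float:
    """Return a 0..1 score; higher means more bot-like."""
    score = 0.0
    if a.keystroke_interval_stdev_ms < 5:   # unnaturally uniform typing
        score += 0.35
    if a.mouse_path_points == 0:
        score += 0.25
    if a.reports_headless_browser:
        score += 0.25
    if not a.js_challenge_solved:
        score += 0.15
    return min(score, 1.0)

attempt = LoginAttempt(1.2, 0, True, False)
if bot_score(attempt) >= 0.7:
    print("Block the attempt or require additional verification")
```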

While we’re now firmly in an era of machine versus machine cyberwarfare, the process has been underway for many years.

“Automation was used 20-plus years ago to start to generate vast quantities of email spam, and machine learning was used to identify it and mitigate it,” Ghosemajumder explained. “[Good actors and bad actors] are both automating as much as they can, building up DevOps infrastructure and utilizing AI techniques to try to outsmart the other. It’s an endless cat-and-mouse game, and it’s only going to incorporate more AI approaches on both sides over time.”

To fully understand the state of play in AI-powered security, it’s worth stressing that cybersecurity spans many industries and disciplines. According to Ghosemajumder, fraud and abuse are far more mature in their use of AI and ML than approaches like vulnerability searching.

“One of the reasons for this is that the problems that are being solved in those areas [fraud and abuse] are very different from problems like identifying vulnerabilities,” Ghosemajumder said. “They are problems of scale, as opposed to problems of binary vulnerability. In other words, nobody is trying to build systems that are 100% fraud proof, because fraud or abuse is often manifested by ‘allowed’ or ‘legitimate’ actions occurring with malicious or undesirable intent. You can rarely identify intent with infallible accuracy, but you can do a good job of identifying patterns and anomalies when those actions occur over a large enough number of transactions. So the goal of fraud and abuse detection is to limit fraud and abuse to extremely low levels, as opposed to making a single fraud or abuse transaction impossible.”

Machine learning is particularly useful in such situations — where the “haystack you’re looking for needles in,” as Ghosemajumder puts it, is vast and requires real-time monitoring 24/7.

Curiously, another reason AI and ML evolved more quickly in the fraud and abuse realm may be down to industry culture. Fraud and abuse detection wasn’t always associated with cybersecurity; those spheres once operated separately inside most organizations. But with the rise of credential stuffing and other attacks, cybersecurity teams became increasingly involved.

“Traditionally, fraud and abuse teams have been very practical about using whatever works, and success could be measured in percentages of improvement in fraud and abuse rates,” Ghosemajumder said. “Cybersecurity teams, on the other hand, have often approached problems in a more theoretical way, since the vulnerabilities they were trying to discover and protect against would rarely be exploited in their environment in ways they could observe. As a result, fraud and abuse teams started using AI and ML more than 10 years ago, while cybersecurity teams have only recently started adopting AI- and ML-based solutions in earnest.”

For now, it seems many companies use AI as an extra line of defense to help them spot anomalies and weaknesses, with humans on hand to make the final call. But there are hard limits to how many calls humans are able to make in a given day, which is why the greatest benefit of cybersecurity teams using AI and humans in tandem could simply be to ensure that machines improve over time.

“The optimal point is often to use AI and automation to keep humans making the maximum number of final calls every day — no more, but also no less,” Ghosemajumder noted. “That way you get the maximum benefit from human judgment to help train and improve your AI models.”
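One way to picture that balance is a triage loop that reserves analysts’ limited daily capacity for the alerts a model is least certain about and lets clear-cut cases be handled automatically, with the analysts’ decisions fed back as training labels. The sketch below is a generic illustration of the idea, not any vendor’s implementation; the thresholds and capacity figure are made up.

```python
# Generic human-in-the-loop triage sketch: route the most uncertain alerts to
# analysts up to a daily capacity; auto-handle the rest. Thresholds are invented.
def triage(alerts, daily_human_capacity=200, auto_block=0.95, auto_allow=0.05):
    """alerts: list of (alert_id, model_probability_of_malicious)."""
    human_queue, auto_handled = [], []
    # Review the most uncertain alerts first (probability closest to 0.5).
    for alert_id, p in sorted(alerts, key=lambda a: abs(a[1] - 0.5)):
        if p >= auto_block:
            auto_handled.append((alert_id, "block"))
        elif p <= auto_allow:
            auto_handled.append((alert_id, "allow"))
        elif len(human_queue) < daily_human_capacity:
            human_queue.append(alert_id)   # the analyst's call doubles as a training label
        else:
            # Capacity exhausted: fall back to a conservative default policy.
            auto_handled.append((alert_id, "block" if p >= 0.5 else "allow"))
    return human_queue, auto_handled

alerts = [("a1", 0.99), ("a2", 0.40), ("a3", 0.02), ("a4", 0.70)]
queue, handled = triage(alerts, daily_human_capacity=1)
# queue -> ["a2"] (most uncertain); a4 falls back to "block", a1 blocks, a3 allows
```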

Facebook-sized problems

“Scalability” is a theme that permeates any discussion around the role of AI and ML in giving cybersecurity teams sufficient resources. As one of the world’s biggest technology companies, this is something Facebook knows only too well.

Dan Gurfinkel is a security engineering manager at Facebook, supporting a product security team that is responsible for code and design reviews, scaling security systems to automatically detect vulnerabilities, and addressing security threats in various applications. In Gurfinkel’s experience at Facebook, the cybersecurity workforce shortfall is real — and worsening — but things could improve as educational institutions adapt their offerings.

“The demand for security professionals, and the open security roles, are rising sharply, often faster than the available pool of talent,” Gurfinkel told VentureBeat. “That’s due in part to colleges and universities just starting to offer courses and certification in security. We’ve seen that new graduates are getting more knowledgeable year over year on security best practices and have strong coding skills.”

But is the skills shortage really more pronounced in cybersecurity than in other fields? After all, the tech talent shortage spans everything from software engineering to AI. In Gurfinkel’s estimation, the shortfall in cybersecurity is indeed more noticeable than in other technical fields, like software engineering.

“In general, I’ve found the number of software engineering candidates is often much larger than those who are specialized in security, or have a special expertise within security, such as incident response or computer emergency response [CERT],” he said.

It’s also worth remembering that cybersecurity is a big field requiring a vast range of skill sets and experience.

“For mid-level and management roles, in particular, sometimes the candidate pool can be smaller for those who have more than five years of experience working in security,” Gurfinkel added. “Security is a growing field that’s becoming more popular, so I would expect that to change in the future.”

Facebook is another great example of how AI, ML, and automation are being used not so much to overcome gaps in the workforce but to enable security on a scale that would otherwise be impossible. With billions of users across Facebook, Messenger, Instagram, and WhatsApp, the sheer size and reach of the company’s software makes it virtually impossible for humans alone to keep its applications secure. Thus, AI and automated tools become less about plugging workforce gaps and more about enabling the company to keep on top of bugs and other security issues. This is also evident across Facebook’s broader platform, with the social networking giant using AI to automate myriad processes, from detecting illegal content to assisting with translations.

Facebook also has a record of open-sourcing AI technology it builds in-house, such as Sapienz, a dynamic analysis tool that automates software testing in a runtime environment. In August 2019, Facebook also detailed Zoncolan, a static analysis tool that can scan the company’s 100 million lines of code in less than 30 minutes to catch bugs and prevent security issues from arising in the first place. It effectively helps developers avoid introducing vulnerabilities into Facebook’s codebase and detect any emerging issues, which, according to Facebook, would take months or years to do manually.

“Most of our work as security engineers is used to scale the detection of security vulnerabilities,” Gurfinkel continued. “We spend time writing secure frameworks to prevent software engineers from introducing bugs in our code. We also write static and dynamic analysis tools, such as Zoncolan, to prevent security vulnerabilities earlier in the development phase.”
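As a rough illustration of what a static analysis pass does (walking a program’s syntax tree before it ever runs and flagging risky patterns), consider the toy example below. It uses Python and an invented rule set; Zoncolan itself targets Facebook’s Hack codebase and is vastly more sophisticated.

```python
# Toy static analysis pass: inspect the syntax tree before the code ever runs
# and flag obviously risky calls. Illustrative rule set only; not Zoncolan.
import ast

RISKY_CALLS = {"eval", "exec"}

def scan_source(source: str, filename: str = "<input>"):
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"{filename}:{node.lineno}: call to {node.func.id}()")
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
for finding in scan_source(sample, "handler.py"):
    print(finding)  # handler.py:2: call to eval()
```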

In 2018, Facebook said Zoncolan helped identify and triage well over 1,000 critical security issues that required immediate action. Nearly half of the issues were flagged directly to the code author without requiring a security engineer.

Above: Facebook’s Zoncolan helped find and “triage” more than 1,000 critical security issues in 2018 alone

This not only demonstrates how essential automation is in large codebases, it also illustrates ways it can empower software developers to manage bugs and vulnerabilities themselves, thus lightening security teams’ workloads.

It also serves as a reminder that humans are still integral to the process, and likely will be long into the future, even as their roles evolve.

“When it comes to security, no company can solely rely on automation,” Gurfinkel said. “Manual and human analysis is always required, be it via security reviews, partnering with product teams to help design a more secure product, or collaborating with security researchers who report security issues to us through our bug bounty program.”

According to Gurfinkel, static analysis tools — that is, tools used early in the development process, before the code is executed — are particularly useful for identifying “standard” web security bugs, such as OWASP’s top 10 vulnerabilities, as they can surface straightforward issues that need to be addressed immediately. This frees up human personnel to tackle higher-priority issues.

“While these tools help get things on our radar quickly, we need human analysis to make decisions on how we should address issues and come up with solutions for product design,” Gurfinkel added.

(AI) security-as-a-service

As BlackBerry has transitioned from phonemaker to enterprise software provider, cybersecurity has become a major focus for the Canadian tech titan, largely enabled by AI and automation. Last year, the company shelled out $1.4 billion to buy AI-powered cybersecurity platform Cylance. BlackBerry also recently launched a new cybersecurity research and development (R&D) business unit that will focus on AI and internet of things (IoT) projects.

BlackBerry is currently in the process of integrating Cylance’s offerings into its core products, including its Unified Endpoint Management (UEM) platform that protects enterprise mobile devices, and more recently its QNX platform to safeguard connected cars. With Cylance in tow, BlackBerry will enable carmakers and fleet operators to automatically verify drivers, address security threats, and issue software patches. This integration leans on BlackBerry’s CylancePersona, which can identify drivers in real time by comparing them with a historical driving profile. It looks at things like steering, braking, and acceleration patterns to figure out who is behind the wheel.

This could be used in multiple safety and security scenarios, and BlackBerry envisages the underlying driving pattern data also being used by commercial fleets to detect driver fatigue, enabling remote operators to contact the driver and determine whether they need to pull off the road.


Above: BlackBerry and Cylance bring driver verification to automobiles
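In principle, this kind of driver verification comes down to comparing a trip’s telemetry against a per-driver baseline and raising an alert when the two diverge. The sketch below is purely hypothetical: BlackBerry has not published CylancePersona’s internals, so the features, baseline values, and threshold are invented.

```python
# Hypothetical profile-based driver verification: compare a trip's telemetry
# features to a stored per-driver baseline. All numbers are invented.

# Baseline learned from a driver's past trips: (mean, stdev) per feature.
profile = {
    "braking_g_mean":      (0.22, 0.04),
    "accel_g_mean":        (0.18, 0.03),
    "steering_rate_deg_s": (14.0, 3.5),
}

def identity_score(trip_features: dict) -> float:
    """Average absolute z-score across features; higher = less like the profile."""
    zs = [abs(trip_features[name] - mu) / sigma for name, (mu, sigma) in profile.items()]
    return sum(zs) / len(zs)

current_trip = {"braking_g_mean": 0.41, "accel_g_mean": 0.30, "steering_rate_deg_s": 25.0}
if identity_score(current_trip) > 3.0:
    print("Driver does not match profile: re-authenticate or alert the fleet operator")
```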

Moreover, with autonomous vehicles gearing up for prime time, safety is an issue of paramount importance — and one companies like BlackBerry are eager to capitalize on.

Back in 2016, BlackBerry launched the Autonomous Vehicles Innovation Centre (AVIC) to “advance technology innovation for connected and autonomous vehicles.” The company has since struck some notable partnerships, including with Chinese tech titan Baidu to integrate QNX with Baidu’s autonomous driving platform. Even though BlackBerry CEO John Chen believes autonomous cars won’t be on public roads for at least a decade, the company still has to plan for that future.

Here again, the conversation comes back to cybersecurity and the tools and workforce needed to maintain it. Much as Facebook is scaling its internal security setup, BlackBerry is promising its business customers it can scale cybersecurity, improve safety, and enable services that would not be possible without automation.

“AI and automation are more about scalability, as opposed to plugging specific skills gaps,” BlackBerry CTO Charles Eagan told VentureBeat. “AI is also about adding new value to customers and making things and enabling innovations that were previously not possible. For example, AI is going to be needed to secure an autonomous vehicle, and in this case it isn’t about scalability but rather about unlocking new value.”

Similarly, AI-powered tools promise to free up cybersecurity professionals to focus on other parts of their job.

“If we remove 99% of the cyberthreats automatically, we can spend much more quality time and energy looking to provide security in deeper and more elaborate areas,” Eagan continued. “The previous model of chasing AV (antivirus) patterns would never scale to today’s demands. The efficiencies introduced by quality, preventative AI are needed to simply keep up with the demand and prepare for the future.”

Above: BlackBerry CTO Charles Eagan

AI-related technologies are ultimately better than humans at tackling certain problems, such as analyzing large data sets, spotting patterns, and automating repetitive tasks. But people also have skills that are difficult for machines to match.

“The human is involved in more complex tasks that require experience, context, critical thinking, and judgement,” Eagan said. “The leading-edge new attacks will always require humans to triage and look for areas where machine learning can be applied. AI is very good at quantifying similarities and differences and therefore identifying novelties. Humans, on the other hand, are better at dealing with novelties, where they can combine experience and analytical thinking to respond to a situation that has not been seen before.”

Learning curve

Even with this symbiosis between humans and machines, the cybersecurity workforce shortfall is increasing — largely due to factors such as spreading internet connectivity, escalating security issues, growing privacy concerns, and subsequent demand spikes. And the talent pool, while expanding in absolute terms, simply can’t keep up with demand, which is why more needs to be done from an education and training perspective.

“As the awareness of security increases, the shortage is felt more acutely,” Eagan said. “We as an industry need to move quickly to attack this issue on all fronts — a big part of which is sparking interest in the field at a young age, in the hope that by the time these same young people start looking at the next stage in their education, they gravitate to the higher education institutions out there that offer cybersecurity as a dedicated discipline.”

For all the noise BlackBerry has been making about its investments in AI and security, it is also investing in the human element. It offers consulting services that include cybersecurity training courses, and it recently launched a campaign to draw more women into cybersecurity through a partnership with the Girl Guides of Canada.

Similar programs include the U.S. Cyber Challenge (USCC), operated by Washington, D.C.-based nonprofit Center for Strategic and International Studies (CSIS), which is designed to “significantly reduce the shortage” in the cyber workforce by delivering programs to identify and recruit a new generation of cybersecurity professionals. This includes running competitions and cyber summer camps through partnerships with high schools, colleges, and universities.

Above: USCC cyber camp

Efforts to nurture interest in cybersecurity from a young age are already underway, but there is simultaneously a growing awareness that higher education programs geared toward putting people in technical security positions aren’t where they need to be.

According to a 2018 report from the U.S. Departments of Homeland Security and Commerce, employers are “expressing increasing concern about the relevance of certain cybersecurity-related education programs in meeting the real needs of their organization,” with “educational attainment” serving as a proxy for actual applicable knowledge, skills, and abilities (KSAs). “For certain work roles, a bachelor’s degree in a cybersecurity field may or may not be the best indicator of an applicant’s qualifications,” the report noted. “The study team found many concerns regarding the need to better align education requirements with employers’ cybersecurity needs and how important it is for educational institutions to engage constantly with industry.”

Moreover, the report surfaced concerns that some higher education cybersecurity courses concentrated purely on technical knowledge and skills, with not enough emphasis on “soft” skills, such as strategic thinking, problem solving, communications, team building, and ethics. Notably, the report also found that some of the courses focused too much on theory and too little on practical application.

For companies seeking personnel with practical experience, a better option could be upskilling — ensuring that existing security workers are brought up to date on the latest developments in the security threat landscape. With that in mind, Immersive Labs, which recently raised $40 million from big-name investors including Goldman Sachs, has set out to help companies upskill their existing cybersecurity workers through gamification.

Immersive Labs was founded in 2017 by James Hadley, a former cybersecurity instructor for the U.K.’s Government Communications Headquarters (GCHQ), the country’s intelligence and security unit. The platform is designed to help companies engage their security workforce in practical exercises — which may involve threat hunting or reverse-engineering malware — from a standard web browser. Immersive Labs is all about using real-world examples to keep things relatable and current.


Above: Taking a cybersecurity skills test in Immersive Labs

While much of the conversation around AI seems to fall into the “humans versus machines” debate, that isn’t helpful when we’re talking about threats on a massive scale. This is where Hadley thinks Immersive Labs fills a gap — it’s all about helping people find essential roles alongside the automated tools used by many modern cybersecurity teams.

“AI is indeed playing a bigger role in the security field, as it is in many others, but it’s categorically not a binary choice between human and machine,” Hadley told VentureBeat in an interview last year. “AI can lift, push, pull, and calculate, but it takes people to invent, contextualize, and make decisions based on morals. Businesses have the greatest success when professionals and technologies operate cohesively. AI can enhance security, just as [AI] can be weaponized, but we must never lose sight of the need to upskill ourselves.”

Other companies have invested in upskilling workers with a proficiency in various technical areas — Cisco, for example, launched a $10 million scholarship to help people retrain for specific security disciplines. Shape Security’s Ghosemajumder picked up on this, noting that some companies are looking to retrain technical minds for a new field of expertise.

“Many companies are not trying to hire cybersecurity talent at all, but instead find interested developers, often within the company, and train them to be cybersecurity professionals — if they are interested, which many are these days,” Ghosemajumder explained.

There is clearly a desire to get more people trained in cybersecurity, but one industry veteran thinks other factors limit the available talent pool before the training process even begins. Winn Schwartau is founder of the Security Awareness Company and author of several books — most recently Analogue Network Security, in which he addresses internet security with a “mathematically based” approach to provable security.

According to Schwartau, there is a prevailing misconception about who makes a good cybersecurity professional. Referring to his own experiences applying for positions with big tech companies back in the day, Schwartau said he was turned down for trivial reasons — once for being color-blind, and another time for not wanting to wear a suit. Things might not be quite the same as they were in the 1970s, but Schwartau attributes at least some of today’s cybersecurity workforce problem to bias about who should be working in the field.

“In 2012, when then-Secretary for Homeland Security Janet Napolitano said ‘We can’t find good cybersecurity people,’ I said, that’s crap — that’s just not true,” Schwartau explained. “What you mean is you can’t find lily-white, perfect people who have never done anything wrong, who meet your myopic standards of ‘normal,’ and who don’t smoke weed. No wonder you can’t find talent. But the worst part is, we don’t have great training grounds for the numbers of people who ‘want in’ to security. Training is expensive, and we are training on the wrong topics.”

Will the shortfall get worse? “Much worse, especially as anthro-cyber-kinetic (human, computer, physical) systems are proliferating,” Schwartau continued. “Without a strong engineering background, the [software folks] don’t ‘get’ the hardware, and the [hardware folks] don’t ‘get’ the AI, and no one understands dynamic feedback systems. It’s going to get a whole lot worse.”

Above: Winn Schwartau

Image Credit: Winn Schwartau

Schwartau isn’t alone in his belief that the cybersecurity workforce gap is something of an artificial construct. Fredrick Lee has held senior security positions at several high-profile tech companies over the past decade, including Twilio, NetSuite, Square, and Gusto — and he also thinks the “skills shortage” is more of a “creativity problem” in hiring.

“To close the existing talent gap and attract more candidates to the field, we need to do more to uncover potential applicants from varied backgrounds and skill sets, instead of searching for nonexistent ‘unicorn’ candidates — people with slews of certifications, long tenures in the industry, and specialized skills in not one, but several, tech stacks and disciplines,” he said.

What Lee advocates is dropping what he calls the “secret handshake society mindset” that promotes a lack of diversity in the workforce by deterring potential new entrants.

Automation for the people

Schwartau is also a vocal critic of AI on numerous grounds, one being the lack of explainability. Algorithms may give different results on different occasions to resolve the same problem — without explaining why. “We need to have a mechanism to hold them accountable for their decisions, which also means we need to know how they make decisions,” he said.

While many companies deploy AI as an extra line of defense to help them spot threats and weaknesses, Schwartau fears that removing the checks and balances human beings provide could lead to serious problems down the line.

“Humans are lazy, and we like automation,” he said. “I worry about false positives in an automated response system that can falsely indict a person or another system. I worry about the ‘We have AI, let the AI handle it’ mindset from vendors and C-suiters who are far out of their element. I worry that we will have increasing faith in AI over time. I worry we will migrate to these systems and not design a graceful degradation fallback capability to where we are now.”

Beyond issues of blind faith, companies could also be swept up by the hype and hoodwinked into buying inferior AI products that don’t do what they claim to.

“My biggest fear about AI as a cybersecurity defense in the short term is that many companies will waste time by trying half-baked solutions using AI merely as a marketing buzzword, and when the products don’t deliver results, the companies will conclude that AI/ML itself as an approach doesn’t work for the problem, when in fact they just used a poor product,” Schwartau said. “Companies should focus on efficacy first rather than looking for products that have certain buzzwords. After all, there are rules-based systems, in cybersecurity and other domains, that can outperform badly constructed AI systems.”

It’s worth looking at the role that rules-based automated tools — where AI isn’t part of the picture — play in plugging the cybersecurity skills gap. After all, the end goal is ultimately the same. Not enough humans to do the job? Here’s some technology that can fill the void.

Dublin-based Tines is one company that’s setting out to help enterprise security teams automate repetitive workflows.

For context, most big companies employ a team of security professionals to detect and respond to cyberattacks — typically aided by automated tools such as firewalls and antivirus software. However, these tools create a lot of false alarms and noise, so people need to be standing by to dig in more deeply. With Tines, security personnel can prebuild what the company calls “automation stories.” These can be configured to carry out a number of steps after an alert is triggered — doing things like threat intelligence searches or scanning for sensitive data in GitHub source code, such as passwords and API keys. The repository owner or on-call engineer can then be alerted automatically (e.g., through email or Slack).
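Tines itself is configured through a no-code interface rather than scripts, but the general shape of such a workflow can be sketched in a few lines of Python. In the example below, the repository path, secret patterns, and webhook URL are placeholders, not part of Tines’ product.

```python
# Minimal sketch of the kind of workflow an "automation story" chains together:
# scan source files for secret-like strings and notify the owner. Patterns,
# paths, and the webhook URL are placeholders.
import json, pathlib, re, urllib.request

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
}
WEBHOOK_URL = "https://hooks.example.com/security-alerts"  # placeholder endpoint

def scan_repo(root="."):
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {label}")
    return findings

def notify(findings):
    body = json.dumps({"text": "\n".join(findings)}).encode()
    req = urllib.request.Request(WEBHOOK_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # alert the on-call engineer / repo owner

findings = scan_repo()
if findings:
    notify(findings)
```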

In short, Tines saves a lot of repetitive manual labor, leaving security personnel to work on more important tasks — or go home at a reasonable hour. This is a key point, given that burnout can exacerbate the talent shortfall, either through illness or staff jumping ship.

Tines CEO and cofounder Eoin Hinchy told VentureBeat that “79% of security teams are overwhelmed by the volume of alerts they receive. [And] security teams are spending more and more time performing repetitive manual tasks.”


Above: Tines cofounders Eoin Hinchy (left) and Thomas Kinsella (right)

In terms of real-world efficacy, Tines claims that one of its Fortune 500 customers saves the equivalent of 70 security analyst hours a week through a single automation story that automates the collection and enrichment of antivirus alerts.

“This kind of time-saving is not unusual for Tines customers and is material when you consider that most Tines customers will have about a dozen automation stories providing similar time-savings,” Hinchy continued.

Tines also helps bolster companies’ cybersecurity capabilities by empowering non-coding members of the team. Anyone — including security analysts — can create their own automations (similar to IFTTT) through a drag-and-drop interface without relying on additional engineering resources. “We believe that users on the front line, with no development experience, should be able to automate any workflow,” Hinchy said.


Above: Tines is a code-free “drag-and-drop” platform for automating repetitive tasks

Hinchy also touched on a key issue that could make manually configured automation more appealing than AI in some cases: explainability. As Schwartau noted, a human worker can explain why they carried out a particular task the way they did, or arrived at a certain conclusion, but AI algorithms can’t. Rules-based automated tools, on the other hand, just do what their operator tells them to — there is no “black box” here.

“Our customers really care about transparency when implementing automation. They want to know exactly why Tines took a particular decision in order to develop trust in the platform,” Hinchy added. “The black box nature of AI and ML is not conducive to this.”
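The appeal of that transparency is easy to demonstrate: a rules-based system can record exactly which rule fired and on what value, so the “why” behind every action survives for later review. The example below is a generic sketch of that audit-trail idea, not Tines’ implementation.

```python
# Generic sketch of a rules-based decision with an audit trail: every action
# traces back to a named rule and the value that triggered it.
import datetime

RULES = [
    ("failed_logins_per_minute", lambda v: v > 50, "lock account"),
    ("source_ip_on_blocklist",   lambda v: v is True, "block request"),
]

def decide(event: dict):
    audit_log = []
    for field, condition, action in RULES:
        if field in event and condition(event[field]):
            audit_log.append({
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "rule": field,
                "value": event[field],
                "action": action,
            })
    return audit_log  # the "why" is recorded alongside the "what"

print(decide({"failed_logins_per_minute": 120, "source_ip_on_blocklist": False}))
```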

Other platforms that help alleviate cybersecurity teams’ workload include London-based Snyk, which last month raised $150 million at a $1 billion valuation for an AI platform that helps developers find and fix vulnerabilities in their open source code.

“With Snyk, security teams offer guidance, policies, and expertise, but the vast majority of work is done by the development teams themselves,” Snyk cofounder and president Guy Podjarny told VentureBeat. “This is a core part of how we see dev-first security: security teams modeling themselves after DevOps, becoming a center of excellence building tools and practices to help developers secure applications as they build it, at their pace. We believe this is the only way to truly scale security, address the security talent shortage, and improve the security state of your applications.”
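Stripped to its essentials, that kind of developer-side scanning means checking declared dependencies against a database of known-vulnerable versions. The toy example below uses a hard-coded, made-up advisory list; Snyk’s actual service resolves full dependency trees against a continuously updated vulnerability database.

```python
# Toy dependency audit: compare pinned package versions against a hypothetical,
# hard-coded advisory list. Real scanners resolve full dependency trees.
ADVISORIES = {  # package -> versions with known advisories (made-up entries)
    "examplelib": {"1.0.0", "1.0.1"},
}

def parse_requirements(text):
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            name, version = line.split("==")
            deps[name.lower()] = version
    return deps

def audit(requirements_text):
    deps = parse_requirements(requirements_text)
    return [f"{name}=={ver} has a known advisory"
            for name, ver in deps.items()
            if ver in ADVISORIES.get(name, set())]

print(audit("examplelib==1.0.1\nsafe-package==2.3.0\n"))  # flags examplelib only
```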

The state of play

The importance of AI, ML, and automation in cybersecurity is clear — but it’s often less about plugging skills gaps than it is about enabling cybersecurity teams to provide real-time protection to billions of end users. With bad actors scaling their attacks through automation, companies need to adopt a similar approach.

But humans are a vital part of the cybersecurity process, and AI and other automated tools enable them to do their jobs better while focusing on more complex or pressing tasks that require experience, critical thinking, moral considerations, and judgment calls. Moreover, threats are constantly growing and evolving, which will require more people to manage and build the AI systems in the first place.

“The sheer number of cybersecurity threats out there far exceeds the current solution space,” BlackBerry’s Eagan said. “We will always need automation and more cybersecurity professionals creating that automation. Security is a cat and mouse game, and currently more money is spent in threat development than in protection and defense.”

Companies also need to be wary of “AI” as a marketing buzzword. Rather than choosing a poor product that either doesn’t do what it promises or is a bad fit for the job, they can turn to simple automated systems.

“For me, machines and automation will act as a mechanism to enhance the efficiency and effectiveness of teams,” Tines’ Hinchy said. “Machines and humans will work together, with machines doing more of the repetitive, routine tasks, freeing up valuable human resources to innovate and be creative.”

Statistical and anecdotal evidence tends to converge around the idea of the cybersecurity workforce gap, but there is general optimism that the situation will correct itself in time — through a continued shift toward a more “consolidated, services-based” cybersecurity approach, as Ghosemajumder put it, as well as by improving education for young people and upskilling and retraining existing workers.

“The workforce is getting larger in absolute terms,” Ghosemajumder said. “There is greater interest in cybersecurity and more people going into it, in the workforce, as well as in schools, than ever before. When I studied computer science, there were no mandatory security courses. When I studied management, there were no cybersecurity courses at all. Now, cybersecurity is one of the most popular subjects in computer science programs, and is taught in most leading business schools and law schools.”

Read More: VentureBeat's Special Issue on AI and Security


Author: Paul Sawers.
Source: Venturebeat
