
Orca Security deploys ChatGPT to secure the cloud with AI

Securing the cloud is no easy feat. However, with AI and automation tools like ChatGPT, security teams can streamline day-to-day processes and respond to cyber incidents more efficiently.

One provider exemplifying this approach is Israel-based cloud cybersecurity vendor Orca Security, which is currently valued at $1.8 billion. Today Orca announced it would be the first cloud security company to implement a ChatGPT extension. The integration will process security alerts and provide users with step-by-step remediation instructions. 

More broadly, this integration illustrates how ChatGPT can help organizations simplify their security operations workflows, so they can process alerts and events much faster. 

Using ChatGPT to streamline AI-driven remediation  

For years, security teams have struggled with managing alerts. In fact, research shows that 70% of security professionals report their home lives are being emotionally impacted by their work managing IT threat alerts. 


At the same time, 55% admit they aren’t confident in their ability to prioritize and respond to alerts. 

Part of the reason for this lack of confidence is that an analyst must investigate whether each alert is a false positive or a legitimate threat and, if it is malicious, respond as quickly as possible.

This is particularly challenging in complex cloud and hybrid environments with many disparate solutions. It's a time-consuming process with little margin for error. That's why Orca Security is looking to use ChatGPT (which is based on GPT-3) to help users automate the alert management process.

“We leveraged GPT-3 to enhance our platform’s ability to generate contextual actionable remediation steps for Orca security alerts. This integration greatly simplifies and speeds up our customers’ mean time to resolution (MTTR), increasing their ability to deliver fast remediations and continuously keep their cloud environments secure,” said Itamar Golan, head of data science at Orca Security.

Essentially, Orca Security uses a custom pipeline to forward security alerts to ChatGPT, which processes the information, noting the affected assets, attack vectors and potential impact of the breach, and delivers a detailed explanation of how to remediate the issue directly into project-tracking tools such as Jira.
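To make that workflow concrete, here is a minimal sketch of what such an alert-to-ticket pipeline could look like, assuming OpenAI's chat completions API and Jira's issue REST endpoint. The prompt, model choice, project key, site URL and alert fields are illustrative assumptions, not details of Orca's actual integration.

# Hypothetical sketch of an alert-to-remediation pipeline like the one
# described above. Names, prompts and endpoints are illustrative assumptions,
# not Orca Security's actual implementation.
import os

import requests
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def remediation_steps(alert: dict) -> str:
    """Ask the model for step-by-step remediation guidance for one alert."""
    prompt = (
        "You are a cloud security assistant. Given the alert below, list the "
        "affected assets, the likely attack vector, the potential impact, and "
        "numbered remediation steps.\n\n"
        f"Alert: {alert}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any ChatGPT-family model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content


def open_jira_ticket(alert: dict, steps: str) -> None:
    """File the generated remediation guidance as a Jira issue via the REST API."""
    requests.post(
        "https://example.atlassian.net/rest/api/2/issue",  # hypothetical Jira site
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"]),
        json={
            "fields": {
                "project": {"key": "SEC"},  # hypothetical project key
                "summary": f"Remediate: {alert.get('title', 'security alert')}",
                "description": steps,
                "issuetype": {"name": "Task"},
            }
        },
        timeout=30,
    ).raise_for_status()


if __name__ == "__main__":
    alert = {
        "title": "Publicly accessible S3 bucket",
        "asset": "arn:aws:s3:::customer-data",
        "severity": "high",
    }
    open_jira_ticket(alert, remediation_steps(alert))

Keeping the temperature at zero makes the generated remediation steps more deterministic, which matters when the output is filed straight into a ticketing queue rather than reviewed as free-form text.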

Users also have the option to remediate through the command line, infrastructure as code (Terraform and Pulumi) or the Cloud Console. 
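For illustration only, a code-based fix for the hypothetical public-bucket alert above might look like the following boto3 snippet. The article does not specify what Orca's generated command-line or infrastructure-as-code steps contain, so this is simply an assumption about the kind of remediation such an alert typically calls for.

# Hypothetical remediation for a publicly accessible S3 bucket, using boto3
# rather than the Terraform/Pulumi or Cloud Console paths mentioned above.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="customer-data",  # illustrative bucket name from the alert above
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)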

It’s an approach that’s designed to help security teams make better use of their existing resources. “Especially considering most security teams are constrained by limited resources, this can greatly alleviate the daily workloads of security practitioners and devops teams,” Golan said.

Is ChatGPT a net positive for cybersecurity? 

While Orca Security’s use of ChatGPT highlights the positive role that AI can play in enhancing enterprise security, other organizations are less optimistic about the effect that such solutions will have on the threat landscape. 

For instance, Deep Instinct released threat intelligence research this week examining the risks of ChatGPT and concluded that “AI is better at creating malware than providing ways to detect it.” In other words, it’s easier for threat actors to generate malicious code than for security teams to detect it. 

“Essentially, attacking is always easier than defending (the best defense is attacking), especially in this case, since ChatGPT allows you to bring back life to old forgotten code languages, alter or debug the attack flow in no time and generate the whole process of the same attack in different variations (time is a key factor),” said Alex Kozodoy, cyber research manager at Deep Instinct. 

“On the other hand, it is very difficult to defend when you don’t know what to expect, which causes defenders to be able to be prepared for a limited set of attacks and for certain tools that can help them to investigate what has happened — usually after they’ve already been breached,” Kozodoy said. 

The good news is that as more organizations begin to experiment with ChatGPT to secure on-premises and cloud infrastructure, defensive AI processes will become more advanced and stand a better chance of keeping up with an ever-increasing number of AI-driven threats.



Author: Tim Keary
Source: VentureBeat
