
How Arnica’s CEO foresees generative AI’s impact on DevOps security

VentureBeat recently sat down (virtually) with Nir Valtman, CEO and co-founder of Arnica. Valtman’s extensive cybersecurity experience includes leading product and data security across Finastra, establishing and hardening security practices and posture management as CISO at Kabbage (acquired by Amex), and heading application security across NCR. He also serves on the advisory board of Salt Security.

Valtman’s reputation as one of the industry’s most prolific innovators is reflected in his many contributions to open-source projects and his seven patents in software security. He is also a frequent speaker at the industry’s leading cybersecurity events, including Black Hat, DEF CON, BSides and RSA.

Under Valtman’s leadership, Arnica is helping define the next generation of developer-focused application security tools, techniques and technologies.

The following is an excerpt from VentureBeat’s interview with Nir Valtman:

VentureBeat: How do you envision the role of generative AI in cybersecurity evolving over the next 3-5 years?

Nir Valtman: I think we are starting to get a better understanding of where gen AI fits and where it ends up actually being a longer route to take. Gen AI can bring tremendous value in application security by arming developers with the tools to be secure by default – or, at minimum, helping less experienced developers achieve this goal.

VB: What emerging technologies or methodologies are you monitoring that may impact how generative AI is used for security?

Valtman: One of the emerging needs that I see in the market is providing developers with actionable remediation paths for security vulnerabilities. It starts with prioritizing which assets within an organization are important, then with finding the right remediation owners, and finally with actually mitigating the risk for them. Gen AI is going to be a valuable tool for risk remediation, but prioritizing what is important to a team or company, and identifying who owns the necessary action, may need to be more deterministic.
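
To make the deterministic half of that workflow concrete, here is a minimal sketch; the asset scores and findings structure are illustrative assumptions, not Arnica’s pipeline. Prioritization and ownership stay rule-based, and gen AI would only enter afterward, at the remediation step:

```python
# Minimal sketch (not Arnica's pipeline): rank findings by a deterministic
# asset-priority table, then resolve a remediation owner from git history.
# ASSET_PRIORITY and the findings structure are illustrative assumptions.
import subprocess

ASSET_PRIORITY = {"payments-service": 3, "internal-wiki": 1}  # assumed scores

def find_owner(repo_path: str, file_path: str) -> str:
    """Deterministically pick the file's most recent committer as the owner."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "-1", "--format=%ae", "--", file_path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def triage(findings: list[dict]) -> list[dict]:
    """Sort findings by how important the affected asset is, then attach owners."""
    ranked = sorted(findings, key=lambda f: ASSET_PRIORITY.get(f["asset"], 0),
                    reverse=True)
    for finding in ranked:
        finding["owner"] = find_owner(finding["repo"], finding["file"])
    return ranked  # gen AI would only enter afterward, to draft the actual fix
```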

VB: Where should organizations prioritize investments to maximize the potential of generative AI in cybersecurity?

Valtman: Organizations should prioritize investing in solving repetitive and complex problems, such as mitigating specific categories of source code vulnerabilities. As gen AI proves itself with additional use cases, this prioritization will change over time.
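
As a hedged illustration of one such repetitive category, the sketch below asks a model to rewrite a SQL-injection-prone call into a parameterized query. Here model_complete() is a placeholder for whichever gen AI client an organization actually uses, not a specific vendor API:

```python
# Hedged sketch: delegate a repetitive fix category (SQL injection) to a model.
# model_complete() is a stand-in; wire it to a real gen AI client before use.
def model_complete(prompt: str) -> str:
    raise NotImplementedError("connect this to your gen AI provider")

def suggest_sqli_fix(snippet: str) -> str:
    """Ask the model for a parameterized-query rewrite of one vulnerable call."""
    prompt = (
        "Rewrite this database call to use a parameterized query instead of "
        "string concatenation. Return only the corrected code.\n\n" + snippet
    )
    return model_complete(prompt)  # the output still needs human review

# Example of the vulnerability class being targeted:
# cursor.execute("SELECT * FROM users WHERE id = " + user_id)
```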

VB: How can generative AI shift the security approach from reactive to proactive?

Valtman: For gen AI to be truly predictive, it needs to train on highly relevant data sets. The more predictive and accurate a model is, the more confidence technology leaders will have in the AI-driven decisions being made. The trust loop will take some time to build momentum, especially in a high-stakes arena like security. But once the models become more battle-tested, gen AI-based security tools will be able to proactively mitigate risks with little or no human involvement. In the meantime, proactive security measures can be taken with a more thorough review by the right humans at the right time, as noted in the prioritization and ownership discussion above.

VB: What changes need to be made at the organizational level to incorporate generative AI for security?

Valtman: Changes need to be made on the strategic and tactical levels. From a strategic standpoint, decision-makers need to be educated about the benefits and risks of utilizing this technology, as well as decide how the use of AI aligns with the security goals of the company. On the tactical front, budget and resources need to be allocated to run the AI program, such as integrating with asset, application and data discovery tools, as well as developing a playbook for driving corrective actions from findings or security incidents.

VB: What security challenges could generative AI create if implemented across an organization? How would you combat these challenges?

Valtman: Data privacy and leakage present the highest risk. These can be mitigated by hosting models internally, anonymizing data before sending it to external services, and running regular audits to ensure compliance with internal and regulatory requirements.
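
One of those mitigations, anonymizing data before it reaches an external service, can be sketched in a few lines. The patterns below cover only emails and card-like numbers and are purely illustrative; production redaction would use a dedicated PII engine:

```python
# Illustrative sketch: scrub obvious PII before a prompt leaves the company.
# These two regexes are a toy subset of what real redaction requires.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD>"),
]

def anonymize(text: str) -> str:
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

safe_prompt = anonymize("Refund jane.doe@example.com on card 4111 1111 1111 1111")
# safe_prompt can now be sent to an external model API without raw PII
```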

An additional high-risk area is the impact on the security or integrity of the models, such as model poisoning or exploitation of the models to gain access to more data than needed. The mitigation isn’t trivial, as it requires vulnerability assessment and sophisticated penetration testing to identify these risks. Even if these risks are identified for the specific implementation the company uses, finding solutions that won’t impact functionality may not be trivial either.

VB: How could generative AI automate threat detection, security patches, and other processes?

Valtman: By observing historical behavior within networks, logs, email content, code, transactions, and other data sources, generative AI can identify threats, such as malware detonation, insider threats, account takeovers, phishing, payment fraud, and more. This is a natural fit.
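
A toy version of that “observe historical behavior” idea, assuming per-user login counts as the only feature, might look like the following; real detection stacks use far richer signals and models:

```python
# Illustrative baseline: flag a user's activity when it deviates sharply from
# their own history. A stand-in for the much richer models Valtman describes.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it sits more than `threshold` std devs off baseline."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(today - mu) / sigma > threshold

logins_per_day = [4, 5, 3, 6, 4, 5, 4]   # one user's recent history (assumed)
print(is_anomalous(logins_per_day, 42))  # True: a possible account-takeover signal
```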

Other use cases that may take longer to evolve include threat modeling at the design phase of software development, automated patch deployment with minimal risk (which requires good enough test coverage for the developed software), and potentially self-improving automated incident response playbook execution.
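
The test-coverage caveat on automated patch deployment can be expressed as a simple gate. The sketch below assumes a Python project using pytest with the pytest-cov plugin and an arbitrary 80% bar; both are assumptions, not a prescribed setup:

```python
# Hedged sketch of a coverage gate for automated patching: only let an
# automated patch proceed when tests pass and coverage clears a bar.
# Assumes pytest + pytest-cov; the 80% threshold is arbitrary.
import json
import subprocess

def coverage_allows_auto_patch(min_coverage: float = 80.0) -> bool:
    tests = subprocess.run(["pytest", "--cov", "--cov-report=json"])
    if tests.returncode != 0:
        return False  # failing tests: never auto-deploy the patch
    with open("coverage.json") as f:
        total = json.load(f)["totals"]["percent_covered"]
    return total >= min_coverage

if coverage_allows_auto_patch():
    print("auto-applying patch")  # e.g., merge the dependency-bump PR
else:
    print("routing patch to a human reviewer")
```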

VB: What plans or strategies should organizations implement regarding generative AI and data protection?

Valtman: Policies need to be established around data collection, storage, usage, and sharing, as well as ensuring that roles and responsibilities are clearly defined. These policies need to be aligned with the overall cybersecurity strategy, which includes supporting functions for data protection, such as incident response and breach notification plans, vendor and third-party risk management, security awareness, and more.

Author: Louis Columbus
Source: VentureBeat
Reviewed By: Editorial Team
