
AI and cybersecurity: Friends, foes, collaborators

A large enterprise Gary Harbison once worked for was targeted by a hacker looking to execute a fraudulent payment.

The attacker was persistent, first sending an email, then following up with what they thought would be a sly voice call impersonating the CEO. But they hadn’t done their research: The company’s chief executive was Scottish, and the employee who answered the call immediately picked up on the scam.

These days, though, generative AI is helping attackers create more believable, sophisticated deepfakes (and furthering their mission in many other ways, as well). 

“As you get into generative AI and the ability to replicate voice, it’s going to be very difficult for an employee to make that kind of decision on the fly,” Harbison, now CISO at Johnson & Johnson, said in a fireside chat at this week’s VentureBeat Transform 2023 event.


AI is a fascinating technology unfolding before the world, but for cybersecurity it presents what Harbison described as the ultimate double-edged sword.

On the one hand, it provides huge opportunities to change how industries work and innovate.

At the same time, there are risks to be cognizant of.

A third layer is that AI can be used to augment cybersecurity efforts, such as improving code reviews or automating more logic-driven decisions in advanced threat detection.
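To make that third layer concrete, here is a minimal sketch of how a defense team might combine a hand-written, logic-driven rule with a model-produced risk score. It is purely illustrative, not anything Harbison described: the field names, the 0.8 threshold and the upstream classifier feeding model_risk_score are all assumptions.

```python
# Minimal sketch: logic-driven triage augmented by a model score.
# Field names, thresholds and the upstream scorer are hypothetical.
from dataclasses import dataclass

@dataclass
class Email:
    sender_domain: str
    requests_payment_change: bool
    model_risk_score: float  # 0.0-1.0, from some upstream classifier

TRUSTED_DOMAINS = {"example-corp.com"}

def triage(email: Email) -> str:
    # Hard, logic-driven rules run first: payment-change requests from
    # unknown domains always escalate, regardless of the model's opinion.
    if email.requests_payment_change and email.sender_domain not in TRUSTED_DOMAINS:
        return "escalate"
    # The model score then handles the more ambiguous remainder.
    if email.model_risk_score >= 0.8:
        return "quarantine"
    return "deliver"

print(triage(Email("unknown.biz", True, 0.3)))        # escalate
print(triage(Email("example-corp.com", False, 0.9)))  # quarantine
```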

“Change is inevitable, and it’s always going to evolve quickly,” said Harbison. “We need to be very focused on enabling our business to take advantage of technology that lets us move faster and bring capabilities to market quicker, but do it in a safe and responsible way and make sure security is built in.”

Increasingly sophisticated attacks

AI can and will increase potential attacks, because it allows threat actors to automate and craft campaigns, such as phishing emails, more efficiently. Traditionally, employees have been educated to look for grammatical errors or wonky wording.

“Well, with generative AI, these are going to be very well-written emails and they’re going to be harder to distinguish,” said Harbison.
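One practical consequence is that defenses have to lean on signals other than prose quality. A minimal sketch, assuming access to standard message headers: Authentication-Results is a real SMTP convention (RFC 8601), but the policy below is a hypothetical illustration, not a production control.

```python
# Minimal sketch: flag a message on authentication signals, not wording.
# A well-written body tells us nothing; failed SPF/DKIM still does.
from email import message_from_string

RAW = """\
From: ceo@example-corp.com
Authentication-Results: mx.example-corp.com; spf=fail; dkim=none
Subject: Urgent wire transfer

Please process the attached payment today.
"""

msg = message_from_string(RAW)
auth = msg.get("Authentication-Results", "")

suspicious = "spf=pass" not in auth or "dkim=pass" not in auth
print("flag for review" if suspicious else "deliver")
```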

Overall, cyber-defense teams will have to look at AI from an intelligence standpoint to help determine how the technology is improving attackers’ tactics and tools. 

Similarly, from an IoT perspective, security teams must understand devices’ purposes and capabilities. Traditional security controls can’t be deployed on some new devices because they may have limited compute power.

Society-wide, even, there will be many issues to work through with AI and cybersecurity, said Harbison. 

“Things like, how do you introduce evidence now into a court of law if we’re not able to tell whether the video or audio was replicated or manipulated in some way?”


What about the security of AI itself?

First and foremost, it is critical to educate employees on the risks of AI and to implement guardrails that ensure data protection, said Harbison. Models must be trained on the right datasets, and those datasets must be protected from manipulation. Enterprises also don’t want models that produce hallucinations capable of disrupting business decisions.

Johnson & Johnson has a program to raise its employees’ “digital acumen,” he explained. This helps them understand the benefits of AI and the potential enhancements it can drive, as well as security considerations and important governance procedures.

The company is also working to build out private environments where it can test and bring forward discoveries in a safe, responsible way, without uploading sensitive data to public AI tools.
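A common pattern behind guardrails of that kind is to screen or redact sensitive values before any prompt leaves the private environment. The sketch below is a minimal, regex-based illustration; the patterns and the example prompt are assumptions, not a description of Johnson & Johnson’s tooling.

```python
# Minimal sketch: redact obvious sensitive values before a prompt
# can leave a private environment. Patterns are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this note for patient jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> "Summarize this note for patient [EMAIL REDACTED], SSN [SSN REDACTED]."
```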

“And we really want to empower developers to have the right tools to build security upfront and along the way,” said Harbison.

Safeguarding, not shying away

AI has been around for a while, Harbison noted, but the explosion of the technology in just the last few months has made it an executive and boardroom topic for most enterprises. Yes, there is some resistance, but “change is inevitable, and it’s always going to evolve quickly.” 

It really comes down to a mindset shift and an ability to step back from some of the fear and assess the technology from multiple angles. 

“Our goal is not to be afraid of these technologies and shy away from them and tell our business not to use them,” he said. 

Rather, CISOs should learn about AI tools and ensure that “we’re safeguarding along the way and we’re considering any possible risks as we’re deploying them,” he said. 



Author: Taryn Plumb
Source: VentureBeat
