How to protect AI from cyberattacks – start with the data

Artificial intelligence is certainly a game-changer when it comes to security. Not only does it greatly expand the capability to manage and monitor systems and data, but it also adds a level of dynamism to both protection and recovery that significantly increases the difficulty, and lessens the rewards, of mounting a successful attack.

But AI is still a digital technology, which means it can be compromised as well, particularly when confronted by an intelligent attack. As the world becomes more dependent on systems intelligence and autonomy for everything from business processes to transportation to healthcare, the consequences of a security breach rise even as the likelihood declines.

For this reason, the enterprise should take a hard look at its AI deployments to date, as well as its ongoing strategies, to see where vulnerabilities lie and what can be done to eliminate them.

According to Robotics Biz, the most prevalent type of attack on AI systems to date has been the infiltration of high-volume algorithms in order to manipulate their predictive output. In most cases, this involves feeding false or malicious data into the system to give it a skewed, if not entirely incorrect, view of reality.

Any AI that is connected to the internet can be compromised in this way, often over a period of time, so that the effects are gradual and the damage is long-lasting. The best counter is to streamline both the AI algorithm and the process by which data is ingested, and to maintain strict control over data conditioning to spot faulty or malicious data before it enters the chain.
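
What that conditioning step might look like in practice: the sketch below (Python, with hypothetical function names and an illustrative threshold) screens incoming records against statistics computed from a trusted reference set and quarantines anything that deviates sharply before it can reach training.

```python
# Illustrative data-conditioning gate: quarantine records that deviate
# sharply from the statistics of a trusted reference dataset before they
# reach the model. Function names and the threshold are assumptions.
import numpy as np

def fit_reference_stats(trusted: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Record per-feature mean and std from data known to be clean."""
    return trusted.mean(axis=0), trusted.std(axis=0) + 1e-8

def validate_batch(batch: np.ndarray, mean: np.ndarray, std: np.ndarray,
                   z_threshold: float = 6.0) -> tuple[np.ndarray, np.ndarray]:
    """Split an incoming batch into accepted and quarantined rows.

    A row is quarantined if any feature lies more than z_threshold
    standard deviations from the trusted mean -- a crude but cheap
    screen for faulty or maliciously skewed inputs.
    """
    z = np.abs((batch - mean) / std)
    suspicious = (z > z_threshold).any(axis=1)
    return batch[~suspicious], batch[suspicious]

# Usage: accepted rows flow on to training; quarantined rows go to review.
trusted = np.random.default_rng(0).normal(size=(1000, 8))
mean, std = fit_reference_stats(trusted)
incoming = np.vstack([trusted[:50], trusted[:5] + 50.0])  # last 5 rows poisoned
accepted, quarantined = validate_batch(incoming, mean, std)
print(f"accepted={len(accepted)} quarantined={len(quarantined)}")
```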

Getting to AI through its data sources

The need for vast amounts of data is actually one of AI’s biggest weaknesses, because it creates a situation in which security can be breached without attacking the AI itself. A recent series of papers by CSET, the Center for Security and Emerging Technology, highlighted the growing number of ways white-hat hackers have shown that AI can be compromised by targeting its data sources.

Such attacks have been shown to misdirect autonomous cars into oncoming traffic or push them to dangerous speeds, and the same techniques can make business processes suddenly go haywire. Unlike traditional cyberattacks, however, the aim is usually not to destroy the AI or take down systems, but to take control of the AI for the attacker’s benefit, such as diverting data or funds, or simply causing trouble.

Image-based training data is among the most vulnerable, says Stanford University cryptography professor Dan Boneh. Typically, a hacker will use the fast gradient sign method (FGSM), which makes pixel-level changes to training images that are undetectable by the human eye but sow confusion into training models. These “adversarial examples” are very difficult to detect, yet they can alter the results of algorithms in a wide variety of ways, even if the attacker only has access to the inputs, the training data and the outputs. And as AI algorithms become increasingly dependent on open-source tools, hackers will gain greater access to the algorithms themselves as well.
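
The core of FGSM fits in a few lines. Below is a minimal sketch in PyTorch; the model, input tensors and epsilon value are stand-ins for illustration, not details from the article.

```python
# Minimal FGSM sketch: nudge each pixel by epsilon in the direction of
# the loss gradient's sign -- too small for a human to notice, yet often
# enough to flip the model's prediction. Model and inputs are stand-ins.
import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 8 / 255) -> torch.Tensor:
    """Return an adversarially perturbed copy of a batched NCHW image."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```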

How to protect your AI

What can the enterprise do to protect itself? According to Great Learning’s Akriti Galav and SEO consultant Saket Gupta, the three key steps to take right now are:

  • Maintain the strictest possible security protocols across the entire data environment.
  • Make sure all records from all operations performed by AI are logged and placed into an audit trail (a sketch of one approach follows this list).
  • Implement strong access control and authentication.
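
On the audit-trail point, one common pattern is an append-only, hash-chained log, so that tampering with earlier records becomes detectable. The sketch below is purely illustrative; the file path, field names and chaining scheme are assumptions, not anything prescribed by Galav and Gupta.

```python
# Illustrative append-only audit trail for model operations, written as
# hash-chained JSON lines: each record stores the hash of the previous
# one, so altering history breaks the chain. All names are assumptions.
import hashlib
import json
import time

AUDIT_LOG = "model_audit.jsonl"  # hypothetical log location

def log_prediction(model_version: str, input_id: str, output: str) -> None:
    """Append one audit record, chained to the hash of the previous line."""
    try:
        with open(AUDIT_LOG, "rb") as f:
            prev_hash = hashlib.sha256(f.readlines()[-1]).hexdigest()
    except (FileNotFoundError, IndexError):
        prev_hash = "genesis"  # first record in a fresh log
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "input_id": input_id,
        "output": output,
        "prev_hash": prev_hash,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

log_prediction("fraud-model-1.3", "txn-0042", "flagged")
```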

As well, organizations should pursue longer-term strategic goals, such as developing a data protection policy specifically for AI training, educating the workforce about the risks to AI and how to spot faulty outcomes, and maintaining an ongoing risk assessment mechanism that is both dynamic and forward-looking.

No digital system can be 100% secure, regardless of how intelligent it is. The dangers inherent in compromised AI are more subtle, but no less consequential, than those in traditional platforms, so the enterprise needs to update its security policies to reflect this new reality now, rather than wait until the damage is done.

And just as with legacy technology, securing AI is a two-pronged effort: reducing the means and opportunity of attack, and minimizing damage and restoring credibility as quickly as possible when the inevitable does happen.

Author: Arthur Cole
Source: VentureBeat
