
Adversarial AI and the dystopian future of tech



AI is a rapidly growing technology with many benefits for society. However, as with all new technologies, it carries a risk of misuse. One of the most troubling forms of misuse is the adversarial AI attack.

In an adversarial AI attack, AI is used to maliciously manipulate or deceive another AI system. Most AI programs learn, adapt and evolve through behavioral learning, which leaves them open to exploitation: anyone who can influence the data an algorithm learns from, or the inputs it receives, can push it toward adversarial results. Cybercriminals and threat actors can exploit this vulnerability for malicious ends.

Although most adversarial attacks so far have been performed by researchers within labs, they are a growing matter of concern. A successful adversarial attack on an AI or machine learning algorithm exposes a deep crack in the underlying mechanism. Such vulnerabilities can stunt AI growth and development and pose a significant security risk for anyone using AI-integrated systems. Therefore, to fully utilize the potential of AI systems and algorithms, it is crucial to understand and mitigate adversarial AI attacks.

Understanding adversarial AI attacks

Although the modern world is deeply layered with AI, the technology has yet to be fully adopted. Since its advent, AI has faced ethical criticism, which has fostered a common hesitation to embrace it completely. The growing concern that vulnerabilities in machine learning models and AI algorithms can be turned to malicious ends is a further hindrance to AI/ML growth.

The basic goal of any adversarial attack is the same: manipulating an AI algorithm or an ML model so it produces malicious results. In practice, an adversarial attack typically takes one of two forms (a toy sketch contrasting them follows the list):

  • Poisoning: the ML model is fed inaccurate or misleading data during training, duping it into learning an erroneous decision boundary and making erroneous predictions.
  • Contaminating: an already trained ML model is fed maliciously designed inputs at inference time, deceiving it into making malicious predictions or taking harmful actions.
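
To make the distinction concrete, here is a minimal, hypothetical Python sketch contrasting the two attack styles on a toy scikit-learn classifier. The dataset, model and perturbation size are purely illustrative and not taken from any particular study.

```python
# Hypothetical sketch: poisoning vs. contamination on a toy classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Clean baseline model trained on untampered data.
clean_model = LogisticRegression(max_iter=1000).fit(X, y)

# 1) Poisoning: corrupt part of the *training* labels before fitting,
#    so the model itself learns a skewed decision boundary.
rng = np.random.default_rng(0)
y_poisoned = y.copy()
flipped = rng.choice(len(y), size=50, replace=False)
y_poisoned[flipped] = 1 - y_poisoned[flipped]
poisoned_model = LogisticRegression(max_iter=1000).fit(X, y_poisoned)

# 2) Contamination: leave training alone and instead nudge one
#    *inference-time* input against the trained model's weight vector,
#    hoping the already-trained model misclassifies it.
x = X[0].copy()
w = clean_model.coef_[0]
direction = np.sign(clean_model.decision_function([x])[0])
x_adv = x - 0.5 * direction * np.sign(w)

print("clean model on clean input:    ", clean_model.predict([x])[0])
print("poisoned model on clean input: ", poisoned_model.predict([x])[0])
print("clean model on perturbed input:", clean_model.predict([x_adv])[0])
```

The point of the sketch is the asymmetry: poisoning requires access to the training pipeline, while contamination needs nothing more than the ability to submit inputs to a deployed model.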

Of the two methods, contamination is the more likely to become widespread. Because it only requires a malicious actor to feed crafted inputs to a deployed model, it can be combined with other attacks and scaled quickly. Poisoning, in contrast, seems easier to control and prevent, since tampering with a training dataset generally requires an insider; such insider threats can be mitigated with a zero-trust security model and other network security protocols.

Even so, protecting a business against adversarial threats is a hard task. While typical online security issues can be mitigated with tools such as residential proxies, VPNs or antimalware software, adversarial AI attacks can bypass these defenses entirely, rendering such tools too primitive to provide meaningful protection.

How is adversarial AI a threat?

AI is already a key, well-integrated part of critical fields such as finance, healthcare and transportation, where security failures can directly endanger human lives. Because AI is so deeply woven into daily life, adversarial threats against it can wreak massive havoc.

In 2018, a report from the Office of the Director of National Intelligence highlighted several adversarial machine learning threats. Among them, one of the most pressing concerns was the potential for these attacks to compromise computer vision algorithms.

Research has already produced several examples of such attacks. In one well-known study, researchers added small changes, or “perturbations,” invisible to the naked eye, to an image of a panda. The changes caused the ML algorithm to classify the image of the panda as a gibbon.
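
The panda result came from a gradient-based perturbation; the fast gradient sign method (FGSM) is the canonical example. Below is a minimal, hypothetical NumPy sketch of the same idea applied to a toy logistic model rather than a deep image classifier; the weights, input and epsilon are invented for illustration.

```python
# Hypothetical FGSM-style perturbation on a toy logistic "classifier".
import numpy as np

rng = np.random.default_rng(42)
w = rng.normal(size=784)           # stand-in weights of an already-trained model
b = 0.0
x = rng.uniform(0, 1, size=784)    # stand-in "image", flattened to a vector
y = 1.0                            # true label of the image

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(x, y):
    """Gradient of the cross-entropy loss with respect to the *input*."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

# FGSM: take one small step in the direction that most increases the loss,
# then clip back to the valid pixel range so the change stays inconspicuous.
epsilon = 0.05
x_adv = np.clip(x + epsilon * np.sign(input_gradient(x, y)), 0.0, 1.0)

print("confidence in true class, clean:    ", sigmoid(w @ x + b))
print("confidence in true class, perturbed:", sigmoid(w @ x_adv + b))
print("largest per-pixel change:           ", np.abs(x_adv - x).max())
```

Each value moves by at most epsilon, which is why the altered image can look identical to the original while the model's output shifts sharply.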

Similarly, another study demonstrated AI contamination: attackers duped facial recognition cameras with infrared light, evading accurate recognition and enabling them to impersonate other people.

Adversarial attacks are also evident in email spam filter manipulation. Because spam filters work in part by tracking certain trigger words, attackers can evade them by substituting acceptable words and phrases, gaining access to the recipient’s inbox (a toy illustration of this follows the list below). Taken together, these examples make the impact of adversarial AI attacks on the cyber threat landscape easy to identify:

  • Adversarial AI can render AI-based security tools, such as phishing filters, useless.
  • Many IoT devices rely on AI; adversarial attacks on them could lead to large-scale hacking attempts.
  • AI tools collect personal information; attackers can manipulate these tools into revealing it.
  • AI is part of modern defense systems; adversarial attacks on defense tools can put national security at risk.
  • Adversarial techniques can enable new varieties of attacks that go undetected.
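
As a toy illustration of the spam-filter point above, the sketch below shows a hypothetical keyword-based filter and a rephrased message that slips past it. The trigger words, threshold and messages are invented; real filters are far more sophisticated, but the evasion principle is the same.

```python
# Hypothetical keyword-based spam filter and a rephrased, evasive message.
TRIGGER_WORDS = {"free", "winner", "prize", "urgent", "click"}

def is_spam(message, threshold=2):
    """Flag a message if it contains at least `threshold` trigger words."""
    words = set(message.lower().split())
    return len(words & TRIGGER_WORDS) >= threshold

original = "urgent winner click now to claim your free prize"
evasive = "time-sensitive congratulations, visit now to claim your complimentary reward"

print(is_spam(original))  # True: several trigger words present
print(is_spam(evasive))   # False: same intent, rephrased to dodge the triggers
```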

Given these risks, it is ever more crucial to maintain security and vigilance against adversarial AI attacks.

Is there any prevention?

Given AI’s potential to make human lives more manageable and sophisticated, researchers are already devising ways to protect systems against adversarial AI. One such method is adversarial training, which pre-trains the machine learning algorithm against poisoning and contamination attempts by exposing it to possible perturbations during training.
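
A minimal sketch of what adversarial training can look like is shown below, assuming a simple logistic model written in NumPy: each training step fits on the clean examples plus FGSM-perturbed copies of them, so the model "sees" possible perturbations while it learns. The data, learning rate and epsilon are illustrative.

```python
# Hypothetical adversarial-training loop for a logistic model in NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)

w = np.zeros(20)
lr, epsilon = 0.1, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    # Craft adversarial copies of the batch against the current model.
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]           # dLoss/dInput per example
    X_adv = X + epsilon * np.sign(grad_x)

    # One gradient step on the clean + adversarial batch combined.
    X_aug = np.vstack([X, X_adv])
    y_aug = np.concatenate([y, y])
    p_aug = sigmoid(X_aug @ w)
    grad_w = X_aug.T @ (p_aug - y_aug) / len(y_aug)  # dLoss/dWeights
    w -= lr * grad_w

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print("training accuracy after adversarial training:", accuracy)
```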

In the case of computer vision, the algorithm is trained on images along with their alterations. For example, a car’s vision algorithm designed to identify a stop sign will have learned its possible alterations, such as stickers, graffiti or even missing letters, and will still identify the sign correctly despite the attacker’s manipulations. However, this method is not foolproof, since it is impossible to anticipate every possible variation of an adversarial attack.
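
For the stop-sign scenario, the altered training examples can be generated as a form of data augmentation. The sketch below is a rough, hypothetical illustration that pastes random occluding patches onto a stand-in image to mimic stickers or graffiti; the image, patch size and counts are placeholders.

```python
# Hypothetical augmentation: occlude random patches to mimic stickers/graffiti.
import numpy as np

def random_occlusion(image, patch_size=8, rng=None):
    """Return a copy of the image with one square patch blanked out."""
    rng = rng or np.random.default_rng()
    height, width = image.shape[:2]
    top = rng.integers(0, height - patch_size)
    left = rng.integers(0, width - patch_size)
    altered = image.copy()
    altered[top:top + patch_size, left:left + patch_size] = 0.0  # the "sticker"
    return altered

rng = np.random.default_rng(2)
stop_sign = rng.uniform(0, 1, size=(32, 32, 3))  # stand-in for a real photo
augmented = [random_occlusion(stop_sign, rng=rng) for _ in range(5)]
print(len(augmented), "altered training copies generated")
```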

Another defense uses non-intrusive image-quality features to distinguish between legitimate and adversarial inputs, potentially neutralizing adversarial inputs and alterations before they ever reach the classifier. A related method is pre-processing and denoising, which automatically removes possible adversarial noise from the input.
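
A minimal sketch of the pre-processing idea is shown below, using bit-depth reduction (a simple form of feature squeezing) as the denoising step before a stand-in classifier. The weights, input and perturbation are purely illustrative.

```python
# Hypothetical pre-processing defense: squeeze inputs before classification.
import numpy as np

def squeeze_bit_depth(x, bits=3):
    """Quantize values in [0, 1] down to 2**bits levels, washing out tiny noise."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def classify(weights, x):
    """Stand-in classifier: a linear decision made on the denoised input."""
    x_clean = squeeze_bit_depth(x)   # pre-processing happens before the model sees x
    return int(weights @ x_clean > 0)

rng = np.random.default_rng(1)
weights = rng.normal(size=64)
image = rng.uniform(0, 1, size=64)
perturbed = np.clip(image + 0.02 * np.sign(rng.normal(size=64)), 0.0, 1.0)

print("decision on clean input:    ", classify(weights, image))
print("decision on perturbed input:", classify(weights, perturbed))
```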

Conclusion 

Despite its prevalent use in the modern world, AI has yet to reach maturity. Although machine learning and AI have expanded into, and even come to dominate, some areas of our daily lives, they remain significantly under development. Until researchers fully understand both the potential and the vulnerabilities of AI and machine learning, there will remain a gaping hole in our ability to mitigate adversarial threats to the technology. Research on the matter is ongoing, however, precisely because it is critical to AI development and adoption.

Waqas is a cybersecurity journalist and writer.



Author: Waqas
Source: Venturebeat
