When is ART useful? When it’s IBM’s Adversarial Robustness Toolbox for AI

IBM is hoping to advance the state of the art for artificial intelligence (AI) security with an open-source project called the Adversarial Robustness Toolbox (ART).

Today, ART is being made available on Hugging Face as a set of tools that will help AI users and data scientists reduce potential security risks. While ART on Hugging Face is new, the overall effort is not. ART was started back in 2018 and was contributed to the Linux Foundation in 2020 as an open-source effort. IBM has been developing ART over the last several years as part of a DARPA effort known as Guaranteeing AI Robustness Against Deception (GARD).

As AI usage grows rapidly, so does the threat of attacks against AI systems. Common issues include training data poisoning, in which malicious samples are inserted into a model’s training set, and evasion attacks, which manipulate inputs at inference time to confuse the model.
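
To make the poisoning side concrete, here is a minimal sketch using ART’s backdoor poisoning attack. The random stand-in data, trigger pattern and target class are illustrative assumptions chosen for brevity, not part of IBM’s announcement; only the ART imports reflect the library’s actual API.

```python
import numpy as np
from art.attacks.poisoning import PoisoningAttackBackdoor
from art.attacks.poisoning.perturbations import add_pattern_bd

# Stand-in training batch: 100 grayscale 28x28 images with one-hot labels.
# (Random data is used here purely for illustration.)
x_train = np.random.rand(100, 28, 28).astype(np.float32)
y_target = np.zeros((100, 10), dtype=np.float32)
y_target[:, 0] = 1.0  # the attacker's chosen target class

# Stamp a small pixel-pattern trigger onto each image and pair it with the
# target label; a model trained on this data learns the hidden backdoor.
backdoor = PoisoningAttackBackdoor(add_pattern_bd)
x_poisoned, y_poisoned = backdoor.poison(x_train, y=y_target)
```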

By releasing ART on Hugging Face, IBM aims to make these defensive AI security tools available to more AI developers to help mitigate threats. Organizations that use AI models from Hugging Face can now more easily test their models against evasion and poisoning attack examples and integrate defenses into their workflows.
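
As a sketch of what that workflow can look like, the snippet below wraps an image classifier from the Hugging Face Hub in ART’s Hugging Face estimator and crafts evasion examples with the Fast Gradient Method. The checkpoint name, shapes and hyperparameters are illustrative assumptions, and the wrapper signature follows ART’s documented PyTorch estimators; check it against the ART release you use.

```python
import numpy as np
import torch
from transformers import AutoModelForImageClassification
from art.estimators.classification.hugging_face import HuggingFaceClassifierPyTorch
from art.attacks.evasion import FastGradientMethod

# Any image-classification checkpoint from the Hub works in principle;
# this DeiT model is just an example choice.
model = AutoModelForImageClassification.from_pretrained("facebook/deit-tiny-patch16-224")

# Wrap the model so ART's attacks and defenses can drive it.
classifier = HuggingFaceClassifierPyTorch(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-4),
    input_shape=(3, 224, 224),
    nb_classes=1000,
    clip_values=(0.0, 1.0),
)

# Stand-in inputs; in practice, use real preprocessed images.
x_test = np.random.rand(8, 3, 224, 224).astype(np.float32)

# Evasion: small, bounded perturbations intended to flip the model's predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.03)
x_adv = attack.generate(x=x_test)
```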


“Hugging Face hosts a pretty big set of popular state-of-the-art models,” Nathalie Baracaldo Angel, manager of AI Security and Privacy Solutions at IBM, told VentureBeat. “This integration allows the community to use the red-blue team tools that are part of ART for Hugging Face models.”

From DARPA to Hugging Face, the journey of ART

While there is broad interest in AI today, IBM’s efforts to help secure AI predate the current generative AI era.

Angel noted that ART, as an open-source effort, is already part of the Linux Foundation’s LF AI & Data project, and that as part of that effort it receives contributions from a wide range of people and organizations. Additionally, she said that DARPA has provided funding to IBM to maintain and extend ART’s capabilities as part of the GARD project.

With today’s news, she emphasized, nothing changes for ART within the Linux Foundation; the toolbox simply gains support for Hugging Face models. Hugging Face has become very popular over the past year as a place where organizations and individuals share and collaborate on AI models. IBM has multiple collaborations with Hugging Face, including one involving a geospatial AI model jointly developed with NASA.

What is adversarial robustness about and why does it matter for AI?

The concept of adversarial robustness is critical to improving the security of AI systems.

Angel explained that adversarial robustness is about acknowledging that an adversary may attempt to trick the machine learning pipeline to their advantage, and then acting to defend that pipeline.

“This field requires an understanding of what the adversary can do to compromise the machine learning pipeline – a red team approach – and subsequently selecting defenses to mitigate relevant risks,” she said.
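
On the blue-team side, one common mitigation in ART is adversarial training, which folds attack examples back into training. Below is a minimal sketch that reuses the classifier and attack from the earlier snippet; the stand-in data and hyperparameters are illustrative assumptions.

```python
import numpy as np
from art.defences.trainer import AdversarialTrainer

# Stand-in labeled data; in practice, use the real training set.
x_train = np.random.rand(32, 3, 224, 224).astype(np.float32)
y_train = np.eye(1000, dtype=np.float32)[np.random.randint(0, 1000, size=32)]

# Train on a mix of clean and adversarial batches (ratio controls the mix)
# so the model learns to resist the attack it is being evaluated against.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=3, batch_size=16)
```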

Since its creation in 2018, the risks facing AI have changed, and ART has changed along with them. Angel said that ART has added a variety of attacks and defenses for multiple modalities, as well as support for object detection, object tracking, audio and several types of models.

“Most recently, we have been working on adding multi-modal models such as CLIP, which will be added soon to the system,” she said. “As with everything in the security field, there is a need to keep adding new tools as attacks and defenses keep evolving.”



Author: Sean Michael Kerner
Source: Venturebeat
Reviewed By: Editorial Team

