
How NIST is moving ‘trustworthy AI’ forward with its AI risk management framework



Is your AI trustworthy or not? As the adoption of AI solutions increases across the board, consumers and regulators alike expect greater transparency over how these systems work. 

Today’s organizations not only need to identify how AI systems process data and make decisions to ensure they are ethical and bias-free, but they also need to measure the level of risk these solutions pose. The problem is that there is no universal standard for creating trustworthy or ethical AI.

However, last week the National Institute of Standards and Technology (NIST) released an expanded draft of its AI risk management framework (RMF), which aims to “address risks in the design, development, use, and evaluation of AI products, services, and systems.”

The second draft builds on its initial March 2022 version of the RMF and a December 2021 concept paper. Comments on the draft are due by September 29. 


The RMF defines trustworthy AI as being “valid and reliable, safe, fair and bias is managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy-enhanced.”

NIST’s move toward ‘trustworthy AI’ 

The new voluntary NIST framework provides organizations with parameters they can use to assess the trustworthiness of the AI solutions they use daily. 
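
For illustration only, a minimal Python sketch of such an assessment might look like the following. The scorecard structure, scoring scale and threshold are hypothetical, not part of the NIST framework; only the characteristic names come from the draft’s definition quoted above.

```python
from dataclasses import dataclass, field

# The trustworthiness characteristics named in the draft RMF definition above.
CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "fair, with bias managed",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
]

@dataclass
class TrustworthinessAssessment:
    """Hypothetical scorecard for one AI system; scores run 0 (unmet) to 5 (fully met)."""
    system_name: str
    scores: dict = field(default_factory=dict)

    def rate(self, characteristic: str, score: int) -> None:
        if characteristic not in CHARACTERISTICS:
            raise ValueError(f"unknown characteristic: {characteristic}")
        self.scores[characteristic] = max(0, min(5, score))

    def gaps(self, threshold: int = 3) -> list:
        # Characteristics left unscored or scored below the (arbitrary) threshold.
        return [c for c in CHARACTERISTICS if self.scores.get(c, 0) < threshold]

assessment = TrustworthinessAssessment("fraud-detection-model")
assessment.rate("secure and resilient", 4)
assessment.rate("explainable and interpretable", 2)
print(assessment.gaps())  # characteristics still needing attention
```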

The importance of this can’t be overstated, particularly when regulations like the EU’s General Data Protection Regulation (GDPR) give data subjects the right to ask why an organization made a particular decision about them. Failure to provide an answer could result in a hefty fine.

While the RMF doesn’t mandate best practices for managing the risks of AI, it does begin to codify how an organization can measure the risk of AI deployment.

The AI risk management framework provides a blueprint for conducting this risk assessment, said Rick Holland, CISO at digital risk protection provider Digital Shadows.

“Security leaders can also leverage the six characteristics of trustworthy AI to evaluate purchases and build them into Request for Proposal (RFP) templates,” Holland said, adding that the model could “help defenders better understand what has historically been a ‘black box’ approach.”
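
As a rough sketch of Holland’s suggestion (the mapping and the questions below are hypothetical examples, not NIST or Digital Shadows language), the characteristics could be turned into an RFP section:

```python
# Hypothetical mapping from trustworthiness characteristics to RFP questions.
rfp_questions = {
    "explainable and interpretable":
        "Can the vendor explain, per decision, which inputs drove the model's output?",
    "secure and resilient":
        "How is the model protected against adversarial inputs and data poisoning?",
    "accountable and transparent":
        "What documentation (e.g., model cards) is published for this system?",
    "privacy-enhanced":
        "What personal data does the system process, and how is it minimized?",
}

def build_rfp_section() -> str:
    """Render the questions as a numbered section for an RFP template."""
    lines = [
        f"{i}. [{characteristic}] {question}"
        for i, (characteristic, question) in enumerate(rfp_questions.items(), start=1)
    ]
    return "\n".join(lines)

print(build_rfp_section())
```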

Holland notes that Appendix B of the NIST framework, titled “How AI Risks Differ from Traditional Software Risks,” provides risk management professionals with actionable advice on how to conduct these AI risk assessments.

The RMF’s limitations 

While the risk management framework is a welcome addition to support the enterprise’s internal controls, there is a long way to go before the concept of risk in AI is universally understood. 

“This AI risk framework is useful, but it’s only a scratch on the surface of truly managing the AI data project,” said Chuck Everette, director of cybersecurity advocacy at Deep Instinct. “The recommendations in here are that of a very basic framework that any experienced data scientist, engineers and architects would already be familiar with. It is a good baseline for those just getting into AI model building and data collection.”

In this sense, organizations that use the framework should have realistic expectations about what the framework can and cannot achieve. At its core, it is a tool to identify what AI systems are being deployed, how they work, and the level of risk they present (i.e., whether they’re trustworthy or not). 
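
A minimal sketch of that core use might look like this; the inventory fields and risk levels are illustrative, not prescribed by the RMF:

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical inventory: which AI systems are deployed, what they do,
# and the level of risk each presents.
ai_inventory = [
    {"system": "resume-screening", "purpose": "rank job applicants", "risk": RiskLevel.HIGH},
    {"system": "spam-filter", "purpose": "classify inbound email", "risk": RiskLevel.LOW},
]

# Review the highest-risk systems first.
for entry in sorted(ai_inventory, key=lambda e: e["risk"].value, reverse=True):
    print(f"{entry['system']}: {entry['purpose']} (risk: {entry['risk'].name})")
```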

“The guidelines (and playbook) in the NIST RMF will help CISOs determine what they should look for, and what they should question, about vendor solutions that rely on AI,” said Sohrob Jazerounian, AI research lead at cybersecurity provider Vectra.

The draft RMF includes guidance on suggested actions, references and documentation that will enable stakeholders to fulfill the ‘map’ and ‘govern’ functions of the AI RMF. The finalized version, which will include information about the remaining two functions, ‘measure’ and ‘manage’, is slated for release in January 2023.
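
For reference, the four core functions can be summarized as follows; the one-line descriptions are a paraphrase of the draft, not its official text:

```python
# The four core functions of the AI RMF, with paraphrased one-line scopes.
rmf_functions = {
    "map":     "establish context and identify the risks tied to that context",
    "govern":  "cultivate a culture of risk management across the organization",
    "measure": "analyze, assess and track the identified risks",
    "manage":  "prioritize and act on risks based on their projected impact",
}

for function, scope in rmf_functions.items():
    print(f"{function}: {scope}")
```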



Author: Tim Keary
Source: VentureBeat

