OpenAI CTO Mira Murati made the company’s stance on AI regulation crystal clear in a TIME article published over the weekend: Yes, ChatGPT and other generative AI tools should be regulated.
“It’s important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible,” she said in the interview. “But we’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies — definitely regulators and governments and everyone else.”
And when asked whether it was too early for policymakers and regulators to get involved, over fears that government involvement could slow innovation, she said, “It’s not too early. It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”
AI regulations — and AI audits — are coming
In a way, Murati’s opinion matters little: AI regulation is coming, and quickly, according to Andrew Burt, managing partner of BNH AI, a boutique law firm founded in 2020, made up of lawyers and data scientists, that focuses squarely on AI and analytics.
And those laws will often require AI audits, he said, so companies need to get ready now.
“We didn’t anticipate that there would [already] be these new AI laws on the books that say if you’re using an AI system in this area, or if you’re just using AI in general, you need audits,” he told VentureBeat. The AI regulations and auditing requirements coming on the books in the U.S., he explained, sit mostly at the state and municipal level and vary wildly, including New York City’s Automated Employment Decision Tool (AEDT) law and a similar New Jersey bill in the works.
Audits are a necessity in a fast-evolving field like AI, Burt explained.
“AI is moving so fast, regulators don’t have a fully nuanced understanding of the technologies,” he said. “They’re trying not to stifle innovation, so if you’re a regulator, what can you actually do? The best answer that regulators are coming up with is to have some independent party look at your system, assess it for risks, and then you manage those risks and document how you did all of that.”
How to prepare for AI audits
The bottom line is, you don’t need to be a soothsayer to know that audits are going to be a central component of AI regulation and risk management. The question is, how can organizations get ready?
The answer, said Burt, is getting easier and easier. “I think the best answer is to first have a program for AI risk management. You need some program to systematically, and in a standardized fashion, manage AI risk across your enterprise.”
Second, he emphasized, organizations should adopt the new NIST AI Risk Management Framework (RMF), released last week.
“It’s very easy to create a risk management framework and align it to the NIST AI risk management framework within an enterprise,” he said. “It’s flexible, so I think it’s easy to implement and operationalize.”
Four core functions to prepare for AI audits
The NIST AI RMF has four core functions, he explained. First, map: assess what risks the AI could create. Then, measure, quantitatively or qualitatively, so you have a program to actually test. Once you’re done testing, manage: reduce, or otherwise document and justify, the risks that are appropriate for the system. Finally, govern: make sure you have policies and procedures in place that apply not just to one specific system.
“You’re not doing this on an ad hoc basis, but you’re doing this across the board on an enterprise level,” Burt pointed out. “You can create a very flexible AI risk management program around this. A small organization can do it and we’ve helped a Fortune 500 company do it.”
The RMF is easy to operationalize, he continued, but he added that he did not want people to mistake its flexibility for something too generic to actually implement.
“It’s intended to be useful,” he said. “We’ve already started to see that. We have clients come to us saying, ‘this is the standard that we want to implement.’”
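To make those four functions concrete, here is a minimal, hypothetical Python sketch of how an enterprise might wire map, measure, manage and govern into a repeatable per-model workflow. The names here (AIRisk, RiskRegister, map_risks and so on) are illustrative assumptions, not an API published by NIST or BNH AI.

```python
# A minimal, hypothetical sketch of the NIST AI RMF's four functions
# as a repeatable per-model workflow. All names are illustrative
# assumptions, not a published API.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AIRisk:
    name: str                    # e.g., "disparate impact in hiring scores"
    severity: str = "unmeasured" # filled in by the MEASURE step
    mitigation: str = ""         # filled in by the MANAGE step

@dataclass
class RiskRegister:
    model_name: str
    risks: list[AIRisk] = field(default_factory=list)

    def map_risks(self, *risks: AIRisk) -> None:
        """MAP: assess what risks the AI system could create."""
        self.risks.extend(risks)

    def measure(self, test: Callable[[AIRisk], str]) -> None:
        """MEASURE: run a quantitative or qualitative test per risk."""
        for risk in self.risks:
            risk.severity = test(risk)

    def manage(self, mitigations: dict[str, str]) -> None:
        """MANAGE: reduce, or document and justify, each mapped risk."""
        for risk in self.risks:
            risk.mitigation = mitigations.get(risk.name, "")

def govern(registers: list[RiskRegister]) -> None:
    """GOVERN: one enterprise-wide check applied to every system."""
    for reg in registers:
        gaps = [r.name for r in reg.risks if not r.mitigation]
        if gaps:
            print(f"{reg.model_name} has unmanaged risks: {gaps}")

# Example: one model run through map -> measure -> manage -> govern.
register = RiskRegister("resume-screener-v2")
register.map_risks(AIRisk("disparate impact across protected groups"))
register.measure(lambda risk: "high")  # stand-in for a real bias test
register.manage({"disparate impact across protected groups":
                 "third-party bias audit; human review of all rejections"})
govern([register])  # prints nothing: every mapped risk is managed
```

Keeping govern outside any single register mirrors Burt’s point that policies and procedures should apply across the enterprise rather than to one system at a time.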
It’s time for companies to get their AI audit act together
Even though the laws aren’t “fully baked,” Burt said, their arrival won’t be a surprise. So if your organization is investing in AI, it’s time to get your auditing act together.
The easiest answer is aligning with the NIST AI RMF, he said, because, unlike in cybersecurity, which has standardized playbooks, the way big enterprise organizations train and deploy AI is not standardized, so the way it is assessed and documented isn’t either.
“Everything is subjective, but you don’t want that to create liability because it creates additional risks,” he said. “What we tell clients is the best and easiest place to start is model documentation — create a standard documentation template and make sure that every AI system is being documented in accordance with that standard. As you build that out, you start to get what I’ll just call a report for every model that can provide the foundation for all of these audits.”
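As a rough illustration of the kind of standard documentation template Burt describes, here is a short Python sketch. Every field name and value below is a hypothetical example, not a schema drawn from Burt, NIST or any regulation; the point is only that each system gets the same record, filled out the same way.

```python
# A hypothetical standard documentation template: one record per AI
# system, completed identically every time. Field names and values
# are illustrative assumptions, not a mandated schema.

from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDocumentation:
    model_name: str
    owner: str                  # team accountable for the system
    intended_use: str           # what decisions the model informs
    training_data: str          # provenance and known limitations
    risks_identified: list[str]
    mitigations: list[str]
    last_reviewed: str          # ISO date of the last risk review

doc = ModelDocumentation(
    model_name="resume-screener-v2",
    owner="talent-analytics",
    intended_use="rank applicants for recruiter review, not auto-reject",
    training_data="2019-2022 internal hiring outcomes; skews toward US roles",
    risks_identified=["potential disparate impact across protected groups"],
    mitigations=["annual bias audit; recruiter can override every score"],
    last_reviewed="2023-01-31",
)

# Serialize so every model's report lands in one auditable store.
print(json.dumps(asdict(doc), indent=2))
```

A collection of such records is what becomes the “report for every model” Burt describes as the foundation for audits.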
Care about AI? Invest in managing its risks
According to Burt, organizations won’t get the most value out of AI if they are not thinking about its risks.
“You can deploy an AI system and get value out of it today, but in the future something is going to come back and bite you,” he said. “So I would say if you care about AI, invest in managing its risks. Period.”
To get the most ROI from their AI efforts, he continued, companies need to make sure they are not violating privacy, creating security vulnerabilities or perpetuating bias, any of which could open them up to lawsuits, regulatory fines and reputational damage.
“Auditing, to me, is just a fancy word for some independent party looking at the system and understanding how you assess it for risks and how you manage those risks,” he said. “And if you didn’t do either of those things, the audit is going to be pretty clear. It’s going to be pretty negative.”
Author: Sharon Goldman
Source: VentureBeat