
Nvidia helps enterprises guide and control AI responses with NeMo Guardrails



A primary challenge for generative AI and large language models (LLMs) is the risk that a user will get an inappropriate or inaccurate response.

The need to safeguard organizations and their users is well understood by Nvidia, which today released the new NeMo Guardrails open-source framework to help solve the challenge. NeMo Guardrails gives organizations building and deploying LLMs for different use cases, including chatbots, a way to make sure responses stay on track. The guardrails are a set of controls, defined in a new policy language, that help define and enforce limits so AI responses remain topical and safe and do not introduce any security risks.


“We think that every enterprise will be able to take advantage of generative AI to support their businesses,” Jonathan Cohen, vice president of applied research at Nvidia, said during a press and analyst briefing. “But in order to use these models in production, it’s important that they’re deployed in a way that is safe and secure.”


Why guardrails matter for LLMs

Cohen explained that a guardrail is a guide that helps keep the conversation between a human and an AI on track. 

Nvidia sees three primary categories where AI guardrails are needed. The first is topical guardrails, which make sure that an AI response stays on topic and keeps the correct tone.

Safety guardrails are the second primary category, designed to make sure that responses are accurate and fact-checked. Responses also need to be checked to ensure they are ethical and don’t include any sort of toxic content or misinformation. Cohen pointed to AI “hallucinations” as a key reason safety guardrails are needed: with a hallucination, an LLM generates an incorrect response when it doesn’t have the correct information in its knowledge base.

The third category of guardrails where Nvidia sees a need is security. Cohen commented that as LLMs are allowed to connect to third-party APIs and applications, they can become an attractive attack surface for cybersecurity threats.

“Whenever you allow a language model to actually execute some action in the world, you want to monitor what requests are being sent to that language model,” Cohen said.  
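
To make that monitoring idea concrete, here is a generic sketch of the pattern Cohen describes. It is not NeMo Guardrails code; the blocked-pattern list and function names are invented for illustration only.

```python
# Generic sketch of a security guardrail: inspect every request before
# an LLM-triggered action is allowed to execute. Not NeMo-specific;
# BLOCKED_PATTERNS and guarded_call are illustrative names.
BLOCKED_PATTERNS = ("drop table", "rm -rf", "ignore previous instructions")

def guarded_call(request: str, execute_action) -> str:
    """Run an LLM-requested action only if the request passes checks."""
    if any(pattern in request.lower() for pattern in BLOCKED_PATTERNS):
        return "Request blocked by security guardrail."
    return execute_action(request)
```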

How NeMo Guardrails works

With NeMo Guardrails, Nvidia is adding another layer to the stack of tools and models organizations can consider when deploying AI-powered applications.

The Guardrails framework is code deployed between the user and an LLM-enabled application. NeMo Guardrails can work directly with an LLM or with LangChain. Cohen noted that many modern AI applications use the open-source LangChain framework to build applications that chain LLMs together with other components.
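
A minimal sketch of that layering with the open-source nemoguardrails Python package is below; it follows the project’s published quickstart pattern, and the ./config directory holding the guardrail definitions is an assumed example path.

```python
# Minimal sketch: NeMo Guardrails sits between the user and the LLM.
# Assumes the `nemoguardrails` package is installed and that ./config
# holds a config.yml (model settings) plus Colang guardrail files.
from nemoguardrails import LLMRails, RailsConfig

# Load the guardrail configuration from disk.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# User messages pass through the guardrails layer on the way to the
# LLM, and the model's output passes back through it on the way out.
response = rails.generate(messages=[
    {"role": "user", "content": "What can you help me with?"}
])
print(response["content"])
```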

Cohen explained that NeMo Guardrails monitors conversations both to and from the LLM-powered application with a sophisticated contextual dialogue engine. The engine tracks the state of the conversation and provides a programmable way for developers to implement guardrails.

The programmable nature of NeMo Guardrails is enabled by Colang, a new policy language that Nvidia has also created. Cohen said that Colang is a domain-specific language for describing conversational flows.

“Colang source code reads very much like natural language,” Cohen said. “It’s a very easy to use tool, it’s very powerful and it lets you essentially script the language model in something that looks almost like English.”
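
As an illustration of that English-like style, here is a hedged sketch of a small Colang flow loaded inline via the package’s from_content helper; the greeting flow mirrors the project’s introductory examples, while the exact utterances are invented for illustration.

```python
# Sketch of a Colang flow defined inline. The flow pairs a recognized
# user intent with a canned bot response; wording is illustrative.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user express greeting
  "hello"
  "hi there"

define bot express greeting
  "Hello! How can I help you today?"

define flow greeting
  user express greeting
  bot express greeting
"""

# from_content also accepts yaml_content for model settings; omitted
# here for brevity, so package defaults are assumed to apply.
config = RailsConfig.from_content(colang_content=colang_content)
rails = LLMRails(config)
```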

At launch, Nvidia is providing a set of templates with pre-built common policies for implementing topical, safety and security guardrails. The technology is freely available as open source, and Nvidia will also provide commercial support for enterprises as part of the Nvidia AI Enterprise suite of software tools.

“Our goal really is to enable the ecosystem of large language models to evolve in a safe, effective and useful manner,” Cohen said. “It’s difficult to use language models if you’re afraid of what they might say, and so I think guardrail solves an important problem.”



Author: Sean Michael Kerner
Source: VentureBeat
