
NVIDIA made an open source tool for creating safer and more secure AI models


Since March, NVIDIA has offered AI Foundations, a service that allows businesses to train large language models (LLMs) on their own proprietary data. Today the company is introducing NeMo Guardrails, a tool designed to help developers ensure their generative AI apps are accurate, appropriate and safe.

NeMo Guardrails allows software engineers to enforce three different kinds of limits on their in-house LLMs. Specifically, firms can set “topical guardrails” that prevent their apps from addressing subjects they weren’t trained to tackle. For instance, NVIDIA suggests that, with the help of its software, a customer service chatbot would decline to answer a question about the weather. Companies can also set safety guardrails, designed to keep an LLM’s responses accurate and appropriate, and security guardrails, which restrict the model to connecting only with third-party apps that are known to be safe.
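
To give a rough sense of what a topical guardrail like the weather example might look like in practice, here is a minimal sketch using the open source nemoguardrails Python package. The Colang flow, the example utterances and the model named in the YAML are illustrative assumptions for this sketch, not details taken from NVIDIA’s announcement.

# A minimal topical-guardrail sketch with the nemoguardrails package.
# The rail below steers a customer-service bot away from weather questions.
from nemoguardrails import LLMRails, RailsConfig

# Colang definitions: example user utterances, a canned bot reply, and a
# flow that connects them. All names and phrasings here are illustrative.
colang_content = """
define user ask about weather
  "What's the weather like today?"
  "Will it rain this weekend?"

define bot explain cannot help with weather
  "I can only help with questions about your orders and our products."

define flow weather question
  user ask about weather
  bot explain cannot help with weather
"""

# The backing LLM is configured in YAML; the engine/model shown here is
# just one possible choice.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(colang_content=colang_content,
                                  yaml_content=yaml_content)
rails = LLMRails(config)

# A weather question should now hit the guardrail instead of the LLM.
reply = rails.generate(messages=[{"role": "user",
                                  "content": "Is it going to rain tomorrow?"}])
print(reply["content"])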

According to NVIDIA, NeMo Guardrails works with all LLMs, including ChatGPT. What’s more, the company claims nearly any software developer can use the tool. “No need to be a machine learning expert or data scientist,” it says. Since NeMo Guardrails is open source, NVIDIA notes it will also work with all the tools enterprise developers already use.
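
For a sense of how the same rails can be pointed at different LLM backends, here is a second small sketch, again hedged: the ./config path and the order-status question are made up, and the models block shown in the comment is only one way to wire the rails to a ChatGPT-style model.

# Loading a guardrails configuration from disk and running it against a
# chat-style model. The "./config" folder is hypothetical; it would hold a
# config.yml plus any .co (Colang) files defining the rails.
from nemoguardrails import LLMRails, RailsConfig

# The YAML's `models` section is where the backing LLM is chosen, e.g.:
#
#   models:
#     - type: main
#       engine: openai
#       model: gpt-3.5-turbo
#
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

reply = rails.generate(messages=[{"role": "user",
                                  "content": "Where is my order #1234?"}])
print(reply["content"])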

NVIDIA is incorporating NeMo Guardrails into its existing NeMo framework for building generative AI models. Business customers can gain access to NeMo through the company’s AI Enterprise software platform. NVIDIA also offers the framework through its AI Foundations service. The release of NeMo Guardrails comes after some of the most high-profile generative AIs, including Microsoft Bing and Google Bard, have come under the microscope for their tendency to “hallucinate” information. In fact, Google’s chatbot made a factual error during its first public demo.

“NVIDIA made NeMo Guardrails — the product of several years’ research — open source to contribute to the developer community’s tremendous energy and work on AI safety,” NVIDIA said. “Together, our efforts on guardrails will help companies keep their smart services aligned with safety, privacy and security requirements so these engines of innovation stay on track.”

If you want to read a deep dive into how NeMo Guardrails works, NVIDIA has published a blog post on the subject that also shares information on how to get started with the software.


Author: Igor Bonifacic
Source: Engadget
