
OpenAI releases Safety Gym for reinforcement learning

While much work in data science to date has focused on algorithmic scale and sophistication, safety — that is, safeguards against harm — is a domain no less worth pursuing. This is particularly true in applications like self-driving vehicles, where a machine learning system’s poor judgement might contribute to an accident.

That’s why firms like Intel’s Mobileye and Nvidia have proposed frameworks to guarantee safe and logical decision-making, and it’s why OpenAI — the San Francisco-based research firm cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, and others — today released Safety Gym. OpenAI describes it as a suite of tools for developing AI that respects safety constraints while training, and for comparing the “safety” of algorithms and the extent to which those algorithms avoid mistakes while learning.

Safety Gym is designed for reinforcement learning agents, or AI that's progressively spurred toward goals via rewards (or punishments). They learn by trial and error, which can be a risky endeavor: the agents sometimes try dangerous behaviors that lead to mistakes.

As a remedy, OpenAI proposes a form of reinforcement learning called constrained reinforcement learning, which adds cost functions that the AI must keep within specified limits. In contrast to common practice, where an agent's behavior is described by a single reward function tailored to favor its objectives, constrained agents work out the trade-offs needed to satisfy explicitly defined safety requirements.

“In normal [reinforcement learning], you would pick the collision fine at the beginning of training and keep it fixed forever,” OpenAI explains in a blog post. “The problem here is that if the pay-per-trip is high enough, the agent may not care whether it gets in lots of collisions (as long as it can still complete its trips) … [But in] constrained [reinforcement learning,] you would pick the acceptable collision rate at the beginning of training, and adjust the collision fine until the agent is meeting that requirement.”
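To make the contrast concrete, here is a minimal sketch of that adaptive-penalty idea in Python. The names (collision_fine, target_collision_rate) and the update rule are illustrative assumptions rather than OpenAI's code, but the update mirrors the Lagrange-multiplier approach commonly used in constrained reinforcement learning.

```python
# Illustrative sketch of the adaptive-penalty idea behind constrained RL.
# All names and numbers here are hypothetical, not taken from OpenAI's code.

target_collision_rate = 0.01   # acceptable collisions per trip, chosen up front
collision_fine = 1.0           # penalty weight; adjusted throughout training
learning_rate = 0.05

def shaped_reward(trip_payment: float, collisions: int) -> float:
    """Reward the agent actually optimizes: pay minus the weighted collision cost."""
    return trip_payment - collision_fine * collisions

def update_fine(observed_collision_rate: float) -> None:
    """Raise the fine while the agent exceeds the target rate, lower it otherwise.
    This plays the role of a Lagrange-multiplier update in constrained RL."""
    global collision_fine
    collision_fine += learning_rate * (observed_collision_rate - target_collision_rate)
    collision_fine = max(collision_fine, 0.0)  # the penalty weight stays non-negative
```

In standard reinforcement learning, collision_fine would stay fixed at whatever value was chosen at the start; in constrained reinforcement learning, it becomes a dial that training keeps turning until the agent meets the chosen collision rate.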

To this end, Safety Gym introduces environments that require AI agents (Point, Car, Doggo, or a custom design) to navigate cluttered spaces to complete a goal, button, or push task. There are two levels of difficulty, and each time an agent performs an unsafe action (i.e., runs into clutter), a red warning light flashes around it and it incurs a cost.
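The environments follow the familiar Gym interface. The snippet below is a minimal usage sketch that assumes the Safexp-{Robot}{Task}{Level}-v0 environment IDs described in the project's repository, with the per-step safety cost read from the info dictionary.

```python
import gym
import safety_gym  # registers the Safexp-* environments with Gym

# A Point robot navigating clutter toward a goal (assumed environment ID).
env = gym.make('Safexp-PointGoal1-v0')

obs = env.reset()
done = False
total_cost = 0.0
while not done:
    # Random actions here; a real agent would supply its policy's action.
    obs, reward, done, info = env.step(env.action_space.sample())
    total_cost += info.get('cost', 0.0)  # accumulated safety violations

print('episode cost:', total_cost)
```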

Safety Gym ships with standard and constrained reinforcement learning algorithms out of the box in addition to the code used to run experiments, and OpenAI says that preliminary results demonstrate the range of difficulty in Safety Gym environments. The simplest environments are relatively easy to solve and allow for fast iteration, while the hardest environments might be too challenging for current techniques.
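Comparing algorithms in this setting means tracking both how much reward an agent earns and how much cost it accrues. The sketch below shows one way to tally those two quantities, under the same assumptions as above; it is not the experiment code OpenAI released, and the policy argument is a placeholder for any trained agent.

```python
import gym
import safety_gym  # noqa: F401  (registers the Safexp-* environments)

def evaluate(policy, env_id='Safexp-CarGoal1-v0', episodes=10):
    """Roll out a policy and report average episodic return and average
    episodic cost, the two axes along which "safety" can be compared."""
    env = gym.make(env_id)
    returns, costs = [], []
    for _ in range(episodes):
        obs, done = env.reset(), False
        ep_return, ep_cost = 0.0, 0.0
        while not done:
            obs, reward, done, info = env.step(policy(obs))
            ep_return += reward
            ep_cost += info.get('cost', 0.0)
        returns.append(ep_return)
        costs.append(ep_cost)
    return sum(returns) / episodes, sum(costs) / episodes
```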

OpenAI leaves for future work improving performance on the current Safety Gym environments, using Safety Gym to investigate safe AI training techniques, and combining constrained reinforcement learning with implicit specifications such as human preferences. It also hopes to contribute to the formulation of a metric that might measure the “safety” of AI systems.

“[Safety metrics] could feasibly be integrated into assessment schemes that developers use to test their systems, and could potentially be used by the government to create standards for safety,” wrote OpenAI. “We … hope that systems like Safety Gym can make it easier for AI developers to collaborate on safety across the AI sector via work on open, shared systems.”


Author: Kyle Wiggers
Source: VentureBeat
