Reinforcement learning, an AI training technique that employs rewards to drive software policies toward goals, has been applied successfully to domains from industrial robotics to drug discovery. But while firms including OpenAI and Alphabet’s DeepMind have investigated its efficacy in video games like Dota 2, Quake III Arena, and StarCraft 2, few to date have studied its use under constraints like those encountered in the game industry.
That’s presumably why Ubisoft La Forge, game developer Ubisoft’s eponymous prototyping space, proposed in a recent study an algorithm that’s able to handle discrete and continuous video game actions in a “principled” and predictable way. The team set it loose on a “commercial game” (likely The Crew or The Crew 2, though neither is explicitly mentioned) and reports that it’s competitive with the state of the art on benchmark tasks.
“Reinforcement Learning applications in video games have recently seen massive advances coming from the research community, with agents trained to play Atari games from pixels or to be competitive with the best players in the world in complicated imperfect information games,” wrote the coauthors of a paper describing the work. “These systems have comparatively seen little use within the video game industry, and we believe lack of accessibility to be a major reason behind this. Indeed, really impressive results … are produced by large research groups with computational resources well beyond what is typically available within video game studios.”
The Ubisoft team, then, sought to devise a reinforcement learning approach that’d address common challenges in video game development. They note that data sample collection is generally much slower in game studios than in research settings, and that agents must meet strict time budgets on their runtime performance.
Their solution is based on the Soft Actor-Critic architecture proposed early last year by researchers at the University of California, Berkeley, which is more sample-efficient than traditional reinforcement learning algorithms and which robustly learns to generalize to conditions that it hasn’t seen before. They extend it to a hybrid setting with both continuous and discrete actions, a situation often encountered in video games (e.g., when a player has the freedom to perform actions like moving and jumping, each of which are associated with parameters like target coordinates and direction).
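To make the hybrid action idea concrete, here is a minimal sketch of how a policy head might emit both action types at once: a continuous part sampled from a squashed Gaussian (as in Soft Actor-Critic) alongside a discrete part sampled from a categorical distribution. All names, dimensions, and weights below are illustrative assumptions, not the paper’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hybrid_action(obs, weights):
    """Sample one hybrid action: continuous values from a tanh-squashed
    Gaussian plus one discrete choice from a categorical distribution
    (a simplified, hypothetical sketch of a Hybrid SAC-style policy head)."""
    # Shared hidden features (a single linear layer for illustration).
    h = np.tanh(obs @ weights["hidden"])

    # Continuous head: mean and log-std, e.g. for steering and acceleration.
    mean = h @ weights["mu"]
    log_std = np.clip(h @ weights["log_std"], -5.0, 2.0)
    # Reparameterized sample, squashed into [-1, 1] with tanh.
    cont = np.tanh(mean + np.exp(log_std) * rng.standard_normal(mean.shape))

    # Discrete head: logits over, e.g., {hand brake off, hand brake on}.
    logits = h @ weights["logits"]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    disc = rng.choice(len(probs), p=probs)
    return cont, disc

# Toy dimensions: 8-d observation, 16 hidden units,
# 2 continuous actions, 2 discrete choices.
obs_dim, hid, n_cont, n_disc = 8, 16, 2, 2
weights = {
    "hidden": rng.standard_normal((obs_dim, hid)) * 0.1,
    "mu": rng.standard_normal((hid, n_cont)) * 0.1,
    "log_std": rng.standard_normal((hid, n_cont)) * 0.1,
    "logits": rng.standard_normal((hid, n_disc)) * 0.1,
}
cont, disc = sample_hybrid_action(rng.standard_normal(obs_dim), weights)
```

In a full SAC-style implementation both heads would also contribute entropy terms to the objective; this sketch only shows how a single forward pass can yield the mixed action tuple a game engine would consume.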
The Ubisoft researchers evaluated their algorithm on three environments designed to benchmark reinforcement learning systems, including a simple platformer-like game and two soccer-based games. They claim that its performance fell slightly short of industry-leading techniques, which they attribute to an architectural quirk. But they say that in a separate test, they successfully used it to train a video game vehicle with two continuous actions (acceleration and steering) and one binary discrete action (hand brake), the objective being to follow a given path as fast as possible in environments the agent didn’t encounter during training.
“We showed that Hybrid SAC can be successfully applied to train a car on a high-speed driving task in a commercial video game,” wrote the researchers, who further noted that their approach can accommodate a wide range of potential ways for an agent to interact with a video game environment, such as when the agent has the same inputs as a player (whose controller might be equipped with an analog stick that provides continuous values and buttons that can be pressed to yield discrete actions through combinations). “[This demonstrates] the practical usefulness of such an algorithm for the video game industry.”
Author: Kyle Wiggers
Source: VentureBeat