
Researchers want to use Mega Man 2 to evaluate AI

Games have long served as a training ground for AI algorithms, and not without good reason. Games — particularly video games — provide challenging environments against which to benchmark autonomous systems. In 2013, a team of researchers introduced the Arcade Learning Environment, a collection of over 55 Atari 2600 games designed to test a broad range of AI techniques. More recently, San Francisco research firm OpenAI detailed Procgen Benchmark, a set of 16 virtual worlds that measure how quickly models learn generalizable skills.

The next frontier might be Mega Man, if an international team of researchers gets its way. In a newly published paper on the preprint server arXiv.org, the team proposes EvoMan, a game-playing competition based on the eight boss fights in Capcom’s cult classic Mega Man 2. As the authors describe it, competitors’ goal is to train an AI agent to defeat every boss, with performance judged by a common set of metrics.

Why Mega Man 2? The paper’s coauthors assert that few other environments test an AI’s ability to defeat a single enemy, how well it generalizes to matches against waves of enemies, or how it coevolves against increasingly difficult opponents. To this end, in EvoMan, an AI-controlled Mega Man, a robot equipped with a powerful arm cannon, must beat eight so-called Robot Masters, each armed with a different weapon. Every time a Robot Master is defeated, the agent acquires its weapon, making it easier to defeat the remaining bosses.

Above: A table of scores achieved by an agent in the EvoMan benchmark. (Image Credit: EvoMan)

As proposed, EvoMan challenge entrants would train their agents on any four of the bosses and then measure how well the learned strategy generalizes to the full set of eight. The agents would be expected to learn to identify and react to general patterns, like avoiding enemy shots and firing in the direction of the enemy, and to deplete an enemy’s health from 100 energy points to 0 by the end of each match.

“The main goal of this competition is that a given agent perform equally good for every boss,” wrote the coauthors, who suggest that competitors be ranked by the mean of performances across bosses. “The winner … will be the one agent that performs equally well on each one of the eight bosses, hopefully defeating them all.”
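The article does not pin down an exact scoring function, so the following Python sketch is only an illustration of the proposed protocol: train on four bosses, score against all eight, and aggregate with a plain mean. The train_agent and run_match functions are hypothetical stand-ins for a competitor’s own training code and the EvoMan game wrapper, and the per-boss score used here (remaining player energy minus remaining enemy energy) is an assumption for illustration rather than an official metric.

import random
from statistics import mean

ALL_BOSSES = range(1, 9)       # the eight Robot Masters
TRAIN_BOSSES = [1, 3, 5, 7]    # any four bosses chosen by the competitor

def train_agent(train_bosses):
    # Hypothetical placeholder for a competitor's training loop.
    return {"trained_on": list(train_bosses)}

def run_match(agent, boss_id):
    # Hypothetical placeholder for one EvoMan match; returns the remaining
    # player and enemy energy, each of which starts at 100.
    return random.randint(0, 100), random.randint(0, 100)

def score_match(agent, boss_id):
    # Assumed per-boss score: remaining player energy minus remaining enemy energy.
    player_energy, enemy_energy = run_match(agent, boss_id)
    return player_energy - enemy_energy

agent = train_agent(TRAIN_BOSSES)                                 # train on a subset...
scores = {boss: score_match(agent, boss) for boss in ALL_BOSSES}  # ...but score on all eight
print(scores)
print("Mean score across bosses:", mean(scores.values()))

An agent that performs strongly only against the bosses it trained on would be dragged down by the mean, which matches the authors’ stated goal of rewarding uniform performance across all eight fights.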

Atari titles and Mega Man 2 are far from the only games against which AI has been trained and evaluated. In July, a paper published by Facebook researchers posited that Minecraft was well-suited to refining natural language understanding algorithms, and both OpenAI and Google parent company Alphabet’s DeepMind have fine-tuned systems on Valve’s Dota 2 and Activision Blizzard’s StarCraft II. Separately, scientists at Facebook AI Research, the Lorraine Research Laboratory in Computer Science and its Applications, and University College London are developing LIGHT, an open source research environment in the form of a large-scale, crowdsourced text adventure within which AI agents and humans interact as player characters.


Author: Kyle Wiggers
Source: VentureBeat
