Mega Man Being Used to Evaluate AI

As the development of artificial intelligence (AI) advances, the need for reliable benchmarks grows with it. AIs are typically tested across a plethora of games to show they can handle a wide range of scenarios. However, AIs are rarely tested in scenarios that are unfamiliar to them, and so they fall short of what many would consider "true AI." Video games remain a go-to choice for this kind of testing: an AI evaluated in the environment it was trained on tends to post strong results, while the same AI often fails in unfamiliar environments, which makes performance on unseen content a more honest measure of its actual ability.

Via Analytics India Magazine, researchers at the Heuristics, Analysis and Learning Laboratory (HAL) at the Federal University of ABC in Brazil have begun using Mega Man 2 as a testing environment for AI. The objective is for an AI agent to take control of Mega Man and defeat the game's eight robot masters: Metal Man, Air Man, Bubble Man, Quick Man, Crash Man, Flash Man, Heat Man, and Wood Man. The framework built around the test is dubbed "EvoMan."

Mega Man 2 is considered difficult even among human players. Unlike a human player, who earns a new weapon after defeating each boss, the AI controls a Mega Man equipped only with the default arm cannon. Within the EvoMan framework, developers are allowed to train their AI against four of the bosses, but not all eight. The AI is then expected to develop a general model that can defeat every opponent by reacting to incoming attacks and shooting in the direction of the enemy, as the sketch below illustrates. Each boss has unique patterns of movement and attack, which makes developing a universal AI for all of the bosses difficult.
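To make the generalization test concrete, here is a minimal sketch of the train-on-four, evaluate-on-eight protocol the article describes. The `simulate` stub, the boss numbering, and the choice of training bosses are hypothetical placeholders for illustration, not the EvoMan framework's actual API, and the scoring here is simplified (the real scoring formula is covered further down).

```python
import random

# All eight Mega Man 2 robot masters, numbered 1-8 as placeholders.
ALL_BOSSES = list(range(1, 9))

def simulate(boss: int, controller) -> tuple[float, float]:
    """Hypothetical stand-in for one EvoMan fight.
    Returns the final energies (agent, boss) after the match."""
    # A real run would load the boss stage and play it out;
    # here we just return pseudo-random energies for illustration.
    random.seed(boss)
    return random.uniform(0, 100), random.uniform(0, 100)

def evaluate(controller, bosses: list[int]) -> float:
    """Average energy advantage of the controller across the given bosses."""
    results = [simulate(b, controller) for b in bosses]
    return sum(ep - ee for ep, ee in results) / len(results)

# Training is restricted to a subset of four bosses...
train_bosses = [2, 5, 7, 8]
controller = object()  # placeholder for an evolved agent

print("training score:", evaluate(controller, train_bosses))
# ...but the final benchmark covers all eight, including the unseen four.
print("generalization score:", evaluate(controller, ALL_BOSSES))
```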

The AI agent has a total of twenty sensors that track the distances from the agent to the enemy boss and its bullets, along with the directions both parties are facing. When pitted in battle, the AI agent and the boss each start with 100 energy. Each attack that lands depletes a point, and whoever reaches zero life points first loses the match. At the end of a run, the AI is scored as a maximization problem based on the difference between the energy the agent kept and the energy the boss had remaining.
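Since the article mentions both the twenty sensors here and, below, a two-layer neural network weighted by a genetic algorithm, here is a minimal sketch of what such a controller could look like. The five action outputs (left, right, jump, shoot, release), the hidden layer size, and the exact sensor layout are assumptions for illustration, not details confirmed by the article.

```python
import numpy as np

N_SENSORS = 20   # distances to the boss and its bullets, plus facing flags
N_HIDDEN = 10    # hidden layer size is an arbitrary choice here
N_ACTIONS = 5    # assumed actions: left, right, jump, shoot, release

def decide(sensors: np.ndarray, genome: np.ndarray) -> list[bool]:
    """Two-layer feedforward controller: 20 sensor readings in,
    5 boolean action decisions out."""
    # Unpack one flat genome (as a genetic algorithm would evolve it)
    # into two weight matrices and two bias vectors.
    i = 0
    w1 = genome[i:i + N_SENSORS * N_HIDDEN].reshape(N_SENSORS, N_HIDDEN)
    i += N_SENSORS * N_HIDDEN
    b1 = genome[i:i + N_HIDDEN]
    i += N_HIDDEN
    w2 = genome[i:i + N_HIDDEN * N_ACTIONS].reshape(N_HIDDEN, N_ACTIONS)
    i += N_HIDDEN * N_ACTIONS
    b2 = genome[i:i + N_ACTIONS]

    hidden = np.tanh(sensors @ w1 + b1)              # layer 1
    output = 1 / (1 + np.exp(-(hidden @ w2 + b2)))   # layer 2, sigmoid
    return [o > 0.5 for o in output]                 # press a button if > 0.5

# Example: a random genome and a random sensor snapshot.
genome_size = N_SENSORS * N_HIDDEN + N_HIDDEN + N_HIDDEN * N_ACTIONS + N_ACTIONS
rng = np.random.default_rng(0)
print(decide(rng.standard_normal(N_SENSORS), rng.standard_normal(genome_size)))
```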

The abstract explains that the score is computed as Gain = 100.01 + e_p - e_e, where e_p is the final energy of the AI agent and e_e is the energy remaining for the enemy boss. According to the abstract, the constant 100.01 is added so that the harmonic mean across all bosses always produces a valid result. The intended goal is not to post the highest score possible against any single boss, but rather to maintain a consistent score across all of them. The agents scored this way include a NEAT algorithm and a two-layer neural network whose weights were evolved by a genetic algorithm.
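The scoring is simple enough to verify directly. Below is a minimal sketch of the per-boss gain and the harmonic mean across all eight fights, using the formula from the abstract; the example energy values are made up.

```python
def gain(player_energy: float, enemy_energy: float) -> float:
    """Gain = 100.01 + e_p - e_e, as defined in the abstract.
    The 100.01 offset keeps the value strictly positive even in the
    worst case (e_p = 0, e_e = 100), so the harmonic mean stays defined."""
    return 100.01 + player_energy - enemy_energy

def harmonic_mean(values: list[float]) -> float:
    return len(values) / sum(1.0 / v for v in values)

# Hypothetical final energies (agent, boss) for the eight fights.
fights = [(60, 0), (0, 45), (25, 0), (0, 80), (90, 0), (10, 0), (0, 5), (40, 0)]
gains = [gain(ep, ee) for ep, ee in fights]

# The harmonic mean punishes uneven performance: one bad fight drags
# the overall score down more than one great fight lifts it.
print(harmonic_mean(gains))
```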

Video games have a history of being used to teach AI models generalized learning in an attempt to create "true AI." DeepMind developed an AI that ranked above 99.8% of the human players it was matched against in StarCraft 2, and OpenAI used hundreds of Atari games through the Arcade Learning Environment to overcome its overfitting issue. Overfitting in machine learning is when a model is trained on a specific dataset but fails to generalize and underperforms on unfamiliar data. The inverse of overfitting is underfitting, where a model generalizes too much. By using Mega Man 2 in the EvoMan framework, the researchers have created a more reliable benchmark for testing the general learning ability of AI. EvoMan is open for the testing of any AI.

Griffin Gilman: Gaming may very well be half of my personality, so it is only natural that I write about games. The best genre is RPGs, while the best game is Nier Automata. That's not an opinion but a matter of fact.