Different Time Systems

Hello everyone,
I attended IRDC last weekend. There was a talk by Brett Gildersleeve (the developer of Rogue Space Marine and Helix) called "Real Time Synchronous Turn Systems in Roguelikes". It analyzed the turn-based systems currently used in roguelikes and compared them to his game Rogue Space Marine. The talk was great, but it only covered the classic systems and left out a lot of ideas that could be explored, which inspired me to write about the different systems and which games use them. Here are the systems:


Asynchronous Turn Based System: The player takes a turn, and after it finishes each enemy takes its turn. This is the classic technique used in lots of games (NetHack, Moria, Dungeon Dashers, etc.).
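As a sketch, the core loop of an asynchronous turn-based game can be as simple as this (the names here are made up for illustration, not from any of the games mentioned):

```python
def run_turn(world_log, player_action, enemies):
    """One asynchronous turn: the player acts first,
    then every enemy takes its turn in order."""
    world_log.append(("player", player_action))
    for enemy in enemies:
        world_log.append((enemy, "acts"))
    return world_log

log = run_turn([], "move_north", ["rat", "kobold"])
```

The key property is strict ordering: nothing happens while the game waits for input, and enemies only act after the player's action resolves.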

Synchronous Turn Based System: The player and the enemies act at the same time, and the system resolves collisions by some precedence rule. Some games add a speed parameter to create different enemies. Every enemy or NPC moves relative to the player: if it is slower than the player it might take more than one player turn to move, while if it is faster it might move two tiles for every one player move. For example, Cardinal Quest and Super-W-Hack!.
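One common way to implement such a speed parameter is an energy scheduler; this is a hypothetical sketch, not taken from any of the games mentioned:

```python
def simulate(actors, ticks, threshold=100):
    """Energy-based scheduling: every tick each actor gains `speed`
    energy and takes a turn once its energy reaches the threshold.
    An actor with speed 50 acts half as often as one with speed 100."""
    energy = {name: 0 for name, _ in actors}
    log = []
    for _ in range(ticks):
        for name, speed in actors:
            energy[name] += speed
            if energy[name] >= threshold:
                energy[name] -= threshold
                log.append(name)
    return log

# Player at speed 100, zombie at speed 50: the zombie moves every other turn.
moves = simulate([("player", 100), ("zombie", 50)], ticks=4)
```
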

Real Time System: Everything runs every frame; the game is continuous and smooth, and collisions are resolved each frame. For example, Nuclear Throne, The Binding of Isaac, Spelunky, etc.

Real Time Synchronous System: This is the system the talk was about. The player moves on a tile grid, but while he is moving everything else runs in real time (so enemy bullets can be dodged). For example, Rogue Space Marine.

Bullet Time System: The game runs in real time until enemies come near, then it switches into bullet time. No roguelike uses this system right now, but it would be amazing if someone did. The game currently using it is SuperHot.

Physics Based Turn System: The player moves with a velocity, and the turn ends when the physics simulation comes to rest. For example, Billiard Dungeon.

Real Time Pause System: Everything runs in real time, but while the player isn't moving nothing else moves. For example, The Spooky Cave.

Real Time Rhythm System: Everything moves on a rhythm; even if the player doesn't move, the enemies will. Missing the beat makes the enemies move while the player stays in the same place. For example, Crypt of the NecroDancer.

Real Time Slot Pause System: The game is fully paused while the player makes all his decisions, then it runs in real time for a fixed time slot. The only game I know that uses this is not a roguelike, but it would be cool in one: The Random Encounter.

Time Accumulation Synchronous System: The game saves the amount of time the player spends not taking his turn, and when he finally acts it repeats the action for that amount of time. It sounds weird and complicated, but we could cap the saved time at a maximum. It hasn't been done before, and I would love to try it one day. If the system is instead used to penalize the player, it becomes similar to the Real Time Rhythm System (when the player does nothing, the game penalizes him by moving the enemies).
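A tiny sketch of how the banked time could be converted into repeated turns (all numbers here are made-up parameters):

```python
def turns_for_idle_time(idle_seconds, seconds_per_turn=0.5, max_turns=5):
    """Time accumulation: the longer the player waited before acting,
    the more turns the chosen action repeats, capped at max_turns so
    banked time can't be exploited endlessly."""
    banked_turns = int(idle_seconds / seconds_per_turn)
    return max(1, min(banked_turns, max_turns))
```
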

These are all the systems I can think of that could be used in roguelike games. I think I need to organize them into something like a 2D matrix of the different parameters that can be mixed.

Bye Bye


GECCO16: General Video Game Level Generation

Hello everyone,
This is my talk at GECCO16 for our paper "General Video Game Level Generation".

GVG-LG Framework GECCO.001

Hello everyone, I am Ahmed Khalifa, a PhD student at NYU Tandon School of Engineering, and today I am gonna present my paper "General Video Game Level Generation".

GVG-LG Framework GECCO.002

We want to design a framework that allows people to develop general level generators for any game described in the Video Game Description Language.

GVG-LG Framework GECCO.003

So what is level generation? It is using computer algorithms to design levels for games. The industry has been using it for a very long time: at the beginning the reason was technical limitations, but now it is used to provide more content to the user, and it enables new types of games. The problem is that all the well-known generators are designed for a specific game, relying on domain knowledge to make the levels as polished as possible. Doing that for every new game is exhausting, so we wanted a single algorithm that can be used on multiple games. To do that, we need a way to describe the game to the generator.

GVG-LG Framework GECCO.004

That’s why we are using the Video Game Description Language (VGDL). It’s a scripting language that makes creating different games super easy. The script shown is for a game that is kind of like the dungeon system from The Legend of Zelda. VGDL is simple: you need to define four sections, SpriteSet, LevelMapping, InteractionSet, and TerminationSet. The SpriteSet defines the game objects, such as the goal; the LevelMapping maps level characters to those objects; the InteractionSet defines the interactions between different objects; and the TerminationSet defines how the game should end.
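To give a flavor, a minimal VGDL description might look roughly like this (a simplified sketch from memory, not the exact script shown on the slide):

```
BasicGame
    SpriteSet
        wall   > Immovable
        goal   > Immovable color=GREEN
        avatar > MovingAvatar
    LevelMapping
        w > wall
        g > goal
        A > avatar
    InteractionSet
        avatar wall > stepBack
        goal avatar > killSprite
    TerminationSet
        SpriteCounter stype=goal limit=0 win=True
```

Here the player wins once the goal sprite count drops to zero, i.e. after walking into the goal.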

GVG-LG Framework GECCO.005

Here are some more examples showing that VGDL can describe very different games, such as Freeway, where the player tries to reach the exit by crossing the street.

GVG-LG Framework GECCO.006

Snowman is a puzzle game where the player tries to build a snowman at the end position by placing the pieces correctly. It's inspired by A Good Snowman Is Hard To Build, a famous indie game about creating snowmen, giving them names, and hugging them.

GVG-LG Framework GECCO.007

Let's get back to the framework. We wanted to make it as simple as possible: each user writes a level generator that receives a VGDL description of the game and has to return a generated level.

GVG-LG Framework GECCO.008

The game description object gives the user all the information in the VGDL and also tags the sprites according to their interactions. For example, if an object blocks the player's movement and doesn't interact with anything else, it is considered solid; if it is the player, it is tagged as the avatar; if it could kill the player, it's a harmful object; and collectible objects are the ones that get destroyed.
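The tagging rules described above can be sketched as a simple classifier (the function and its flags are hypothetical, not the framework's actual API):

```python
def tag_sprite(is_avatar, kills_avatar, destroyed_by_avatar, blocks_avatar):
    """Classify a sprite from how it interacts with the player,
    mirroring the tags described above: avatar, harmful,
    collectible, solid, or other."""
    if is_avatar:
        return "avatar"
    if kills_avatar:
        return "harmful"
    if destroyed_by_avatar:
        return "collectible"
    if blocks_avatar:
        return "solid"
    return "other"
```
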

GVG-LG Framework GECCO.009

We designed three different level generators and shipped them with the framework. The first one is the random generator, which places game objects at random locations on the map.

GVG-LG Framework GECCO.010

The second one is the constructive generator, which tries to use the available information to design a good level. It starts by calculating percentages for each object type, then draws a level layout, places the avatar at a random location, places harmful objects at positions proportional to their distance from the avatar, and then adds goal objects. After that it goes through a post-processing step where it adds more goal objects if needed to satisfy the goal constraints.
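The constructive pipeline above can be sketched like this (a toy version; the object counts and farthest-first distance bias are simplifying assumptions, not the paper's exact method):

```python
import random

def constructive_generate(width, height, num_harmful=3, num_goals=1, seed=0):
    """Toy constructive generator: place the avatar at random,
    put harmful objects on the tiles farthest from it, then add goals."""
    rng = random.Random(seed)
    free = [(x, y) for x in range(width) for y in range(height)]
    level = {}
    avatar = rng.choice(free)
    level[avatar] = "A"
    free.remove(avatar)
    # harmful objects go far from the avatar (farthest-first as a crude bias)
    free.sort(key=lambda p: abs(p[0] - avatar[0]) + abs(p[1] - avatar[1]),
              reverse=True)
    for _ in range(num_harmful):
        level[free.pop(0)] = "h"
    for _ in range(num_goals):
        level[free.pop(rng.randrange(len(free)))] = "g"
    return level

level = constructive_generate(8, 6)
```
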

GVG-LG Framework GECCO.011

The third one is a search-based generator that uses the Feasible-Infeasible Two-Population (FI-2Pop) genetic algorithm to generate the level. We seed the population with levels generated by the constructive algorithm to ensure the GA finds better levels. We use one-point crossover, where the contents of two levels are swapped around a certain point. The mutations used are create, which creates a random object at a random tile, and delete, which deletes all objects from a random position.
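Representing a level as a flat string of tiles, the crossover and the two mutations can be sketched as follows (a simplified illustration, not the framework's actual operators):

```python
import random

def one_point_crossover(a, b, rng):
    """Swap the tails of two equal-length level strings around a random cut."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(level, object_tiles, rng):
    """Create: put a random object at a random tile.
    Delete: clear a random tile ('.' means empty)."""
    i = rng.randrange(len(level))
    if rng.random() < 0.5:
        return level[:i] + rng.choice(object_tiles) + level[i + 1:]  # create
    return level[:i] + "." + level[i + 1:]                           # delete

rng = random.Random(7)
child_a, child_b = one_point_crossover("aaaa", "bbbb", rng)
```
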

GVG-LG Framework GECCO.012

We used several constraints to ensure the playability of the levels. The first one is the cover percentage, so we never generate overpopulated or very sparse levels.

GVG-LG Framework GECCO.013

We make sure there is exactly one avatar, because if it's not there it's impossible to play the game.

GVG-LG Framework GECCO.014

The sprite number constraint makes sure there is at least one sprite of each type, to keep the levels playable. For example, if there is no key you can't win the level.

GVG-LG Framework GECCO.015

The goal constraint makes sure that none of the goal conditions are satisfied at the start, because if they are you win the moment the game starts.

GVG-LG Framework GECCO.016

The death constraint makes sure that if the player does nothing for the first 40 game steps he won't die, because if the player dies right at the beginning the game is unplayable: humans can't react that fast.

GVG-LG Framework GECCO.017

The last ones are the win constraint and the solution length, which make sure the levels are winnable but not trivially so: the best agent must take a certain number of steps before finishing.
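Ignoring the constraints that need agents to simulate play (death, win, solution length), the static checks can be sketched like this (tile characters and thresholds are made up for illustration):

```python
def passes_static_constraints(level, cover_min=0.1, cover_max=0.6,
                              required=("g", "k")):
    """Static checks: cover percentage within bounds, exactly one
    avatar 'A', and at least one of each required sprite type
    ('.' is an empty tile)."""
    tiles = [t for row in level for t in row]
    cover = sum(t != "." for t in tiles) / len(tiles)
    if not cover_min <= cover <= cover_max:
        return False
    if tiles.count("A") != 1:
        return False
    return all(r in tiles for r in required)
```
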

GVG-LG Framework GECCO.018

These are all the constraints. The fitness consists of two parts. The first is the score difference fitness, inspired by Nielsen's work. He found that the difference in score between the best agent (Explorer) and the worst agent (DoNothing) is bigger in well-designed levels than in randomly generated ones. We use the difference between our best agent and our worst agent for that.

GVG-LG Framework GECCO.019

The second is the unique rule fitness. By analyzing all the games, we noticed that agents playing well-designed levels trigger more unique interaction rules than they do in randomly designed levels.
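Combined, the two fitness parts could look something like this (the weighting is a made-up assumption; the paper's exact formula may differ):

```python
def level_fitness(best_agent_score, do_nothing_score,
                  unique_rules_fired, total_rules, rule_weight=1.0):
    """Two-part fitness: (1) score gap between the best agent and the
    DoNothing agent, (2) fraction of the game's interaction rules that
    fired during play. Bigger is assumed better on both parts."""
    score_part = best_agent_score - do_nothing_score
    rule_part = unique_rules_fired / total_rules
    return score_part + rule_weight * rule_part
```
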

GVG-LG Framework GECCO.020

To verify that, we did a user study where the user plays two levels from a certain game and chooses which is better. We used three different games, excluding puzzle games, as so far no agent excels at playing them.

GVG-LG Framework GECCO.021

The results show that users preferred the search-based levels over the constructive ones, which was expected, while they preferred random over constructive, though without huge significance.

GVG-LG Framework GECCO.022

One of the main reasons, we think, is that people can't differentiate between constructive and random levels, but the random generator always ensures all the sprites are there, which makes its levels playable, while the constructive generator, although it looks better, sometimes misses a key or something.

GVG-LG Framework GECCO.023

We ran a level generation competition one week ago. We had four competitors (one based on cellular automata, three based on search-based approaches).

GVG-LG Framework GECCO.024

The winner was the simple one: the cellular automaton didn't put tons of objects in the level, which made its levels somewhat playable, while all the others overloaded their levels with tons of objects, so players died instantly.

GVG-LG Framework GECCO.025

For future work, we are enhancing the search-based generator to produce better-designed levels, improving the framework based on the competitors' feedback, and rerunning the competition while encouraging more people to participate.

GVG-LG Framework GECCO.026

That’s it, Thanks.

That’s it for now.
Bye, Bye


IJCAI16 Talk: Modifying MCTS for Humanlike Video Game Playing

Hello everyone,
It's been ages since the last post 😀 On Thursday, July 14th, I gave a talk about my paper "Modifying MCTS for Humanlike Video Game Playing", with Aaron Isaksen, Andy Nealen, and Julian Togelius, at IJCAI16. Thanks to Aaron, who captured a video of my talk. Here it is:

We also made a poster for the conference, which looked amazing. Here is the poster:

Humanlike MCTS Poster.001

If the video is not clear, I am posting the slides here with my comments:

Humanlike MCTS New.001

Hello everyone, I am Ahmed Khalifa, a PhD student at NYU Tandon School of Engineering. Today I am gonna talk about my paper "Modifying MCTS for Humanlike Video Game Playing".

Humanlike MCTS New.002

We are trying to modify the Monte Carlo Tree Search algorithm to play different games like a human player. We are using the General Video Game Playing framework to test our algorithm.

Humanlike MCTS New.003

Why do we need that? One important reason is to create humanlike NPCs: one of the reasons people play multiplayer games is the lack of realistic NPCs to play with or against. Another is evaluating levels and games for AI-assisted tools. For example, if you gave these two levels to a human, he would pick the one on the left, as it is playable by a human, while the one on the right is super difficult, maybe even impossible, to play.

Humanlike MCTS New.004

Before we start, what is General Video Game Playing, whose framework we are using? It's a competition for general intelligence where competitors create agents that play different unseen games. These games are written in a scripting language called the Video Game Description Language. Every 40 msec the agent has to select one of the available actions: up, down, left, right, the use button, or NIL, which means do nothing. A gameplay trace is a sequence of these actions.

Humanlike MCTS New.005

Here are two videos that show the difference between a human player and the MCTS agent. On the left you can see that humans tend to go towards their goal and only act when necessary (for example, only attack when a monster is near), while the MCTS agent on the right is stuck in the upper left corner, moving into walls and attacking the air and the walls.

Humanlike MCTS New.006

By analyzing the play traces of both human players and the MCTS agent on different games, we found that humans tend to repeat the same action multiple times before changing it. The first graph shows that humans have about a 50% tendency to repeat the same action twice. Humans also tend to use more NILs and to repeat them more during the play trace, while the third graph shows that MCTS has a much higher tendency to change actions than humans: humans don't change their action 70% of the time.

Humanlike MCTS New.007

In order to achieve similar play traces, we need to modify MCTS. These are the four main steps of MCTS.

Humanlike MCTS New.008

We tried modifying each step on its own, but none of them made a big change in the distribution except the selection step.

Humanlike MCTS New.009

The selection step uses the UCB equation to select the next node.

Humanlike MCTS New.010

The UCB equation consists of two terms, an exploitation term and an exploration term. The exploitation term biases the selection towards the best node, while the exploration term pushes MCTS to explore less-visited nodes.
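For reference, the standard UCB1 formulation can be sketched as follows (C is the exploration constant):

```python
import math

def ucb1(total_value, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 = exploitation (average value so far)
           + exploration (bonus that shrinks with visit count)."""
    if visits == 0:
        return float("inf")  # always expand unvisited children first
    exploitation = total_value / visits
    exploration = c * math.sqrt(math.log(parent_visits) / visits)
    return exploitation + exploration
```
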

Humanlike MCTS New.011

We modified the equation by adding a new bonus term that consists of three parts:
Human Modeling
Trivial Branch Pruning
Map Exploration Bonus
We also replaced the exploitation term with a MixMax term.
We are going to explain all these terms in detail in the upcoming slides.

Humanlike MCTS New.012

We added a bonus value that shifts the MCTS distribution to be similar to the human distribution. As you can see from the video, the agent now tends to repeat the same action and use more NILs, with a lower action-to-new-action frequency. But it is still stupid: stuck in the corner, attacking the air, moving into walls.

Humanlike MCTS New.013

That's why we added the second term, which avoids selecting stupid nodes (like attacking walls or moving into walls). As we can see, the agent stopped attacking the air, and whenever it gets stuck at a wall it changes its action or applies NIL. But it's still stuck in the corner.

Humanlike MCTS New.014

So we added a bonus term that rewards nodes with less-visited positions on the map. As we can see, the agent now goes everywhere and explores. But it is a coward: it avoids attacking the spiders.

Humanlike MCTS New.015

So we replaced the exploitation term with a MixMax term, which uses a mixture of the node's best value and its average value instead of the average alone. As we can see, the agent becomes courageous, moves towards the enemies, and kills them.
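A sketch of the MixMax idea (the mixing weight q here is a made-up value, not the one used in the paper):

```python
def mixmax(avg_value, max_value, q=0.25):
    """MixMax exploitation: blend the best value ever seen through a
    node with its average value, instead of using the average alone.
    Higher q makes the agent more optimistic about risky moves."""
    return q * max_value + (1 - q) * avg_value
```
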

Humanlike MCTS New.016

Analyzing the play traces after all these modifications, our BoT (Bag of Tricks) algorithm is more similar to humans than MCTS in action repetition and NIL repetition, and it has fewer action-to-new-action changes.

Humanlike MCTS New.017

To verify these results we conducted a user study. In the study, each user watched two videos and had to specify which one was more likely to be human, and why.

Humanlike MCTS New.018

The study showed that our BoT algorithm was judged more human than MCTS, but still not as human as actual humans, except in PacMan, where it deceived the participants 60% of the time.

Humanlike MCTS New.019

When we analyzed the participants' comments, we found that the main reasons for recognizing an agent were the same issues we tried to solve: jitteriness (changing directions very quickly), useless moves (attacking walls, moving into them), no long-term planning (getting stuck in one of the corners), too-fast reaction times, and overconfidence (killing all the enemies and becoming an overachiever).

Humanlike MCTS New.020

Thanks for listening.

That’s everything for now.