ELIMINATION from Design to Analysis

This is a postmortem for our game ELIMINATION (you can try it here: http://akhalifa.com/elimination/ in a browser or on Android and iOS devices). It covers the different design decisions made while creating the game and the level generator, and validates some of these theories using player data. This post is an adapted version of our (Ahmed Khalifa, Dan Gopstein, and Julian Togelius) paper that was submitted as a short paper to the CoG conference.

Using procedural content generation (PCG) in games, especially for core game elements such as levels, is a bit of an art. When it is executed well, it enhances the player experience and keeps the player wanting to play again [1], but any mistake in the system can make the game feel boring and repetitive, and might lead to disappointment [2]. Just like game design, PCG systems are designed using designers’ intuition and a trial-and-error process. Designers usually test the system on a group of players and, based on the feedback, adjust the generator until it works as intended [3].


The Sky House Design – Part 1

Hello Everyone!
In the following post, I will be describing my process in designing my latest game, “The Sky House“.

First of all, I would like to thank Joni Kittaka, Ben Benal, Michael Cook, Chris DiMauro, Michael Fewkes, and Fernando Silva for their feedback that helped in improving the overall game and making it more accessible and less evil :D.

I will divide this topic into three separate blog posts because “The Sky House” consists of 42 scenes which would take too much time to write and too much space for just one. This series of posts contains a lot of spoilers, so please play the game first before reading.

The idea for this game started a long time ago, after I finished a tiny game with Samer Abbas for the GameZanga (an Arabic game jam that happens every year in the MENA region) called “Loss“. I liked the idea of having a winged character with very precise controls that allows the player to avoid most traditional platformer challenges (as I am not good with most platformers that utilize momentum). For a long time, I was trying to find the best environment to introduce this character, until I remembered playing a game by “Roger Hicks” called “Celestial Mechanica“. I remembered how fascinated I was by the idea of the flying citadel (you can notice the similarity between that and my flying house). At that time, “PICO-8” was becoming very famous and I wanted to experiment with it before buying it. As a PhD student, I have very little money in general, so when I Googled “free alternatives,” I found “TIC-80”, a fantastic open source fantasy console. I decided to build my game in TIC-80.

The above images show my first draft of the game’s main story, the challenges, and the overall map, whose connections are inspired by “The Legend of Zelda” and “Metroid” series. I wanted the player to be able to see the different locked branches from the beginning of the game. I also wanted to use the “soft locks” idea, similar to the one used in the first “The Legend of Zelda” and the latest, “Breath of the Wild”, for controlling the order of visiting the different dungeons. Locks in general are used to guide the player towards an optimal way to finish the game. Soft locks block progress with challenges that are very difficult for the player’s current skill level; they provide a feeling of openness at the expense of guidance, while hard locks are used to create a more carefully guided experience.

In the image above, you can see the final overall game map as represented in the TIC-80 editor. You can see the similarity of the connections between the initial map draft and this one; however, I changed the soft-lock idea a little. I only used soft locks for secrets, hidden areas, and dead-end areas. I made sure that there is always something to be done in any dead-end path to give the player a sense of achievement and reward. The player can find collectible coins, shortcuts, or hints for the secrets in other rooms. I also made certain that every secret in the game is hinted at somehow, either through the art or the layout of the level. Doing so helps the players find the secrets without spending time bumping into every wall in the game.

Most of my knowledge in designing this game came from reading Anna Anthropy‘s amazing book “A Game Design Vocabulary” and playing “REDDER” and “VVVVVV“. The images above are notes from my notebook after I got my first feedback on the game; they were used to redesign some of the challenges to make them more focused on the core verbs (mechanics). “The Sky House“‘s core verbs can be seen on the first page: Fly, Move, Jump, Drop Slowly (Gliding), and Drop Fast. Every challenge in the game revolves around these verbs – especially Fly and Glide, as these are what make “The Sky House” stand out among similar games. You will also notice that I decided not to use any text inside the game, either for story or for guiding the player, similar to the approach used in “REDDER“.

This image shows the game’s title screen. The game starts by pressing the “X” button twice instead of once, to hint to the player that pressing “X” twice is an essential element of the game.

To easily explain the rooms in the game, the overall map shows rooms with corresponding numbers beside them. The numbers are written in the same order in which I will explain the rooms. To jump to a specific room, click the links below:

Room1 Room2 Room3
Room4 Room5 Room6
Room7 Room8 Room9
Room10 Room11

I would like to note that the world of the game is divided into multiple regions – like in “Metroid” – with each region having a different set of challenges ramping from easy to hard as the player advances, ending with the player getting a key. In this first part of the blog series, Room1 to Room5 are the first region while Room6 to Room11 are the second region. The first region helps the player learn the basic verbs and uses spikes as the core challenge. The second region is more about precise control and timing, adding laser challenges.



Different types of Tutorials

This blog post is the background section from our paper (by Michael Green, Gabriella Barros, Julian Togelius, and myself). The paper proposes the problem of tutorial generation for games, i.e. generating tutorials that can teach players to play games, as an AI problem. The background section covers the history of tutorials and their different types. I hope this post will help developers and designers design better tutorials for their games.

Tutorials are the first interactions players encounter in a game. They help players understand the game rules and, ultimately, learn how to play. In the game industry, developers have experimented with different tutorial formats [1]. In the arcade era, when most games were meant to be picked up and played quickly, games either had very simple mechanics or contained mechanics that players could relate to: “Press right to move”, “Press up to jump”, and so on. As a result, these games usually lacked a formal tutorial. As games’ complexity increased and home consoles exploded in popularity, formal tutorials became more common.

Some game developers tried an active learning approach, optimized for players who learn through experimentation and exploring carefully designed levels. Games like Megaman X (Capcom, 1993) follow this approach. Other developers relied on old-school techniques, teaching the player everything before they could play the game, as in Hearts of Iron III (Paradox Interactive, 2009). No single technique is always superior to another; different techniques suit different audiences and/or games [2] [3] [4].

Tutorials have evolved significantly over time, from the simple directive of Pong (“Avoid missing the ball for high score”) to the exquisitely detailed in-game database of Civilization [1]. Suddaby describes multiple types of tutorials [5], from none at all to thematically relevant contextual lessons, where the tutorial is ingrained within the game environment.

Tutorial types are related to the different learning capabilities of the users who play them. Sheri Graner Ray (2010) discusses different knowledge acquisition styles in addition to traditional learning styles: Explorative Acquisition and Modeling Acquisition. The first style incorporates a childlike curiosity and “learning by doing”, whereas the second is about knowing how to do something before doing it. We can define at least two distinct tutorial styles from this, one being exploratory during gameplay and the other being more instructional before the game even begins.

Williams suggests that active learning tutorials, which stress player engagement and participation with the skills being learned, may be ineffective when the player never has an isolated place to practice a particularly complex skill [3]. In fact, Williams argues that some active learning tutorials actually ruin the entire game experience for the player for this reason. According to Andersen et al., the effectiveness of tutorials depends on how complex a game is to begin with [2], and sometimes tutorials are not useful at all: game mechanics that are simple enough to be discovered through experimentation may not require a tutorial to explain them. From these two sources, we find our first two boundaries for tutorial generation: there exist mechanics that are too simple to be taught in a tutorial, and there are mechanics complex enough that they need to be practiced in a well-designed environment to be honed.

In general, a game developer would want to use the most suitable tutorial style for their game. For that purpose, they must understand different dimensions/factors that affect the tutorial design process and outcome. Andersen et al. [2] measured how game complexity affects the perceived outcome of tutorials. In their study, they defined 4 dimensions of tutorial classification:

  • Tutorial Presence: whether the game has a tutorial or not.
  • Context Sensitivity: whether the tutorial is a part of the story and game or separate and independent from them.
  • Freedom: whether the player is free to experiment and explore or is forced to follow a set of commands.
  • Availability of Help: whether the player can request help or not.

The classification proposed by Andersen et al. is binary. However, it is useful to see tutorials situated on a continuum between these extremes, as this allows us to gain a more nuanced understanding of game tutorials. For example, the above figure shows the tutorial in Braid (Number None, Inc, 2008) for the time-rewinding mechanic. The tutorial only appears in response to a certain event, i.e. the player’s death; players will not know about the mechanic until their first death. Instead of having the tutorial available at any time or showing how to use the mechanic at the beginning, the developer reveals it when it is first necessary.

Sampling this space and comparing it with current game tutorials, we can find patterns repeated in multiple games. We can highlight the following tutorial types, which are not the only tutorials present in the space, but appear to be the most common ones:

  • Teaching using instructions: These tutorials explain how to play the game by providing the player with a group of instructions to follow, similar to what is seen in boardgames. For example: Strategy games, such as Starcraft (Blizzard, 1998), teach the player by taking them step by step towards understanding different aspects of the game.
  • Teaching using examples: These tutorials explain how to play by showing the player an example of what will happen if they do a specific action. For example: Megaman X uses a Non Playable Character (NPC) to teach the player about the charging skill [6].
  • Teaching using a carefully designed experience: These tutorials explain how to play the game by giving the player freedom to explore and experiment. For example: in Super Mario Bros (Nintendo, 1985), the world 1-1 is designed to introduce players to different game elements, such as goombas and mushrooms, in a way that the player can not miss [8]. One way of seeing that is that early obstacles are instances of patterns, which reoccur later in the game in more complex instantiations or combinations [7].

A game can use more than one tutorial type from the previous list. Arcade games used demos and instructions both to catch the attention of the player and to help them learn. The demos attract more players while simultaneously teaching them how to play. On the other hand, showing an information screen before the game starts, as in Pacman (BANDAI NAMCO, 1980), or displaying instructions on the arcade cabinet, as frequently seen in fighting games, helps the player understand the game and become invested in it. The above figure shows a Street Fighter arcade cabinet with different characters’ combos and moves written on it. Megaman X uses a carefully designed level to teach the player what to do, but still gives an example of the powershot attack if the player misses it.

[1] Therrien, C. 2011. “To get help, please press x” the rise of the assistance paradigm in video game design. In DiGRA Conference.
[2] Andersen, E.; O’Rourke, E.; Liu, Y.-E.; Snider, R.; Lowdermilk, J.; Truong, D.; Cooper, S.; and Popovic, Z. 2012. The impact of tutorials on games of varying complexity. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 59–68. ACM.
[3] Williams, G. C. 2009. The pedagogy of the game tutorial.
[4] Ray, S. G. 2010. Tutorials: learning to play.
[5] Suddaby, P. 2012. The many ways to show the player how it’s done with in-game tutorials.


Game Engines Galore

Hello everyone,

Before I start, I just want to say that this post reflects my own opinions about different game engines and is not intended to be taken beyond that scope.

I had a chat with Dan (a friend of mine at the Game Innovation Lab) about how many new game developers don’t know a lot about different game engines. Once the discussion ended, he asked me to write a post about game engines and rate them according to difficulty and complexity. Nowadays there are more game engines than in the early days of indie game development. The funny thing is that only a few of them are popular, such as Unity, Unreal Engine, and Game Maker.

To list all the engines, I am going to divide them into categories: Educational (made for children), Specialized (for a single genre/type of game), and Generic engines. This categorization is just to help you find the best tool for what you are doing; really, any tool that allows you to add some programming, either by writing code or using logic trees (logic trees are a way of programming by building a tree of conditions), can be used to create any game. The problem is that using any of these tools outside its intended scope is harder than using another tool. Engines also vary in popularity. Popular game engines have a big user base, which helps a lot when you have bugs, while using an unpopular engine can be a good thing because you work very closely with the developers themselves and can request changes from them more easily. These developers may also help you promote your game more than popular engines would, because the success of your game also reflects the success of their engine.

Warning: this is a long post. If you don’t have time, you can jump to a summary table that outlines them all at the end of the post (link).

Educational Engines:

These are tools designed to encourage people to design games and to help them to learn how to do it.

  • Bitsy (http://ledoux.io/bitsy/editor.html): a simple browser-based game editor that doesn’t need any coding or even logic. The tool itself has predefined types of sprites and actions. The user only needs to draw different images, define dialogues for each game character, connect everything together, and TADAAAAAA, you have finished a simple HTML game. All games created with it are top-down story-based games where the player can talk to different objects.
  • Dungeon Decorator (https://lorenschmidt.itch.io/dungeon-decorator): similar to Bitsy, except that it creates platformers. The user needs to design the map and some dialogues, and they have a game. All games created with the tool are story-based platformers where the player can talk to different objects.
  • Scratch (https://scratch.mit.edu/): an MIT tool that helps children create stories, animation, and games. The tool allows you to program your own logic by designing logic trees and attaching them to different objects in the scene. Scratch is more generic than you might expect, but it is hard to design very complicated games with it. Scratch produces HTML games playable on the web, and all the created games are hosted on their website. You can check their top games here (https://scratch.mit.edu/studios/121998/)

Specialized Engines:

These are tools designed to prototype certain game genres/types in a very fast/efficient way.


Different Time Systems

Hello everyone,
I attended IRDC last weekend. There was a talk by Brett Gildersleeve (the developer of Rogue Space Marine and Helix) called “Real Time Synchronous Turn Systems in Roguelikes”. It analyzed the current turn-based systems in roguelikes and compared them to his game Rogue Space Marine. This talk inspired me to write about the different systems and which games use them. His talk was astonishing but missed lots of ideas that could be explored (it just covered the classic stuff). Here are the systems:


Asynchronous Turn Based System: The player takes a turn, and after it finishes, each enemy plays its turn. This is the most classic technique, used in lots of games (NetHack, Moria, Dungeon Dashers, etc.).

Synchronous Turn Based System: The player plays at the same time as the enemies, and the system resolves collisions by some precedence. Some games add a speed parameter to allow different kinds of enemies: every enemy or NPC moves relative to the player, so a slower enemy might take more than one turn to move one tile, while a faster one might move two tiles for every player move. For example, Cardinal Quest and Super-W-Hack!.
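The speed parameter described above is often implemented as an energy system. Here is a minimal sketch of that idea (my own illustration, not code from any of the games mentioned): each actor accumulates energy proportional to its speed every tick, and acts whenever it has enough energy for a turn.

```python
ACTION_COST = 100  # energy required to take one turn

class Actor:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed   # energy gained per tick
        self.energy = 0

def tick(actors):
    """Advance the world by one tick; return names of actors who act."""
    acting = []
    for actor in actors:
        actor.energy += actor.speed
        while actor.energy >= ACTION_COST:
            actor.energy -= ACTION_COST
            acting.append(actor.name)
    return acting

player = Actor("player", 100)   # acts every tick
slow = Actor("slime", 50)       # acts every other tick
fast = Actor("bat", 200)        # acts twice per tick

for _ in range(2):
    print(tick([player, slow, fast]))
```

With this scheme a speed-50 enemy naturally takes two player turns to move, while a speed-200 enemy moves twice per player turn, matching the behavior described above.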

Real Time System: Everything runs at each frame. Everything is continuous and runs smoothly, and collisions are resolved every frame. For example, Nuclear Throne, The Binding of Isaac, and Spelunky.

Real Time Synchronous System: This is the system his talk was about. The player moves on tiles, and while they are moving everything else runs in real time (enemy bullets can be avoided). For example, Rogue Space Marine.

Bullet Time System: The game moves in real time until enemies come near, then the game goes into bullet time. Right now no roguelike uses this system, but it would be amazing if someone did. A current game that uses it is SuperHot.

Physics Based Turn System: The player moves with a velocity, and the turn ends when the physics simulation stops. For example, Billiard Dungeon.

Real Time Pause System: Everything runs in real time, but if the player doesn’t move, nothing moves. For example, The Spooky Cave.

Real Time Rhythm System: Everything moves on a rhythm; even if the player doesn’t move, enemies will. Missing the rhythm makes enemies move while the player stays in the same location. For example, Crypt of the Necrodancer.

Real Time Slot Pause System: The game is totally paused while the player makes all their decisions, then it runs in real time for a fixed time slot. The only game I know that uses this is not a roguelike, but it would be cool in one: The Random Encounter.

Time Accumulation Synchronous System: The game saves the amount of time the player spends not taking their turn, and when they finally act, it repeats the action for that amount of time. It seems weird and complicated, but we might cap the saved time at a maximum. It hasn’t been done before, and I would love to try it one day. If this system is instead used to penalize the player, it becomes similar to the Real Time Rhythm System (when the player does nothing, the game penalizes them by moving the enemies).

These are all the systems I can think of that could be used in roguelike games. I think I need to organize them into something like a 2D matrix of the different parameters that can be mixed.

Bye Bye


IRDC NYC 2016 – Day 1

Hello everyone,
I went to the International Roguelike Developer Conference (IRDC 2016). It was fun; you can watch it on Twitch, and they are going to stream tomorrow too (link).

Here is the recap on the talks:

Markov Text Generation:
Caves of Qud’s text/books/tomes/realms/lore are generated using Markov chain models. They use Markov chains to generate paragraphs (3 to 6 sentences) and books (4 to 8 paragraphs). Some other people use two-direction Markov chains instead of one-direction ones. For book titles, they used template filling like Tracery, but this technique is kind of limited, so they replaced it with a length-limited sentence generated from a Markov chain model, then shaved the unwanted words off the beginning and end of the sentence. For hidden secrets in the books, he generates all the secrets first, then adds them to the Markov chain model. He also recommended checking out “The Annals of the Parrigues” by Emily Short.
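To make the idea concrete, here is a minimal order-1 Markov chain text generator (my own sketch, not Caves of Qud’s actual code): build a table of which word follows which in the corpus, then walk it by picking a random successor at each step.

```python
import random

def build_model(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model, start, length, rng=random):
    """Walk the chain from `start`, picking a random successor each step."""
    sentence = [start]
    for _ in range(length - 1):
        successors = model.get(sentence[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        sentence.append(rng.choice(successors))
    return " ".join(sentence)

model = build_model("the tome of the parrot and the annals of the realm")
print(generate(model, "the", 6))
```

The length limit and the trimming of unwanted words from both ends, as described above, would be applied on top of a generator like this.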

Writing Better Code:
It was about tricks and hints for writing more readable code: for example, use code review, read a book (Writing Solid Code by Steve Maguire), read other people’s code, etc. Examples of things that make code better: always write the constant on the left side of a condition, add a comment beside every “else” clarifying which “if” condition it relates to, and unroll a big long “if” condition into multiple short ones.
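Here is a small illustration of two of those tips (my own example, not from the talk): the constant-on-the-left habit, and unrolling one long condition into short named checks.

```python
from dataclasses import dataclass

@dataclass
class Player:
    alive: bool
    keys: int

@dataclass
class Door:
    locked: bool
    broken: bool

def can_open(player, door):
    # Constant on the left: in languages like C, writing `0 == x`
    # turns an accidental `=` into a compile error; in Python it is
    # mostly a carried-over habit.
    if 0 == player.keys and door.locked:
        return False
    # Unrolled, short checks instead of one long condition like
    # `if player.alive and not door.broken and (...)`:
    if not player.alive:
        return False
    if door.broken:
        return False
    return True
```

Each early return names one reason the action fails, which is easier to scan and to comment than a single compound boolean.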

Applications of Dijkstra Maps in Roguelikes:
This talk was about using Dijkstra maps instead of the A* algorithm. A* (check the “A* Pathfinding” video by Sebastian Lague) is widely used, easy to implement, callable on demand, and efficient when points are close, but you need to recalculate the path when something changes, and it gets expensive when used by tons of NPCs. A Dijkstra map is computed over the whole map and can be shared by all actors, but it is more expensive than a single A* query and there are no off-the-shelf implementations. Dijkstra produces a map of numbers where each number equals the number of steps needed to reach the goal. To find the path, just move from the NPC’s location towards the goal in the direction of decreasing numbers until you reach it. Dijkstra maps can be used for auto-exploring, procedural generation of rivers, keeping ranged enemies at optimal range, finding the path towards the mouse location, etc.
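On a uniform-cost grid, a Dijkstra map reduces to a breadth-first flood fill from the goal. A minimal sketch (my own, with `#` as a wall):

```python
from collections import deque

def dijkstra_map(grid, goal):
    """Map every reachable (x, y) cell to its step count from the goal."""
    dist = {goal: 0}
    frontier = deque([goal])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[ny])
                    and grid[ny][nx] != '#' and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                frontier.append((nx, ny))
    return dist

def step_towards_goal(dist, pos):
    """Move to the neighbor with the lowest number, as the talk describes."""
    x, y = pos
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    options = [n for n in neighbors if n in dist]
    return min(options, key=lambda n: dist[n], default=pos)

grid = ["....",
        ".##.",
        "...."]
dist = dijkstra_map(grid, goal=(3, 2))
print(step_towards_goal(dist, (0, 0)))
```

The map is built once and every NPC just reads its own cell’s neighbors, which is why it scales to many actors better than per-NPC A* queries.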

The speaker is the creator of the game Barony. Now he is working on Space Punk, an open-world metroidvania where the enemy AI is the challenge. He also used the ChaiScript language to allow modding.

Procedural Dialect Generation:
That was a super interesting talk about a game called Ultima Ratio Regum. The game generates history for NPCs, religions, names, and dialects. You have to try it if you are into exploring stories and understanding different cultures.

Quick Talks:
The day ended with a couple of very fast talks from the audience themselves about different topics, for example using BFXR, using Queneau assembly instead of Markov chains to generate text, and a procedural rap generator.

Tomorrow I will write a wrap up on the whole day too.

Bye Bye


GECCO16: General Video Game Level Generation

Hello everyone,
This is my talk in GECCO16 for our paper “General Video Game Level Generation“.

GVG-LG Framework GECCO.001

Hello everyone, I am Ahmed Khalifa, a PhD student at NYU Tandon School of Engineering, and today I am going to present my paper “General Video Game Level Generation”.

GVG-LG Framework GECCO.002

We want to design a framework that allows people to develop general level generators for any game described in the Video Game Description Language.

GVG-LG Framework GECCO.003

So what is level generation? It is using computer algorithms to design levels for games. People in the industry have been using it for a very long time. At the beginning the reason was technical limitations, but now it is used to provide more content to the user, and it enables new types of games. The problem with level generation is that all the well-known generators are designed for a specific game, so they depend on domain knowledge to make the levels as good as possible. Doing that for every new game seems a little exhausting, so we wanted one single algorithm that can be used on multiple games. In order to do that, we need a way to describe the game to the generators.

GVG-LG Framework GECCO.004

That’s why we are using the Video Game Description Language (VGDL). It’s a scripting language that makes it super easy to create different games. The script shown is for a game that is kind of like the dungeon system from The Legend of Zelda. VGDL is simple: you need to define four sections, SpriteSet, LevelMapping, InteractionSet, and TerminationSet. The SpriteSet defines game objects such as the goal, the InteractionSet defines the interactions between different objects, and the TerminationSet defines how the game should end.

GVG-LG Framework GECCO.005

Here are some extra examples showing that VGDL is able to describe different games, such as Freeway, where the user tries to reach the exit by crossing the street.

GVG-LG Framework GECCO.006

Snowman is a puzzle game where the player tries to build a snowman at the end position by placing the pieces correctly. It’s inspired by A Good Snowman Is Hard To Build, a famous indie game about creating snowmen, giving them names, and hugging them.

GVG-LG Framework GECCO.007

Let’s get back to the framework. We wanted to make it as simple as possible: each user writes a level generator that receives a VGDL description of the game and has to return a generated level.

GVG-LG Framework GECCO.008

The game description object gives the user all the information in the VGDL file and also tags the sprites according to their interactions. For example, if an object blocks the player’s movement and doesn’t react with anything else, it is considered solid; if it is the player, it is tagged as the avatar; if it can kill the player, it’s a harmful object; and collectible objects are things that get destroyed when the avatar touches them.

GVG-LG Framework GECCO.009

We designed three different level generators and shipped them with the framework. The first one is the random generator, which places all the game objects at different locations on the map.

GVG-LG Framework GECCO.010

The second one is the constructive generator, which tries to use the available information to design a good level. It starts by calculating percentages for each object type, then it draws a level layout, places the avatar at a random location, places harmful objects at positions proportional to their distance from the avatar, and then adds goal objects. After that it goes through a post-processing step where it adds more goal objects if needed to satisfy the goal constraints.
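The constructive steps can be sketched roughly as follows. This is my own reconstruction of the placement logic described above, not the shipped generator; the tile characters (`A` avatar, `h` harmful, `g` goal) are my own convention.

```python
import random

def constructive_level(width, height, n_harmful, rng=random):
    """Place avatar randomly, harmful objects far from it, then a goal."""
    level = [['.'] * width for _ in range(height)]
    cells = [(x, y) for y in range(height) for x in range(width)]
    rng.shuffle(cells)
    ax, ay = cells.pop()
    level[ay][ax] = 'A'                       # avatar at a random cell
    # harmful objects prefer cells far from the avatar (Manhattan distance)
    cells.sort(key=lambda c: abs(c[0] - ax) + abs(c[1] - ay), reverse=True)
    for _ in range(n_harmful):
        hx, hy = cells.pop(0)
        level[hy][hx] = 'h'
    gx, gy = cells.pop()                      # one goal object
    level[gy][gx] = 'g'
    return ["".join(row) for row in level]

for row in constructive_level(8, 4, 3):
    print(row)
```

The real generator also derives object counts from the game description and runs a post-processing pass for the goal constraints, which this sketch omits.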

GVG-LG Framework GECCO.011

The third one is a search-based generator that uses the feasible-infeasible two-population (FI-2Pop) genetic algorithm to generate the level. We seeded the population with levels generated by the constructive algorithm to ensure the GA finds better levels. We use one-point crossover, where two levels are swapped around a certain point, and mutations such as create, which creates a random object at a random tile, and delete, which deletes all objects at a random position.
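The variation operators can be sketched like this (a hedged illustration of one-point crossover and the create/delete mutations described above; the list-of-strings level encoding and function names are my own, not the framework’s API):

```python
import random

def flatten(level):
    return [ch for row in level for ch in row]

def unflatten(tiles, width):
    return ["".join(tiles[i:i + width]) for i in range(0, len(tiles), width)]

def crossover(a, b, rng=random):
    """Swap everything after one random point between two same-sized levels."""
    fa, fb = flatten(a), flatten(b)
    point = rng.randrange(1, len(fa))
    w = len(a[0])
    return (unflatten(fa[:point] + fb[point:], w),
            unflatten(fb[:point] + fa[point:], w))

def mutate_create(level, tile, rng=random):
    """Create a random object at a random tile."""
    tiles = flatten(level)
    tiles[rng.randrange(len(tiles))] = tile
    return unflatten(tiles, len(level[0]))

def mutate_delete(level, rng=random):
    """Delete whatever occupies a random position."""
    tiles = flatten(level)
    tiles[rng.randrange(len(tiles))] = '.'
    return unflatten(tiles, len(level[0]))
```

In FI-2Pop, offspring produced this way are sorted into the feasible population (all constraints satisfied) or the infeasible one, which evolves toward satisfying the constraints.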

GVG-LG Framework GECCO.012

We used several constraints to ensure the playability of the levels. The first one is cover percentage, so we never generate overpopulated or very sparse levels.

GVG-LG Framework GECCO.013

We make sure there is exactly one avatar, because if it’s not there, it’s impossible to play the game.

GVG-LG Framework GECCO.014

The sprite number constraint makes sure there is at least one sprite of each type, so the levels are playable. For example, if there is no key, you can’t win the level.

GVG-LG Framework GECCO.015

The goal constraint makes sure that no goal conditions are satisfied at the start, because if they are, you win the moment the game starts.

GVG-LG Framework GECCO.016

The death constraint makes sure that the player won’t die during the first 40 game steps if they do nothing, because if the player dies at the beginning, the game is unplayable; humans are not as fast as expected.

GVG-LG Framework GECCO.017

The last ones are the win constraint and the solution length constraint, which make sure the levels are winnable but not a straightforward win: the best agent must take a minimum number of steps before finishing.
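A few of these constraints can be checked statically from the tile grid alone. Here is a compact sketch of the cover-percentage, single-avatar, and sprite-number checks (my own names and thresholds, not the framework’s API; the win/death constraints need agent simulation and are omitted):

```python
def check_constraints(level, required_sprites, min_cover=0.05, max_cover=0.6):
    """Return a dict of constraint name -> whether the level satisfies it."""
    tiles = [ch for row in level for ch in row]
    cover = sum(ch != '.' for ch in tiles) / len(tiles)
    return {
        "cover": min_cover <= cover <= max_cover,   # not overpopulated or sparse
        "one_avatar": tiles.count('A') == 1,        # exactly one avatar
        "sprites": all(s in tiles for s in required_sprites),
    }

level = ["A..g",
         ".h..",
         "...."]
print(check_constraints(level, required_sprites=['g', 'h']))
```

In the FI-2Pop setup, a level failing any of these checks would go to the infeasible population rather than being discarded.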

GVG-LG Framework GECCO.018

These are all the constraints. As for the fitness, it consists of two parts. The first part is the score difference fitness, inspired by Nielsen’s work. He found that the difference in score between the best agent (Explorer) and the worst agent (DoNothing) is big in well-designed levels compared to randomly generated ones. We used the difference between our best agent and worst agent for that.

GVG-LG Framework GECCO.019

The second fitness is the unique rule fitness: by analyzing all the games, we noticed that agents playing well-designed levels trigger more unique interaction rules than in randomly designed levels.

GVG-LG Framework GECCO.020

In order to verify that, we did a user study where the user plays two levels from a certain game and chooses which is better. We used three different games, excluding puzzle games, as no agent so far excels at playing puzzle games.

GVG-LG Framework GECCO.021

The results show that users preferred search-based levels over constructive ones, which was expected, while they preferred random over constructive without huge significance.

GVG-LG Framework GECCO.022

One of the main reasons, we think, is that people can’t differentiate between constructive and random levels, but the random generator always ensures all sprites are there, which makes its levels playable, while the constructive generator, although its levels look better, sometimes misses a key or something.

GVG-LG Framework GECCO.023

We ran a level generation competition one week ago. We had four competitors (one based on cellular automata, three based on search-based approaches).

GVG-LG Framework GECCO24.001

The cellular automata entry stood out because it didn’t put tons of objects in the level, which made its levels somewhat playable, while all the others overloaded the levels with tons of objects, so players died instantly.

GVG-LG Framework GECCO.025

Our future work is enhancing the search-based generator to get better-designed levels, enhancing the framework based on competitors’ feedback, rerunning the competition, and encouraging more people to participate.

GVG-LG Framework GECCO.026

That’s it, Thanks.

That’s it for now.
Bye, Bye


IJCAI16 Talk: Modifying MCTS for Humanlike Video Game Playing

Hello everyone,
It’s been ages since my last post 😀 On Thursday, July 14th, I gave a talk about my paper “Modifying MCTS for Humanlike Video Game Playing” (with Aaron Isaksen, Andy Nealen, and Julian Togelius) at IJCAI16. Thanks to Aaron, who captured a video of my talk. Here it is:

We also made a poster for the conference, which looked amazing. Here is the poster:

Humanlike MCTS Poster.001

If the video is not clear, I am posting the slides here with my comments:

Humanlike MCTS New.001

Hello everyone, I am Ahmed Khalifa, a PhD student at NYU Tandon School of Engineering. Today I am going to talk about my paper “Modifying MCTS for Humanlike Video Game Playing”.

Humanlike MCTS New.002

We are trying to modify the Monte Carlo Tree Search algorithm to play different games like a human player. We are using the General Video Game Playing framework to test our algorithm.

Humanlike MCTS New.003

Why do we need that? One important reason is to create humanlike NPCs: one reason people play multiplayer games is the lack of realistic NPCs to play with or against. Another is evaluating levels and games for AI-assisted tools. For example, if you gave these two levels to a human, they would pick the one on the left, as it is playable by a human, while the one on the right is super difficult; it might even be impossible to play.

Humanlike MCTS New.004

Before we start, what is General Video Game Playing, whose framework we are using? It is a competition for general intelligence where competitors create agents that play different unseen games. These games are written in a scripting language called the Video Game Description Language (VGDL). Every 40 msec, the agent must select one of the available actions: up, down, left, right, the use button, or NIL (do nothing). A gameplay trace is a sequence of actions.

Humanlike MCTS New.005

Here are two videos that show the difference between a human player and an MCTS agent. On the left, you can see that humans tend to head toward their goal and only act when necessary (for example, attacking only when a monster is near), while the MCTS agent on the right is stuck in the upper left corner, moving into walls and attacking the air.

Humanlike MCTS New.006

By analyzing the play traces of both human players and the MCTS agent on different games, we found that humans tend to repeat the same action multiple times before changing it. The first graph shows that humans repeat the same action twice about 50% of the time. Humans also use more NILs and tend to repeat them more during the play trace. The third graph shows that MCTS changes actions far more often than humans, who keep their current action about 70% of the time.
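As a concrete illustration of the kind of per-trace statistics involved (a minimal sketch; the action names and the exact metrics used in the paper may differ):

```python
def repetition_stats(trace):
    """Given a play trace (a list of actions), return the fraction of
    steps that repeat the previous action and the fraction of NILs."""
    repeat_frac = sum(a == b for a, b in zip(trace, trace[1:])) / (len(trace) - 1)
    nil_frac = trace.count('NIL') / len(trace)
    return repeat_frac, nil_frac
```

For example, the trace `['LEFT', 'LEFT', 'NIL', 'NIL', 'UP']` repeats its previous action in 2 of 4 transitions (0.5) and uses NIL in 2 of 5 steps (0.4).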

Humanlike MCTS New.007

In order to achieve similar play traces, we need to modify MCTS. These are the four main steps of MCTS: selection, expansion, simulation, and backpropagation.
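For reference, here is a minimal Python sketch of these four steps in vanilla MCTS. The state object (with `is_terminal`, `advance`, and `score` methods) is a hypothetical stand-in for the GVG-AI forward model, not the framework’s actual API:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state = state        # game state at this node
        self.parent = parent
        self.action = action      # action that led here from the parent
        self.children = []
        self.visits = 0
        self.total_value = 0.0

def ucb(node, c=math.sqrt(2)):
    # Exploitation (average value) plus an exploration bonus
    exploit = node.total_value / node.visits
    explore = c * math.sqrt(math.log(node.parent.visits) / node.visits)
    return exploit + explore

def rollout(state, actions, depth=10):
    # Simulate random play for a few steps and report the final score
    for _ in range(depth):
        if state.is_terminal():
            break
        state = state.advance(random.choice(actions))
    return state.score()

def mcts(root_state, actions, iterations=200):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while every action has been tried
        while node.children and len(node.children) == len(actions):
            node = max(node.children, key=ucb)
        # 2. Expansion: add one untried child (unless terminal)
        if not node.state.is_terminal():
            tried = {c.action for c in node.children}
            action = random.choice([a for a in actions if a not in tried])
            child = Node(node.state.advance(action), parent=node, action=action)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout from the new node
        value = rollout(node.state, actions)
        # 4. Backpropagation: update statistics back up to the root
        while node is not None:
            node.visits += 1
            node.total_value += value
            node = node.parent
    # Play the action of the most visited root child
    return max(root.children, key=lambda c: c.visits).action
```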

Humanlike MCTS New.008

We tried modifying each step on its own, but none of the modifications changed the distribution much, except for those to the selection step.

Humanlike MCTS New.009

The selection step uses the UCB equation to select the next node.

Humanlike MCTS New.010

The UCB equation consists of two terms: an exploitation term and an exploration term. The exploitation term biases the selection toward the best node, while the exploration term pushes MCTS to explore less-visited nodes.

Humanlike MCTS New.011

We modified the equation by adding a new bonus term that consists of three parts:
Human Modeling
Trivial Branch Pruning
Map Exploration Bonus
We also replaced the exploitation term with a MixMax term.
We will explain all these terms in detail in the upcoming slides.
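As a rough sketch of how these modifications could combine in the selection score, consider the following. The weights, attribute names (`best_value`, `state_changed`, `position`, `position_counts`), and exact functional forms are illustrative assumptions, not the paper’s exact formulation:

```python
import math

def mixmax(best, average, q=0.25):
    """MixMax exploitation: blend the node's best value with its average."""
    return q * best + (1 - q) * average

def modified_ucb(node, root, c=math.sqrt(2),
                 w_human=1.0, w_prune=1.0, w_explore=1.0):
    """Selection score = MixMax exploitation + UCB exploration + bonus terms."""
    # MixMax replaces the plain average in the exploitation term
    exploit = mixmax(node.best_value, node.total_value / node.visits)
    explore = c * math.sqrt(math.log(node.parent.visits) / node.visits)

    # 1. Human modeling: reward actions humans favor (repeats and NILs)
    human = 0.0
    if node.action == node.parent.action or node.action == 'NIL':
        human = w_human

    # 2. Trivial branch pruning: penalize actions that change nothing
    #    (e.g. walking into a wall or attacking the air)
    prune = 0.0 if node.state_changed else -w_prune

    # 3. Map exploration: reward map positions the agent has rarely visited
    visits_here = root.position_counts.get(node.position, 0)
    explore_map = w_explore / (1 + visits_here)

    return exploit + explore + human + prune + explore_map
```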

Humanlike MCTS New.012

We added a bonus value that shifts the MCTS action distribution toward the human distribution. As you can see from the video, the agent now tends to repeat the same action and use more NILs, with a lower action-to-new-action frequency. But it is still stupid: stuck in the corner, attacking the air, moving into walls.

Humanlike MCTS New.013

That’s why we added the second term, which avoids selecting useless nodes (like attacking walls or moving into walls). As we can see, the agent stopped attacking the air, and whenever it gets stuck at a wall, it changes its action or applies NIL. But it is still stuck in the corner.

Humanlike MCTS New.014

So we added a bonus term that rewards nodes at less-visited positions on the map. As we can see, the agent now goes everywhere and explores. But it is a coward: it avoids attacking the spiders.

Humanlike MCTS New.015

So we replaced the exploitation term with a MixMax term, which uses a mixture of the node’s best value and its average value instead of the average alone. As we can see, the agent becomes courageous, moving toward the enemies and killing them.

Humanlike MCTS New.016

Analyzing the play traces after all these modifications, our BoT (Bag of Tricks) algorithm is more similar to humans than vanilla MCTS in action repetition and NIL repetition, and it makes fewer action-to-new-action changes.

Humanlike MCTS New.017

In order to verify these results, we conducted a user study. In the study, each user watched two videos and had to specify which was more likely to be a human, and why.

Humanlike MCTS New.018

In the study, our BoT algorithm was judged more human than MCTS but still not as humanlike as actual humans, except in PacMan, where it deceived 60% of the viewers.

Humanlike MCTS New.019

When we analyzed the human comments, we found that the main cues for recognizing the agent were the same issues we had tried to solve: jitteriness (changing directions very quickly), useless moves (attacking walls, moving into them), no long-term planning (getting stuck in a corner), too-fast reaction times, and overconfidence (killing all the enemies like an overachiever).

Humanlike MCTS New.020

Thanks for listening.

That’s everything for now.


Super-W-Hack! Incubator Pitch

Hello everyone,
Today Gabriella and I gave a talk about Super-W-Hack! for the incubator program. I thought it would be nice to share the talk with you.

Hello everyone, I am Ahmed and this is Gabriella. We are PhD students at the Game Innovation Lab here at NYU. We are going to talk today about our game Super-W-Hack!

Super-W-Hack! is a roguelike game with retro aesthetics, a tribute to the roguelike genre. Our game takes procedural content generation (PCG) to the next level: we use it to generate everything in the game.

Levels are procedurally generated: names, layouts, and enemy distributions.

Player weapons are too: weapon patterns, names, and sounds.

Even bosses are procedurally generated.

All the main game features are done. But since our research is in PCG, and we know how amazing generated content can be, we want to embrace it even more: generating the main character (his backstory and why he is going to the dungeon), and generating more varied weapons like teleporters, mines, and bombs. The game also still needs music. And since we rely on so much PCG, we need lots of testing to make sure it all works correctly.

We plan to finish the game and release it by the end of the year on desktop stores such as Steam and Itch.io, as desktop has always been the land of roguelikes and their biggest fanbase. Since the game has simple controls and small decisions to make at each step, we believe it will also do well on mobile markets such as the App Store and Google Play. We are going to send our game to all the major events, such as IGF, IndieCade, and PAX. We believe that with the help of the incubator we will be able to reach all these goals.

Thanks everyone for listening. Any questions?

Level Layout
(Then we played this video in the background while taking questions)

We got a couple of questions and concerns about the play-session length, how we ensure the game is playable and interesting, and what a roguelike is.

I just wanted to share my talk about the game. Here is the link (www.akhalifa.com/blog/super-w-hack/) to the alpha version if anyone wants to try it. In its current state it is still a little hard to understand at first, but once you get it, it is intuitive and fun to play.



Video Game Description Language (VGDL) & General Video Game Playing Framework (GVG-AI)

This post is a presentation I gave a couple of weeks ago at the Game Innovation Lab (GIL). It is supposed to help people at the lab understand VGDL and the GVG-AI framework. I think that if we want VGDL to evolve, more people should know about it and use it, and that won’t happen without showing people the power of the GVG-AI framework and VGDL. There is a lot of development happening to improve the framework and the language and make them more accessible (such as creating an interactive visual editor with computer assistance). The following paragraphs are my slides with a description of each slide.



General Video Game Playing (GVGP) is about creating an AI agent that can play many different video games (not one specific game) efficiently. The hard constraint on these agents is that they must respond within about 40 msec.


The Video Game Description Language (VGDL) was invented to help prototype different games quickly for GVGP and to provide a common framework to work with. VGDL was initially written in Python but has since been ported to Java (to be faster). The language is tree-structured, like XML files. It supports grid physics like old arcade games, it is human-readable, and it makes it easy to prototype lots of different games (PacMan, Frogs, Space Invaders, Sokoban, etc.).


VGDL’s current drawbacks: it has no visual editor, it lacks good online documentation, it has limited physics (no continuous physics, so you can’t prototype games like Cut the Rope), and it has limited events (all game events are restricted to collision events between objects). People are working on these drawbacks now to make VGDL more accessible. Check the current draft of the documentation; maybe you could help improve the writing (link).


In order to write a game using VGDL, you need to provide two files: a Game Description File and a Level Description File. The Game Description File describes the whole game (What are the game sprites? What is the termination condition? How do sprites interact? etc.). The Level Description File describes the level layout (the sprite arrangement).


Let’s take an example. This game is called WaterGame. It’s a super simple game where you have to move the avatar toward the exit. The problem is that the path is always blocked by water, and if the avatar touches water, it dies. To get past the water, the avatar has to push boxes over it (a box destroys the water; you can think of it as floating).


This is the Game Description File for that game. It is divided into four sections (SpriteSet, TerminationSet, InteractionSet, and LevelMapping).

  • SpriteSet: describes all the game sprites (their type and their rendered image). For example, “avatar” is defined with type “MovingAvatar”, which means it can move in all four directions, and it has an image called “avatar” (all images must be in the sprites folder).
  • TerminationSet: describes how the game ends. For example, “SpriteCounter stype=avatar limit=0 win=False” means that if the number of “avatar” sprites reaches zero, you lose the game.
  • InteractionSet: describes the result of collisions between sprites. For example, the first interaction says that if the “avatar” collides with a “wall”, the “avatar” should “stepBack”, which undoes its last movement.
  • LevelMapping: just helps the VGDL engine parse the Level Description File; the engine replaces each character with the corresponding sprites.
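To make the four sections concrete, here is a reconstruction of what the WaterGame description might look like. This is a hedged sketch based on the slide, not the exact file: the pushing rule (“box avatar > bounceForward”), the goal sprite and its win condition, the image names, and the level-mapping characters are my assumptions, and “w” (wall) and “A” (avatar) are the engine’s conventional default symbols:

```
BasicGame
    SpriteSet
        water  > Immovable img=water
        box    > Passive img=box
        goal   > Immovable img=goal
        avatar > MovingAvatar img=avatar
    LevelMapping
        0 > water
        1 > box
        2 > goal
    InteractionSet
        avatar wall  > stepBack
        box avatar   > bounceForward
        water box    > killSprite
        avatar water > killSprite
        goal avatar  > killSprite
    TerminationSet
        SpriteCounter stype=goal limit=0 win=True
        SpriteCounter stype=avatar limit=0 win=False
```

A matching Level Description File would then be a character grid, with each character replaced by its mapped sprite (the avatar pushes the box rightward over the water to reach the goal):

```
wwwwwwww
wA 1 02w
wwwwwwww
```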