FlappyBirRL

This repository contains the code and the report for the project of the Autonomous and Adaptive Systems exam of the Master's degree in Artificial Intelligence at the University of Bologna.

In this project the A2C reinforcement learning algorithm is used to learn to play the mobile game Flappy Bird.

More details can be found in the report.
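As a rough illustration of the idea behind A2C: an actor (policy) and a critic (value estimator) are trained together, with the critic's estimate used as a baseline for the policy gradient. The sketch below is simplified and illustrative only; it is not the training code of this repository, which lives in the training folder and uses TensorFlow.

```python
import math

def a2c_losses(log_prob, value, ret):
    """Per-step A2C losses (illustrative sketch, not the repo's implementation).

    log_prob: log-probability of the action taken
    value:    critic's estimate of the return from this state
    ret:      observed (discounted) return
    """
    # Advantage: how much better the observed return was than the critic predicted
    advantage = ret - value
    # Actor (policy) loss: increase log-probability of actions with positive advantage
    # (the advantage is treated as a constant with respect to the policy parameters)
    actor_loss = -log_prob * advantage
    # Critic (value) loss: squared error between the estimate and the observed return
    critic_loss = advantage ** 2
    return actor_loss, critic_loss
```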

Game

This is part of a game played by an agent trained with the A2C algorithm:

game gif

Structure

The repository is organized as follows:

  • agents: agents and networks
  • assets: resources for the README.md
  • envs: customized version of the original environment
  • evaluation: functions to evaluate pretrained models and compare them using a boxplot
  • training: A2C algorithm, data, plots and utility related to training
  • constants.py: constants of the project
  • play.py: run a game with an agent
  • plot_utils.py: functions to generate plots
  • report.pdf: final report of the project
  • train.py: train an agent
  • utils.py: utility functions

Requirements

Python 3.7 is required to run the project correctly.

OpenAI Gym Environment

The environment has been partially developed by reusing the implementation provided by Talendar. To install the library, follow these steps:

  1. Clone the repository in the root of this project folder:
git clone https://github.com/Talendar/flappy-bird-gym.git
  2. Copy the folder envs of the current repository to flappy-bird-gym/flappy_bird_gym, replacing the folder with the same name. This replaces the original implementation of the environment with the new customized one.
  3. Install the library flappy-bird-gym directly from the local folder:
pip install -e flappy-bird-gym

Now flappy-bird-gym should be recognized as a package; this can be checked from Python:

import flappy_bird_gym
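A slightly fuller sanity check can also create an environment. Note that the environment id "FlappyBird-v0" is an assumption based on the upstream flappy-bird-gym library and may differ in the customized version.

```python
def check_flappy_install():
    """Return a short status string describing whether flappy-bird-gym works.

    The env id "FlappyBird-v0" is assumed from the upstream library.
    """
    try:
        import flappy_bird_gym
        env = flappy_bird_gym.make("FlappyBird-v0")  # create the environment
        env.reset()                                  # get an initial observation
        env.close()
        return "ok"
    except ImportError:
        return "flappy-bird-gym not installed"

print(check_flappy_install())
```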

Packages

Because of the flappy-bird-gym requirements and TensorFlow compatibility, it is strongly suggested to use these packages with the corresponding versions:

  • numpy==1.20
  • tensorflow==2.8

Run

To train a specific version of an agent, use the script train.py:

py train.py <agent> <num_episodes> <num_processes> <discount_rate> <learning_rate>

where:

  • agent is the type of agent (base version, CNN version, or base version trained with entropy regularization)
  • num_episodes is the number of episodes used to train the agent
  • num_processes is the number of parallel processes used during training
  • discount_rate is the discount factor used to compute the expected return
  • learning_rate is the step size used by the optimizer to minimize the loss function
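The role of discount_rate can be made concrete with a small sketch: each step's expected return is the reward at that step plus the discounted return of the following steps. This is a generic illustration, not code from the repository.

```python
def discounted_returns(rewards, discount_rate):
    """Compute G_t = r_t + discount_rate * G_{t+1} for every step of an episode."""
    returns = []
    g = 0.0
    # Walk the episode backwards, accumulating the discounted return
    for r in reversed(rewards):
        g = r + discount_rate * g
        returns.append(g)
    return list(reversed(returns))

print(discounted_returns([1.0, 1.0, 1.0], 0.9))
```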

To play a game with a specific agent the script play.py can be used as follows:

py play.py <agent>

where agent is the name of a pretrained agent.
