A standard format for offline reinforcement learning datasets, with popular reference datasets and related utilities


Minari is a Python library for conducting research in offline reinforcement learning, akin to an offline version of Gymnasium or an offline RL version of HuggingFace's datasets library.

The documentation website is at minari.farama.org. We also have a public Discord server (used for Q&A and to coordinate development work) that you can join here: https://discord.gg/bnJ6kubTg6.

Installation

To install Minari from PyPI:

pip install minari

This installs the minimum required dependencies; Minari will prompt you to install additional dependencies as needed for your use case. To install all dependencies at once, use:

pip install "minari[all]"

If you'd like to test or contribute to Minari, please install the project from source:

git clone https://github.com/Farama-Foundation/Minari.git
cd Minari
pip install -e ".[all]"

Command Line API

To check available remote datasets:

minari list remote

To download a dataset:

minari download D4RL/door/human-v2

To check available local datasets:

minari list local

To show the details of a dataset:

minari show D4RL/door/human-v2

For the list of commands:

minari --help
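
Each of these commands has a Python counterpart. Below is a minimal sketch of the programmatic equivalents, assuming the list_remote_datasets, download_dataset, and list_local_datasets functions of the current Minari API:

import minari

# Map of dataset id -> metadata for every dataset hosted on the remote server.
remote_datasets = minari.list_remote_datasets()

# Download a dataset into the local Minari data directory.
minari.download_dataset("D4RL/door/human-v2")

# Map of dataset id -> metadata for every dataset stored locally.
local_datasets = minari.list_local_datasets()
print(local_datasets.keys())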

Basic Usage

Reading a Dataset

import minari

dataset = minari.load_dataset("D4RL/door/human-v2")

for episode_data in dataset.iterate_episodes():
    observations = episode_data.observations
    actions = episode_data.actions
    rewards = episode_data.rewards
    terminations = episode_data.terminations
    truncations = episode_data.truncations
    infos = episode_data.infos
    ...
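
Beyond ordered iteration, episodes can also be drawn at random, which is the usual access pattern when training offline RL agents. A short sketch, assuming the sample_episodes and recover_environment methods of the current Minari API:

# Sample a random batch of episodes for an offline training step.
episodes = dataset.sample_episodes(n_episodes=10)

# Recover a Gymnasium environment matching the one used to collect
# the data, e.g. for evaluating the learned policy.
eval_env = dataset.recover_environment()

print(dataset.total_episodes, dataset.total_steps)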

Writing a Dataset

import minari
import gymnasium as gym
from minari import DataCollector


env = gym.make('FrozenLake-v1')
env = DataCollector(env)

for _ in range(100):
    env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # <- use your policy here
        obs, rew, terminated, truncated, info = env.step(action)
        done = terminated or truncated

dataset = env.create_dataset("frozenlake/test-v0")
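
The new dataset is saved to the local Minari data directory and can be reloaded by id like any other dataset; a short check (total_episodes is assumed to report the number of recorded episodes):

loaded = minari.load_dataset("frozenlake/test-v0")
print(loaded.total_episodes)  # 100 episodes collected above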

For other examples, see Basic Usage. For a complete tutorial on how to create new datasets using Minari, see our Pointmaze D4RL Dataset tutorial, which re-creates the Maze2D datasets from D4RL.

Training Libraries Integrating Minari

Citation

If you use Minari, please consider citing it:

@software{minari,
	author = {Younis, Omar G. and Perez-Vicente, Rodrigo and Balis, John U. and Dudley, Will and Davey, Alex and Terry, Jordan K},
	doi = {10.5281/zenodo.13767625},
	month = sep,
	publisher = {Zenodo},
	title = {Minari},
	url = {https://doi.org/10.5281/zenodo.13767625},
	version = {0.5.0},
	year = 2024
}

Minari is a shortening of Minarai, the Japanese word for "learning by observation".