This repository contains a collection of modules implementing Multi-Objective Reinforcement Learning (MORL) algorithms specifically designed for multi-objective power grid topology control. The framework builds on morl_baselines for the MORL components and on Grid2Op for the power system environment.
- Installation
- Modules
  - ols_DOL_exe.py
  - MO_PPO.py
  - GridRewards.py
  - CustomGymEnv.py
  - EnvSetup.py
  - MO_PPO_train_utils.py
  - MORL_analysis_utils.py
  - Grid2op_eval.py
  - env_start_up.py
- Usage
- Contributing
- License
The modules are divided into source modules and scripts. The source modules are separated into environment, agent, wrapper, and utils.
ols_DOL_exe.py: Starts and runs the experiments, including DOL and MO-PPO training and evaluation.
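The actual experiment driver is not reproduced here; the following is a minimal, hypothetical sketch of how a DOL-style loop can iterate over weight vectors and train/evaluate one scalarized policy per weight. All function names are illustrative assumptions, and the training/evaluation step is stubbed with random returns.

```python
# Minimal, hypothetical sketch of a DOL-style experiment loop.
# train_and_evaluate is a stand-in for MO-PPO training + evaluation and simply
# returns a random vector return for illustration.
import numpy as np

def train_and_evaluate(weights: np.ndarray, seed: int = 0) -> np.ndarray:
    """Stand-in: would train a scalarized MO-PPO policy and return its vector return."""
    rng = np.random.default_rng(seed)
    return rng.random(weights.shape[0])

def run_dol_loop(num_objectives: int = 3, iterations: int = 5):
    # Start from the extrema of the weight simplex (one objective at a time).
    weight_queue = [w for w in np.eye(num_objectives)]
    solutions = []  # (weight vector, vector return) pairs

    for i in range(iterations):
        if not weight_queue:
            break
        weights = weight_queue.pop(0)
        vec_return = train_and_evaluate(weights, seed=i)
        solutions.append((weights, vec_return))
        # A full OLS/DOL implementation would derive new "corner" weights from
        # the returns collected so far and push the most promising ones here.
    return solutions

if __name__ == "__main__":
    for w, r in run_dol_loop():
        print("weights", w, "-> vector return", r)
```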
MO_PPO.py: Contains the implementation of the Multi-Objective Proximal Policy Optimization (MO-PPO) algorithm, based on the morl_baselines package.
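A core idea in linearly scalarized MO-PPO is projecting the per-objective (vector) advantages onto a weight vector before they enter the usual PPO loss. The snippet below is illustrative only; it does not reflect the actual classes or signatures in MO_PPO.py, and the objective names in the comment are examples.

```python
# Illustrative only: linear scalarization of vector advantages, as used by
# weight-conditioned MO-PPO variants. Not the MO_PPO.py API.
import numpy as np

def scalarize_advantages(vector_advantages: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Project per-objective advantages of shape (T, n_objectives) onto a weight vector."""
    weights = weights / weights.sum()      # keep the weights on the simplex
    return vector_advantages @ weights     # shape (T,), fed to the PPO surrogate loss

# Example: 4 timesteps, 3 objectives (e.g. line loading, losses, switching cost)
adv = np.array([[0.2, -0.1,  0.4],
                [0.0,  0.3,  0.1],
                [0.5,  0.2, -0.2],
                [0.1,  0.1,  0.0]])
w = np.array([0.5, 0.3, 0.2])
print(scalarize_advantages(adv, w))
```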
GridRewards.py: Contains the implementation for calculating grid-based rewards (an illustrative reward sketch follows the class list below).
- GridRewards: Calculates rewards based on a grid of metrics.
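For reference, custom rewards in Grid2Op are typically written by subclassing grid2op.Reward.BaseReward. The example below is a hedged, minimal sketch of that pattern (a loading-based reward) and is not the actual GridRewards implementation.

```python
# Hedged sketch of a Grid2Op-style custom reward (not the actual GridRewards class):
# it rewards low maximum line loading (rho). Requires the grid2op package.
from grid2op.Reward import BaseReward

class MaxRhoReward(BaseReward):
    """Illustrative reward: 1 - max line loading, clipped to [0, 1]."""

    def __init__(self):
        super().__init__()
        self.reward_min = 0.0
        self.reward_max = 1.0

    def __call__(self, action, env, has_error, is_done, is_illegal, is_ambiguous):
        if has_error or is_illegal or is_ambiguous:
            return self.reward_min
        # rho is the per-line loading (power flow / thermal limit) of the current state
        rho_max = float(env.get_obs().rho.max())
        return max(self.reward_min, self.reward_max - rho_max)
```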
CustomGymEnv.py: Defines a custom Gym environment for MORL experiments.
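A common way to expose multiple objectives through a Gym interface is to wrap the underlying environment and assemble a vector reward at every step. The wrapper below is a minimal sketch assuming the gymnasium API and Grid2Op's `other_rewards` mechanism (extra rewards reported in `info["rewards"]`); it is not the actual CustomGymEnv.py code.

```python
# Minimal sketch: turn a scalar-reward Grid2Op gym env into a multi-objective one.
# Assumes the gymnasium 5-tuple step API and that extra rewards were registered
# via Grid2Op's `other_rewards` and therefore appear in info["rewards"].
import numpy as np
import gymnasium as gym

class VectorRewardWrapper(gym.Wrapper):
    """Replace the scalar reward with a vector of per-objective rewards."""

    def __init__(self, env, reward_keys):
        super().__init__(env)
        self.reward_keys = list(reward_keys)  # names registered via other_rewards

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # First objective: the main reward; remaining objectives from info["rewards"].
        extras = [info.get("rewards", {}).get(key, 0.0) for key in self.reward_keys]
        vec_reward = np.array([reward, *extras], dtype=np.float32)
        return obs, vec_reward, terminated, truncated, info
```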
EnvSetup.py: Utility for setting up the custom Gym environment.
MO_PPO_train_utils.py: Contains utility functions for training MO-PPO.
MORL_analysis_utils.py: Contains utility functions for analyzing MORL experiments, as well as the analysis and plotting scripts for the case studies.
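A typical analysis step for MORL experiments is filtering the evaluated vector returns down to the Pareto-optimal (non-dominated) set before plotting. The helper below is a small illustrative example, not necessarily part of MORL_analysis_utils.py, and assumes all objectives are maximized.

```python
# Illustrative Pareto-front filter for evaluated vector returns (maximization).
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return the non-dominated rows of a (n_points, n_objectives) array."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]

returns = np.array([[10.0, 2.0], [8.0, 5.0], [7.0, 4.0], [9.0, 1.0]])
print(pareto_front(returns))  # [7, 4] and [9, 1] are dominated and dropped
```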
Grid2op_eval.py: Contains the evaluation script for the Grid2Op environment.
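Evaluation in a raw Grid2Op environment amounts to rolling an agent out over an episode and recording per-step information. The snippet below is a hedged sketch: the environment name is the small sandbox grid shipped with Grid2Op, and a do-nothing action stands in for the trained MO-PPO policy used by Grid2op_eval.py.

```python
# Hedged evaluation sketch: roll out a "do nothing" agent in a raw Grid2Op env
# and count how many timesteps the grid survives.
import grid2op

env = grid2op.make("l2rpn_case14_sandbox")  # small test grid bundled with Grid2Op
obs = env.reset()
done, total_steps, cumulative_reward = False, 0, 0.0

while not done:
    action = env.action_space({})            # empty dict = "do nothing" action
    obs, reward, done, info = env.step(action)
    total_steps += 1
    cumulative_reward += reward

print(f"episode survived {total_steps} steps, scalar return {cumulative_reward:.2f}")
```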
env_start_up.py: Sets up the Grid2Op environment for the power grid topology control experiments.
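Setting up a Grid2Op environment for multi-objective experiments typically means choosing a grid scenario, a main reward, and additional rewards, one per extra objective. The snippet below is a minimal sketch of that pattern; the environment name and reward classes are illustrative assumptions rather than the exact configuration used by env_start_up.py.

```python
# Minimal setup sketch: one main reward plus an extra per-objective reward that
# Grid2Op reports in info["rewards"] at every step.
import grid2op
from grid2op.Reward import L2RPNReward, EconomicReward

env = grid2op.make(
    "l2rpn_case14_sandbox",                      # illustrative scenario choice
    reward_class=L2RPNReward,                    # main (scalar) reward
    other_rewards={"economic": EconomicReward},  # extra objective
)
obs = env.reset()
print(obs.rho.shape)  # per-line loading, a typical input for topology control
```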
Contributions are welcome. Please create an issue or submit a pull request for any changes.