iLLM-TSC: Integration reinforcement learning and large language model for traffic signal control policy improvement
Paper | Simulation

(Demo video: Case3.mp4)
We propose a framework that uses an LLM to support RL models. The framework refines RL decisions based on real-world context and provides reasonable actions when the RL agent makes an erroneous decision (see the sketch after the case list below).
- Case 1: The LLM judges the action taken by the RL agent to be unreasonable, gives a reasonable explanation, and recommends an action.
- Case 2: The LLM observes that the movement chosen by the RL agent is not the one with the highest current mean occupancy, but judges it reasonable nonetheless, and then gives an explanation and a recommendation.
- Case 3: An ambulance needs to pass through the intersection, but the RL agent does not take into account that the ambulance should be prioritized. The LLM modifies the RL agent's action so that the ambulance is prioritized through the intersection.
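At a high level, the LLM acts as a reviewer of the RL agent's proposed action. Below is a minimal sketch of that loop; the `Verdict` structure and the `ask_llm_for_verdict` / `build_state_description` helpers are illustrative assumptions, not the repository's actual API:

```python
# Minimal sketch of the RL + LLM decision loop. The helper functions and the
# Verdict structure are illustrative assumptions, not the repository's API.
from dataclasses import dataclass

@dataclass
class Verdict:
    reasonable: bool          # does the LLM accept the RL action?
    recommended_action: int   # replacement action if not reasonable
    explanation: str          # the LLM's natural-language justification

def run_episode(env, rl_agent, ask_llm_for_verdict, build_state_description):
    """Run one episode in which the LLM reviews every RL action."""
    obs, done = env.reset(), False
    while not done:
        # 1. The RL agent proposes a signal-phase action from the observation.
        rl_action, _ = rl_agent.predict(obs, deterministic=True)

        # 2. The LLM reviews the action against a textual description of the
        #    intersection (queue lengths, special vehicles such as ambulances).
        verdict = ask_llm_for_verdict(build_state_description(obs), rl_action)

        # 3. Keep the RL action if it is judged reasonable; otherwise use the
        #    LLM's recommendation (e.g., prioritizing an ambulance in Case 3).
        action = rl_action if verdict.reasonable else verdict.recommended_action
        obs, reward, done, info = env.step(action)
```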
Install TransSimHub
The simulation environment we use is TransSimHub, which is based on SUMO and supports TSC, V2X, and UAM simulation. More information is available in the docs.
You can install TransSimHub by cloning the GitHub repository. Follow these steps:
```bash
git clone https://github.com/Traffic-Alpha/TransSimHub.git
cd TransSimHub
pip install -e .
```
After the installation is complete, you can use the following Python command to check if TransSimHub is installed and view its version:
```python
import tshub
print(tshub.__version__)
```
You can install TSC-HARLA by cloning the GitHub repository. Follow these steps:
```bash
git clone https://github.com/Traffic-Alpha/TSC-HARLA
cd TSC-HARLA
pip install -r requirements.txt
```
After completing the installation steps above, you can use this program locally.
The first thing you need to do is train an RL model. You can do so with the following commands:
```bash
cd TSC-HARLA
python sb3_ppo.py
```
The training results are shown in the figure, and the trained model weights have been uploaded to models.
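For orientation, sb3_ppo.py presumably follows the standard Stable-Baselines3 training pattern. A minimal sketch is below; `make_tsc_env` is a hypothetical placeholder for the repository's actual TransSimHub/SUMO environment setup, and the hyperparameters are assumptions:

```python
# Minimal sketch of SB3 PPO training. `make_tsc_env` is a placeholder for the
# repository's actual TransSimHub/SUMO environment construction.
from stable_baselines3 import PPO

env = make_tsc_env()                        # hypothetical env factory
model = PPO("MlpPolicy", env, verbose=1)    # policy/hyperparameters are assumptions
model.learn(total_timesteps=100_000)        # adjust the budget to your scenario
model.save("models/ppo_tsc")                # weight path is an assumption
```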
The performance of the RL model can be tested with the following command:
```bash
python eval_rl_agent.py
```
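The evaluation script presumably loads the saved weights and rolls out episodes. A hedged sketch, assuming a classic Gym-style step signature and reusing the hypothetical `make_tsc_env` and weight path from above:

```python
# Sketch of evaluating a trained policy. `make_tsc_env` and the weight path
# are assumptions; the repository's script may differ.
from stable_baselines3 import PPO

env = make_tsc_env()
model = PPO.load("models/ppo_tsc")
obs, done, episode_return = env.reset(), False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)  # greedy policy at eval time
    obs, reward, done, info = env.step(action)
    episode_return += reward
print(f"episode return: {episode_return:.2f}")
```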
Before you can use the LLM, you need your own API key and must fill it in utils/config.yaml:
```yaml
OPENAI_PROXY:
OPENAI_API_KEY:
```
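If you want to read those keys in your own code, here is a minimal PyYAML sketch (the repository's actual loading code may differ):

```python
# Read the OpenAI credentials from utils/config.yaml using PyYAML.
import yaml

with open("utils/config.yaml") as f:
    cfg = yaml.safe_load(f)

api_key = cfg["OPENAI_API_KEY"]   # required
proxy = cfg.get("OPENAI_PROXY")   # optional proxy endpoint, may be empty
```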
The entire framework can then be run with the following command:
```bash
python rl_llm_tsc.py
```
Evaluation Rule: To ensure a fair evaluation and comparison among different models, make sure you use the same LLM evaluation model (we use GPT-4) for all the models you want to evaluate. Using a different scoring model, or API updates on the provider's side, may lead to different results.
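In practice this means pinning the model name in every scoring call. A sketch using the openai Python package's v1-style client (the prompt shown is a placeholder):

```python
# Pin the scoring model explicitly so results stay comparable across runs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",  # keep this identifier fixed for all evaluated models
    messages=[{"role": "user", "content": "...evaluation prompt..."}],
)
print(response.choices[0].message.content)
```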
All assets and code in this repository are under the Apache 2.0 license unless specified otherwise. The language data is under CC BY-NC-SA 4.0. Other datasets (including nuScenes) inherit their own distribution licenses. Please consider citing our project if it helps your research.
```bibtex
@article{pang2024illm,
  title={iLLM-TSC: Integration reinforcement learning and large language model for traffic signal control policy improvement},
  author={Pang, Aoyu and Wang, Maonan and Pun, Man-On and Chen, Chung Shue and Xiong, Xi},
  journal={arXiv preprint arXiv:2407.06025},
  year={2024}
}

@article{wang2024llm,
  title={LLM-Assisted Light: Leveraging Large Language Model Capabilities for Human-Mimetic Traffic Signal Control in Complex Urban Environments},
  author={Wang, Maonan and Pang, Aoyu and Kan, Yuheng and Pun, Man-On and Chen, Chung Shue and Huang, Bo},
  journal={arXiv preprint arXiv:2403.08337},
  year={2024}
}
```
iLLM-TSC is only an initial exploration of combining RL and LLMs; further work will be released in TSC-LLM. Stars are welcome!
- Yufei Teng: Thanks for editing the video.
- Thank you to everyone who pays attention to our work. We hope it helps you.