This repository is the official implementation of *TreeDQN: Learning variable selection rules for combinatorial optimization problems*.
```bash
# pull docker image
docker pull idono/rlbnb:release

# run docker container
docker run -dit --gpus all --shm-size=10g --name rlbnb idono/rlbnb:release /bin/bash

# enter docker container
docker exec -it rlbnb /bin/bash

# work with TreeDQN
git clone https://github.com/dmitrySorokin/treedqn.git
conda activate bb

# work with the baseline rl2branch
git clone https://github.com/lascavana/rl2branch.git
conda activate rl2branch
```
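The training and evaluation commands below take a task config via `--config-name`. Assuming the cloned repository keeps these configs in a `configs/` directory (implied by the `<cfg from configs>` placeholders below), the available tasks can be listed like this:

```bash
cd treedqn
ls configs/
```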
To train the RL agent, run the following commands:
```bash
# generate validation data
python gen_instances.py --config-name <cfg from configs>

# run training
python main.py --config-name <cfg from configs>
```
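For example, a full RL training cycle for a single task could look like this (the config name `cauctions` is hypothetical; substitute an actual file from `configs/`):

```bash
# hypothetical config name; pick one that exists in configs/
python gen_instances.py --config-name cauctions
python main.py --config-name cauctions
```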
To train the IL agent, run the following commands:
```bash
# generate training data
python gen_imitation_data.py --config-name <cfg from configs>

# run training
python il_train.py --config-name <cfg from configs>
```
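The IL pipeline follows the same pattern, e.g. (again with a hypothetical config name):

```bash
python gen_imitation_data.py --config-name cauctions
python il_train.py --config-name cauctions
```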
To evaluate the agent, run:
```bash
python eval.py --config-name <cfg from configs> agent.name={agent_name}
```
- `agent_name` is one of `strong`, `dqn`, `il`, `random`
- results will be saved to `results/{task_name}/{agent_name}.csv`
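For example, to evaluate the pretrained TreeDQN agent (the `cauctions` config name is hypothetical; use an actual config from `configs/`):

```bash
python eval.py --config-name cauctions agent.name=dqn
# results are written to results/cauctions/dqn.csv
```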
Pretrained weights for the IL, TreeDQN, and REINFORCE agents are provided in the `models/` directory.
To plot results, run:
```bash
python plot.py results/<task name>
```
Geometric mean of tree sizes (lower is better):
Model | Comb.Auct | Set Cover | Max.Ind.Set | Facility Loc. | Mult.Knap
---|---|---|---|---|---
Strong Branching | 48 | 43 | 40 | 294 | 700
IL | 56 | 53 | 42 | 323 | 670
TreeDQN | 58 | 56 | 42 | 324 | 290
FMCTS | 65 | 76 | 96 | 499 | 299
tmdp+DFS | 93 | 204 | 88 | 521 | 308
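The geometric means above can be recomputed from the per-instance evaluation CSVs. A minimal sketch, assuming pandas and numpy are available in the container and that the CSV stores tree sizes in a `num_nodes` column (the path and column name are assumptions; check the actual CSV header):

```bash
python -c "
import numpy as np, pandas as pd
# hypothetical path and column name; adjust to your task and CSV header
df = pd.read_csv('results/cauctions/dqn.csv')
print(np.exp(np.mean(np.log(df['num_nodes']))))
"
```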
Submit a GitHub issue if you have any questions or want to contribute.