
Network Slice Resource Allocation Algorithm using DRL and Distillation


CodeAlpha7/DRL-Resource-Allocation


RARE

Source Codes for: Resource Allocation for network slicing using Reinforcement Learning

To reset state information for a new simulation run, reinitialize your own saved_buffer; start with an empty buffer to obtain fresh results.
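The buffer format depends on the implementation, but if saved_buffer is a pickled Python object, resetting it could look like this minimal sketch (the file name and the empty-list representation are assumptions, not confirmed by the repo):

```python
import pickle
from pathlib import Path

def reset_buffer(path="saved_buffer", empty=None):
    """Overwrite a pickled replay buffer with an empty one.

    The file name and the empty-buffer representation (a list here)
    are assumptions; match them to your actual buffer format.
    """
    if empty is None:
        empty = []
    with open(path, "wb") as f:
        pickle.dump(empty, f)
    return Path(path)
```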

Setup

This setup is for Linux and macOS. For Windows, this link might be useful.

Requirements

  • Conda

Create a virtual environment with conda requirements

conda create --name cs8803 python=3.9.18

Activate the virtual environment

conda activate cs8803

and to deactivate:

conda deactivate

Install additional PIP dependencies

pip install -r requirements.txt

Usage

Configure the desired parameters in the src/parameters.py file. When running the test/visualization tools, these parameters must match the ones used during training; otherwise the results may be incorrect.
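Since a mismatch silently produces misleading results, a quick consistency check against the saved parameters.txt (described below under Results) can help. This sketch assumes the file uses simple `key = value` lines, which is an assumption; adjust the parsing to the actual file format:

```python
def parse_parameters(text):
    """Parse 'key = value' lines into a dict of strings.

    Assumes one parameter per line; blank lines and lines
    without '=' are skipped.
    """
    params = {}
    for line in text.splitlines():
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

def diff_parameters(saved, current):
    """Return {key: (saved_value, current_value)} for differing keys."""
    keys = set(saved) | set(current)
    return {k: (saved.get(k), current.get(k))
            for k in keys if saved.get(k) != current.get(k)}
```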

Train slices

python src/train_all_slices.py [td3 | ddpg]

e.g.

python src/train_all_slices.py td3

Run python src/train_all_slices.py -h for more options.

You can also add a custom experiment name with --name <name>. e.g.

python src/train_all_slices.py td3 --name my_experiment

Results

Once the training is done, the results will be saved in models/ and will have the following naming convention:

<algo>_<timestamp> if the name is not specified, or

<algo>_<timestamp>_<name> if the name is specified.

Inside this folder you will also find a parameters.txt file that lists the parameters used.
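When scripting experiment sweeps, the naming scheme above can be reproduced programmatically. A minimal sketch; the timestamp format is inferred from the example directory names shown later in this README (e.g. td3_2023-11-03_06:44:55_traces-320epochs), not from the repo's code:

```python
from datetime import datetime

def model_dir_name(algo, name=None, now=None):
    """Build the models/ folder name: <algo>_<timestamp>[_<name>].

    Timestamp format is inferred from example directories such as
    'td3_2023-11-03_06:44:55_traces-320epochs'.
    """
    now = now or datetime.now()
    stamp = now.strftime("%Y-%m-%d_%H:%M:%S")
    return f"{algo}_{stamp}" + (f"_{name}" if name else "")
```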

Obtain model allocation and utility

python src/Run_ADMM_GAP.py [td3 | ddpg] [model-directory]

e.g.

python src/Run_ADMM_GAP.py td3 '/Users/guestx/Desktop/8803-SMR/models/td3_2023-11-03_06:44:55_traces-320epochs'

This will compute the allocation and utility of the selected model alongside the ADMM and static algorithms. The results will be printed out and stored in the Run_ADMM_GAP_trace.txt file in the model directory.

Visualize the training trace

To visualize the training trace, run:

python src/viz_training_trace.py [model-directory]

e.g.

python src/viz_training_trace.py '/Users/guestx/Desktop/8803-SMR/models/td3_2023-11-03_06:44:55_traces-320epochs'

This command will generate plots regarding actor loss, critic loss, fps, and reward mean in the plots/ directory of the model.

Visualize the allocation

python src/viz_allocation.py [model-directory]

This will generate the allocation plot which will be located in the plots/ directory of the model.

Run Test Agent Evaluation

This script evaluates the system under a random assignment of the dual and auxiliary variables. It saves a CDF of the sum utility, can print out the runtimes of the models, and collects data for training the GBDT model when --collection is used.

python src/tst_agent.py --td3 [td3-model-dir] --ddpg [ddpg-model-dir] --gbdt [gbdt-model-dir] [--collection]

e.g.

python src/tst_agent.py --td3 models/td3_best_rho4 --ddpg models/ddpg_good_200 --gbdt models/gbdt_model
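The script produces the CDF plot itself; for reference, an empirical CDF over sum-utility samples can be computed as below. This is plain NumPy, independent of the repo's plotting code:

```python
import numpy as np

def empirical_cdf(samples):
    """Return (x, F) where F[i] is the fraction of samples <= x[i]."""
    x = np.sort(np.asarray(samples, dtype=float))
    f = np.arange(1, len(x) + 1) / len(x)
    return x, f
```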

Run Test Agent Visualization

This script creates three dynamic graphs of the system under a random assignment of the dual and auxiliary variables. The graphs show the sum utility at that iteration, CDF of sum utility, and the user allocations.

python src/tst_agent_viz.py --td3 [td3-model-dir] --ddpg [ddpg-model-dir] --gbdt [gbdt-model-dir]

e.g.

python src/tst_agent_viz.py --td3 models/td3_best_rho4 --ddpg models/ddpg_good_200 --gbdt models/gbdt_model

Distillation dataset creation

To create a dataset for training the GBDT model with our environment, run tst_agent.py with the --collection option:

python src/tst_agent.py --ddpg [ddpg-model-dir] --collection

e.g.

python src/tst_agent.py --ddpg models/ddpg_good_200 --collection
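The collected dataset pairs environment states with the teacher policy's actions. As a sketch of the shape such a file might take (the column layout is an assumption; the split point edited into gbdt.py must match it):

```python
import numpy as np

def build_distillation_rows(states, actions):
    """Concatenate each state with the teacher action into one row.

    states:  (N, state_dim) array
    actions: (N, action_dim) array
    Returns an (N, state_dim + action_dim) array ready to save
    with np.savetxt(..., delimiter=",").
    """
    states = np.asarray(states, dtype=float)
    actions = np.asarray(actions, dtype=float)
    return np.hstack([states, actions])
```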

Distillation training

To train a GBDT model, use gbdt.py. Manually edit the path to your dataset file, the row to drop, and where to split between state and action space for X and y.

python gbdt.py
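The actual training logic lives in gbdt.py; below is a hedged sketch of the general distillation step it describes — fitting a gradient-boosted model to imitate the DRL policy on (state, action) rows — using scikit-learn, which is not necessarily the library gbdt.py uses:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

def train_gbdt_student(data, state_dim):
    """Fit a GBDT student on rows of [state | action].

    data: (N, state_dim + action_dim) array. The split point between
    X (state) and y (action) mirrors the manual split in gbdt.py.
    """
    X, y = data[:, :state_dim], data[:, state_dim:]
    model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=50))
    model.fit(X, y)
    return model
```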

Distillation with edge computing using GBDT

For distillation usage and instructions, please refer to the dedicated README located at Distillation/Readme.md.
