
QECO

A QoE-Oriented Computation Offloading Algorithm based on Deep Reinforcement Learning for Mobile Edge Computing


This repository contains the Python code for reproducing the decentralized QECO (QoE-Oriented Computation Offloading) algorithm, designed for Mobile Edge Computing (MEC) systems.

Citation

I. Rahmati, H. Shah-Mansouri, and A. Movaghar, "QECO: A QoE-Oriented Computation Offloading Algorithm based on Deep Reinforcement Learning for Mobile Edge Computing," arXiv preprint arXiv:2311.02525, 2024.

@article{rahmati2024qeco,
  title={QECO: A QoE-Oriented Computation Offloading Algorithm based on Deep Reinforcement Learning for Mobile Edge Computing},
  author={Rahmati, Iman and Shah-Mansouri, Hamed and Movaghar, Ali},
  journal={arXiv preprint arXiv:2311.02525},
  url={https://arxiv.org/abs/2311.02525},
  year={2024}
}

Overview

QECO is designed to balance and prioritize QoE factors according to the requirements of individual mobile devices (MDs), while accounting for the dynamic workloads at the edge nodes (ENs). The algorithm captures the dynamics of the MEC environment by integrating the Dueling Double Deep Q-Network (D3QN) model with Long Short-Term Memory (LSTM) networks, and it addresses the QoE maximization problem by efficiently utilizing the resources of both MDs and ENs.

  • D3QN: By integrating double Q-learning with the dueling network architecture, D3QN mitigates the overestimation bias in action-value predictions and separates the value of a state from the relative advantage of each action. This improves the accuracy of the model's predictions and provides the foundation for better offloading strategies (a minimal numerical sketch of both ideas follows this list).

  • LSTM: Incorporating LSTM networks allows the model to continuously estimate the dynamic workloads at the edge servers. This is crucial for coping with limited global information and for adapting to an uncertain MEC environment with multiple MDs and ENs. By predicting the future workload of the edge servers, MDs can adjust their offloading strategies to achieve higher QoE (a small illustrative workload-prediction model is sketched below the architecture figure).
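
To make these two ideas concrete, the following minimal NumPy sketch shows the dueling aggregation Q(s, a) = V(s) + A(s, a) - mean_a A(s, a) and the double-DQN target, in which the online network selects the next action and the target network evaluates it. This is an illustration only, not taken from the repository's TensorFlow 1.x implementation, and the toy numbers are arbitrary.

   import numpy as np

   def dueling_q_values(value, advantages):
       # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
       # Subtracting the mean advantage keeps V and A identifiable.
       return value + advantages - advantages.mean(axis=-1, keepdims=True)

   def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
       # Double DQN: the online network chooses the next action, while the
       # target network evaluates it, which reduces overestimation bias.
       best_next_action = np.argmax(q_online_next, axis=-1)
       evaluated = q_target_next[np.arange(len(best_next_action)), best_next_action]
       return reward + gamma * (1.0 - done) * evaluated

   # Toy batch: 2 states, 3 candidate offloading actions (numbers are arbitrary).
   value = np.array([[1.0], [0.5]])                              # V(s)
   advantages = np.array([[0.2, -0.1, 0.4], [0.0, 0.3, -0.2]])   # A(s, a)
   q_online_next = dueling_q_values(value, advantages)
   q_target_next = dueling_q_values(value + 0.1, advantages * 0.9)

   reward = np.array([1.0, 0.0])
   done = np.array([0.0, 1.0])       # second transition is terminal
   print(double_dqn_target(reward, done, q_online_next, q_target_next))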

Figure: D3QN architecture.
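
The workload-estimation idea can be illustrated with a small tf.keras model that reads a short history of edge-node queue lengths and predicts the next slot's workload. This is a sketch under assumed settings: the layer size, history length, number of edge nodes, and the use of tf.keras are illustrative choices, not the repository's actual TensorFlow 1.x code.

   import numpy as np
   import tensorflow as tf

   HISTORY_LEN = 10   # time slots of observed workload history (assumed)
   NUM_ENS = 3        # number of edge nodes (assumed)

   # LSTM that maps a workload history to a one-step-ahead workload estimate.
   model = tf.keras.Sequential([
       tf.keras.layers.LSTM(32, input_shape=(HISTORY_LEN, NUM_ENS)),
       tf.keras.layers.Dense(NUM_ENS),
   ])
   model.compile(optimizer="adam", loss="mse")

   # Synthetic data standing in for observed edge-node queue lengths.
   histories = np.random.rand(256, HISTORY_LEN, NUM_ENS).astype(np.float32)
   next_loads = np.random.rand(256, NUM_ENS).astype(np.float32)
   model.fit(histories, next_loads, epochs=1, verbose=0)

   # The predicted workload can then inform each device's offloading decision.
   print(model.predict(histories[:1], verbose=0))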

Contents

  • main.py: The main code, including the training and testing procedures, implemented using TensorFlow 1.x.
  • MEC_Env.py: Contains the code for the mobile edge computing environment.
  • D3QN.py: The QECO network model, implemented using TensorFlow 1.x.
  • Config.py: Configuration file for MEC entities and neural network setup.
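
As a rough orientation, these files typically interact through a standard DRL training loop, as in the self-contained sketch below. All class and method names here (ToyMecEnv, ToyAgent, choose_action, store_transition, learn) are hypothetical stand-ins chosen only to illustrate the flow; they are not the actual interfaces of MEC_Env.py or D3QN.py.

   import numpy as np

   class ToyMecEnv:
       # Toy environment: state = [task size, local queue, edge queue];
       # the action is where the task is executed (locally or on an edge node).
       def __init__(self, num_actions=3):
           self.num_actions = num_actions
       def reset(self):
           return np.random.rand(3)
       def step(self, action):
           next_state = np.random.rand(3)
           reward = -next_state.sum()   # e.g. negative delay/energy as a QoE proxy
           done = np.random.rand() < 0.1
           return next_state, reward, done

   class ToyAgent:
       # Random policy standing in for the D3QN/LSTM agent.
       def __init__(self, num_actions=3):
           self.num_actions = num_actions
       def choose_action(self, state):
           return np.random.randint(self.num_actions)
       def store_transition(self, *transition):
           pass                         # a replay buffer would live here
       def learn(self):
           pass                         # the D3QN update would live here

   env, agent = ToyMecEnv(), ToyAgent()
   for episode in range(3):
       state, done = env.reset(), False
       while not done:
           action = agent.choose_action(state)
           next_state, reward, done = env.step(action)
           agent.store_transition(state, action, reward, next_state, done)
           agent.learn()
           state = next_state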

Quick Start

  1. Clone the repository:
   git clone https://github.com/ImanRHT/QECO.git
   cd QECO
  2. Configure the MEC environment in Config.py.

  3. Install the packages listed in requirements.txt so the project runs correctly (a typical pip command is shown after this list).

  4. Run the training script:

   python main.py
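
If the required packages are not yet installed, a typical way to install them (assuming pip and the repository's requirements.txt) is:

   pip install -r requirements.txt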

Convergence

Figure: Performance chart showing the convergence behavior of QECO.

Future Directions

  • Addressing single-agent non-stationarity issues by leveraging multi-agent DRL.
  • Accelerating the learning of optimal offloading policies by taking advantage of Federated Learning techniques in the training process. This will allow MDs to collectively contribute to improving the offloading model and enable continuous learning when new MDs join the network.
  • Addressing partial observability by formulating the problem as a decentralized partially observable Markov decision process (Dec-POMDP).
  • Extending the Task Models by considering interdependencies among tasks. This can be achieved by incorporating a Task Call Graph Representation.
  • Implementation of the D3QN algorithm using PyTorch, focusing on efficient parallelization and enhanced model stability.

Contributing

We welcome contributions! Here’s how you can get involved:

  1. Fork the repository: Create your own copy of the project.
  2. Clone your fork:
  git clone https://github.com/<your-username>/<repo-name>.git
  cd <repo-name>
  3. Create a new branch: Name your branch to reflect the changes you're making.
  git checkout -b feature/<add-future-direction-support>
  4. Commit your changes: Write clear and concise commit messages.
  git add .
  git commit -m "<add-future-direction-support>"
  5. Push your branch:
  git push origin feature/<add-future-direction-support>
  6. Open a pull request: Navigate to the repository and submit your pull request. Provide a detailed description of your work.

For bug reports or feature requests, please open a GitHub issue in this repository.

About Authors

  • Iman Rahmati: Research Assistant in the Computer Science and Engineering Department at Sharif University of Technology (SUT).
  • Hamed Shah-Mansouri: Assistant Professor in the Electrical Engineering Department at SUT.
  • Ali Movaghar: Professor in the Computer Science and Engineering Department at SUT.
