Hierarchical Deep Reinforcement Learning based Dynamic RAN Slicing for 5G V2X

Abstract

Radio Access Network (RAN) slicing is receiving increasing attention as a resource allocation technique for satisfying diverse Quality-of-Service (QoS) requirements in 5G vehicular networks. Hierarchical Reinforcement Learning (HRL), such as hierarchical-DQN (h-DQN), is a promising slice management approach that decomposes performance constraints into a subroutine hierarchy and uses Deep Reinforcement Learning (DRL) at different temporal scales for online learning of an optimal bandwidth allocation policy. In this paper, we tackle the RAN slicing problem in 5G vehicle-to-everything (V2X) communications and present an h-DQN based Soft Slicing (HSS) method for model-free opportunistic slice management. HSS consists of a multi-controller learning framework in which a high-level meta-controller takes the state as input to determine a subgoal, and a low-level controller decides on the action based on the given subgoal and the state. We compare the performance of HSS with model-free and model-based Reinforcement Learning (RL) methods in terms of Age of Information (AoI), service delay, and network throughput. Our results show that the proposed scheme improves sample efficiency and outperforms traditional and RL-based V2X RAN slice management methods in terms of network utility maximization.
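The two-level h-DQN control loop described in the abstract can be sketched as follows. This is a hedged illustration only, not the paper's implementation: tabular epsilon-greedy Q-learning stands in for the deep Q-networks of HSS, the toy "environment" (two slice queues, one bandwidth unit per step) and the intrinsic/extrinsic reward shaping are illustrative assumptions, and all names (`TabularQ`, `run_episode`, the subgoal/period parameters) are hypothetical.

```python
import random

# Illustrative sketch of an h-DQN-style two-level loop, NOT the HSS method
# from the paper: a meta-controller picks a subgoal at a coarse timescale,
# and a low-level controller acts conditioned on (state, subgoal).

class TabularQ:
    """Minimal epsilon-greedy Q-learner; a tabular stand-in for a DQN."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = {}                      # state -> list of action values
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def _row(self, s):
        return self.q.setdefault(s, [0.0] * self.n_actions)

    def act(self, s):
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        row = self._row(s)
        return row.index(max(row))

    def update(self, s, a, r, s2):
        row = self._row(s)
        target = r + self.gamma * max(self._row(s2))
        row[a] += self.alpha * (target - row[a])


def run_episode(meta, ctrl, horizon=20, subgoal_period=5):
    """One episode: the meta-controller periodically selects a subgoal
    (here: which slice to prioritise), and the controller allocates one
    bandwidth unit per step conditioned on (state, subgoal)."""
    queues = [random.randint(3, 6), random.randint(3, 6)]  # per-slice backlog
    total_extrinsic = 0.0
    for _ in range(horizon // subgoal_period):
        meta_state = tuple(queues)
        subgoal = meta.act(meta_state)          # slice index to prioritise
        extrinsic = 0.0
        for _ in range(subgoal_period):
            state = (tuple(queues), subgoal)
            action = ctrl.act(state)            # slice granted the bandwidth unit
            served = 1 if queues[action] > 0 else 0
            queues[action] = max(0, queues[action] - 1)
            # intrinsic reward: did the controller serve the subgoal slice?
            intrinsic = float(served and action == subgoal)
            ctrl.update(state, action, intrinsic, (tuple(queues), subgoal))
            extrinsic += served                 # crude network-utility proxy
        meta.update(meta_state, subgoal, extrinsic, tuple(queues))
        total_extrinsic += extrinsic
    return total_extrinsic


random.seed(0)
meta = TabularQ(n_actions=2)   # subgoal: prioritise slice 0 or slice 1
ctrl = TabularQ(n_actions=2)   # action: grant the bandwidth unit to slice 0 or 1
returns = [run_episode(meta, ctrl) for _ in range(200)]
```

The key design point mirrored here is the temporal decomposition: the meta-controller learns from the slower extrinsic (network-level) reward, while the controller learns from a dense intrinsic reward tied to the current subgoal.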

@INPROCEEDINGS{9685588,
  author={Kaytaz, Umuralp and Sivrikaya, Fikret and Albayrak, Sahin},
  booktitle={2021 IEEE Global Communications Conference (GLOBECOM)}, 
  title={Hierarchical Deep Reinforcement Learning based Dynamic RAN Slicing for 5G V2X}, 
  year={2021},
  volume={},
  number={},
  pages={1-6},
  doi={10.1109/GLOBECOM46510.2021.9685588}}
Authors:
Umuralp Kaytaz, Fikret Sivrikaya, Sahin Albayrak
Category:
Conference Paper
Year:
2021
Location:
IEEE Global Communications Conference (GLOBECOM) 2021
Link: