
Marco Skocaj, Lorenzo M. Amorosa, Giorgio Ghinamo, Giuliano Muratore, Davide Micheli, Flavio Zabini, et al. (2022). Cellular network capacity and coverage enhancement with MDT data and Deep Reinforcement Learning. COMPUTER COMMUNICATIONS, 195, 403-415 [10.1016/j.comcom.2022.09.005].

Cellular network capacity and coverage enhancement with MDT data and Deep Reinforcement Learning

Marco Skocaj; Lorenzo M. Amorosa; Flavio Zabini; Roberto Verdone
2022

Abstract

Recent years have witnessed a remarkable increase in the availability of data and computing resources in communication networks. This has contributed to the rise of data-driven over model-driven algorithms for network automation. This paper investigates a Minimization of Drive Tests (MDT)-driven Deep Reinforcement Learning (DRL) algorithm to optimize coverage and capacity by tuning antenna tilts on a cluster of cells from TIM's cellular network. We jointly utilize MDT data, electromagnetic simulations, and network Key Performance Indicators (KPIs) to define a simulated network environment for the training of a Deep Q-Network (DQN) agent. Several modifications have been introduced to the classical DQN formulation to improve the agent's sample efficiency, stability, and performance. In particular, a custom exploration policy is designed to introduce soft constraints at training time. Results show that the proposed algorithm outperforms baseline approaches such as DQN and best-first search in terms of long-term reward and sample efficiency. Our results indicate that MDT-driven approaches constitute a valuable tool for autonomous coverage and capacity optimization of mobile radio networks.
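The custom exploration policy mentioned in the abstract can be illustrated with a minimal sketch: an epsilon-greedy rule that explores only within a feasible tilt range, which is one simple way to impose soft constraints at training time. The action set, tilt limits, and all function names below are illustrative assumptions, not details taken from the paper.

```python
import random

# Illustrative action space: down-tilt, keep, up-tilt (in degrees).
ACTIONS = [-1, 0, +1]
# Assumed feasible electrical tilt range (hypothetical values).
TILT_MIN, TILT_MAX = 0, 12

def feasible_actions(tilt):
    """Return the actions that keep the antenna tilt inside the allowed range."""
    return [a for a in ACTIONS if TILT_MIN <= tilt + a <= TILT_MAX]

def select_action(q_values, tilt, epsilon, rng=random):
    """Epsilon-greedy selection restricted to the feasible subset of actions."""
    allowed = feasible_actions(tilt)
    if rng.random() < epsilon:
        # Explore, but only among constraint-satisfying actions.
        return rng.choice(allowed)
    # Exploit: pick the feasible action with the highest Q-value.
    return max(allowed, key=lambda a: q_values[a])
```

Restricting both the random and the greedy branch to the feasible subset keeps the agent from ever proposing an out-of-range tilt during training, while leaving the underlying Q-learning update untouched.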
Marco Skocaj; Lorenzo M. Amorosa; Giorgio Ghinamo; Giuliano Muratore; Davide Micheli; Flavio Zabini; Roberto Verdone
Files in this item:

File: cellular network post print.pdf
Open Access since 25/09/2024
Type: Postprint
License: Creative Commons
Size: 5.7 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/903195
Citations
  • Scopus: 13
  • Web of Science: 8