
Spampinato L., Tarozzi A., Buratti C., Marini R. (2023). DRL Path Planning for UAV-Aided V2X Networks: Comparing Discrete to Continuous Action Spaces. Institute of Electrical and Electronics Engineers Inc. [10.1109/ICASSP49357.2023.10095817].

DRL Path Planning for UAV-Aided V2X Networks: Comparing Discrete to Continuous Action Spaces

Spampinato L.;Tarozzi A.;Buratti C.;Marini R.
2023

Abstract

Future 6G vehicular networks are expected to rely on Unmanned Aerial Vehicles (UAVs) acting as flying Base Stations, namely Unmanned Aerial Base Stations (UABSs), to provide a wide range of services that current terrestrial networks cannot support. Vehicles may exploit strong links with the UABS, enabling applications such as advanced driving and extended sensing. In this context, a vehicular user is considered satisfied, with an appropriate Quality of Experience (QoE), if it can continuously upload a given amount of data within a given time window. To enable this, efficient path planning is fundamental. This paper presents a Deep Reinforcement Learning (DRL)-based solution in which a novel reward function is proposed, with the aim of offering a continuous service to vehicles. Results are presented in terms of the percentage of satisfied users. Both a discrete and a continuous action space are considered, using two different DRL algorithms, Double Dueling Deep Q Network (3DQN) and Deep Deterministic Policy Gradient (DDPG), in order to compare the two and select the better one for the scenario of interest.
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2023, pp. 1-5

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/960664
Note: the data shown have not been validated by the University.

Citations
  • Scopus: 3