Reinforcement Learning-Based Trajectory Planning For UAV-aided Vehicular Communications

Marini R.; Spampinato L.; Mignardi S.; Verdone R.; Buratti C.
2022

Abstract

It is widely expected that 6G networks will rely on Unmanned Aerial Vehicles (UAVs) acting as flying Base Stations (BSs) to provide a wide range of services that current networks cannot handle. One of the major trends concerns Vehicle-To-Everything (V2X) communications, where vehicles must be connected to the network to enable applications such as advanced driving and extended sensing. In this context, vehicles could rely heavily on flying BSs to increase throughput or reduce the experienced latency, thus satisfying the constraints of such services. Consequently, path planning must be designed so that UAVs can maintain stable links with moving vehicles. Reinforcement Learning (RL) techniques are becoming the main enabler for solving this problem, since they offer the possibility to learn how to act in an environment with little prior information, given that full knowledge of the scenario is usually unavailable. In this paper, we present an RL-based approach to the path planning problem in a vehicular scenario, where UAVs, exploiting beamforming, are required to follow moving cars for as long as possible. Different RL architectures, as well as a benchmark solution not using RL, are compared to select the strategy maximising the sum throughput.
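To give a concrete sense of the kind of RL machinery the abstract refers to, the sketch below shows a minimal tabular Q-learning agent steering a UAV on a small grid toward a target cell, a stand-in for a vehicle whose link quality improves as the UAV gets closer. This is purely illustrative: the grid size, the distance-based reward, and all hyperparameters are assumptions made here, not the setup or architecture used in the paper.

```python
import random

GRID = 5                                         # toy 5x5 grid world
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]     # right, left, down, up
TARGET = (4, 4)                                  # fixed "vehicle" position (toy assumption)

def step(state, action):
    """Apply an action, clip to the grid, and return (next_state, reward)."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    # Reward is the negative Manhattan distance to the target: the closer the
    # UAV flies to the vehicle, the higher the (proxy) throughput reward.
    reward = -(abs(TARGET[0] - x) + abs(TARGET[1] - y))
    return (x, y), reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    states = [(i, j) for i in range(GRID) for j in range(GRID)]
    q = {(s, a): 0.0 for s in states for a in range(len(ACTIONS))}
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(2 * GRID):                # short episodes suffice here
            if rng.random() < eps:               # explore
                a = rng.randrange(len(ACTIONS))
            else:                                # exploit current estimate
                a = max(range(len(ACTIONS)), key=lambda k: q[(state, k)])
            nxt, r = step(state, ACTIONS[a])
            best_next = max(q[(nxt, k)] for k in range(len(ACTIONS)))
            q[(state, a)] += alpha * (r + gamma * best_next - q[(state, a)])
            state = nxt
    return q

def greedy_path(q, start=(0, 0), max_steps=20):
    """Follow the learned greedy policy from start until the target (or cap)."""
    state, path = start, [start]
    for _ in range(max_steps):
        if state == TARGET:
            break
        a = max(range(len(ACTIONS)), key=lambda k: q[(state, k)])
        state, _ = step(state, ACTIONS[a])
        path.append(state)
    return path
```

After training, the greedy policy traces a short path from the start cell to the target cell, which is the one-UAV, one-vehicle analogue of the trajectory-planning task the paper tackles at scale with richer state spaces and throughput-based rewards.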
European Signal Processing Conference, pp. 967-971

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/907166
