Marini R., Spampinato L., Mignardi S., Verdone R., Buratti C. (2022). Reinforcement Learning-Based Trajectory Planning For UAV-aided Vehicular Communications. In European Signal Processing Conference (EUSIPCO).
Reinforcement Learning-Based Trajectory Planning For UAV-aided Vehicular Communications
Marini R.; Spampinato L.; Mignardi S.; Verdone R.; Buratti C.
2022
Abstract
It is widely expected that 6G networks will rely on Unmanned Aerial Vehicles (UAVs) acting as flying Base Stations (BSs) to provide a wide range of services that current networks cannot handle. One of the major trends concerns Vehicle-To-Everything (V2X) communications, where vehicles must be connected to the network to enable applications such as advanced driving and extended sensing. In this context, vehicles could rely heavily on flying BSs to increase throughput or reduce the experienced latency, thus satisfying the constraints of such services. Consequently, path planning must be designed so that UAVs can maintain stable links with moving vehicles. Reinforcement Learning (RL) techniques are becoming the main enabler for solving this problem, since they make it possible to learn how to act in an environment with little prior information, given that full knowledge of the scenario is usually not available. In this paper, we present an RL-based approach to the path planning problem in a vehicular scenario, where UAVs, exploiting beamforming, are required to follow moving cars for as long as possible. Different RL architectures, as well as a benchmark solution not using RL, are compared to select the strategy that maximises the sum throughput.
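To give a concrete flavour of the kind of learning loop the abstract describes, the sketch below shows tabular Q-learning on a toy one-dimensional "UAV follows a car" problem. This is a minimal illustrative sketch, not the paper's implementation: the state space (the signed UAV-car offset), the action set, the random car motion, and the distance-based throughput proxy are all assumptions chosen for brevity.

```python
# Minimal sketch (illustrative only, NOT the paper's method): tabular
# Q-learning for a 1-D toy problem where a UAV tries to hover over a
# randomly drifting car. Reward is a crude throughput proxy that decays
# with UAV-car distance; all constants are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

MAX_OFFSET = 10                      # clip the UAV-car offset to [-10, 10]
N_STATES = 2 * MAX_OFFSET + 1        # one discrete state per offset value
ACTIONS = (-1, 0, +1)                # move left / hover / move right

def reward(offset):
    # Throughput proxy: higher when the UAV stays close to the car.
    return 1.0 / (1.0 + abs(offset))

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration

for episode in range(2000):
    offset = int(rng.integers(-MAX_OFFSET, MAX_OFFSET + 1))
    for _ in range(50):
        s = offset + MAX_OFFSET
        # Epsilon-greedy action selection.
        a = (int(rng.integers(len(ACTIONS))) if rng.random() < eps
             else int(np.argmax(Q[s])))
        car_move = int(rng.choice((-1, 0, +1)))      # car drifts randomly
        offset = int(np.clip(offset + ACTIONS[a] - car_move,
                             -MAX_OFFSET, MAX_OFFSET))
        s2, r = offset + MAX_OFFSET, reward(offset)
        # One-step Q-learning update toward the throughput-proxy reward.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

# After training, the greedy policy steers the UAV toward zero offset,
# i.e. it keeps the link to the moving car as strong as possible.
print([ACTIONS[int(np.argmax(Q[s]))] for s in range(N_STATES)])
```

The paper compares several RL architectures (and a non-RL benchmark) on a far richer vehicular scenario with beamforming; this toy loop only illustrates the general idea of learning a following policy from a throughput-shaped reward with little prior knowledge of the environment.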