Multi-Time-Scale Markov Decision Process for Joint Service Placement, Network Selection, and Computation Offloading in Aerial IoV Scenarios

Shinde, Swapnil Sadashiv; Tarchi, Daniele
In press

Abstract

Vehicular Edge Computing (VEC) is considered a major enabler for multi-service vehicular 6G scenarios. However, the limited computation, communication, and storage resources of terrestrial edge servers are becoming a bottleneck and hindering the performance of VEC-enabled Vehicular Networks (VNs). Aerial platforms are considered a viable solution, allowing for extended coverage and expanded available resources. However, in such a dynamic scenario, it is important to perform proper service placement based on the users' demands. Furthermore, with limited computing and communication resources, proper user-server assignments and offloading strategies need to be adopted. Considering their different time scales, a multi-time-scale optimization process is proposed here to effectively address the joint service placement, network selection, and computation offloading problem. With this scope in mind, we propose a multi-time-scale Markov Decision Process (MDP)-based Reinforcement Learning (RL) approach to solve this problem and improve the latency and energy performance of VEC-enabled VNs. Given the complex nature of the joint optimization process, an advanced deep Q-learning method is considered. Comparison with various benchmark methods shows an overall improvement in latency and energy performance in different VN scenarios.
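The multi-time-scale structure described in the abstract — service placement revised on a slow time scale, per-task offloading decided on a fast one — can be illustrated with a toy two-time-scale Q-learning loop. This is a minimal sketch, not the paper's deep Q-learning method: all states, actions, and cost numbers below are invented for illustration only.

```python
import random

# Toy two-time-scale Q-learning for a simplified VEC model.
# Placements, offloading options, and costs are hypothetical.

random.seed(0)

PLACEMENTS = ["ground", "aerial"]   # slow scale: where the service is placed
OFFLOAD = ["local", "edge"]         # fast scale: where each task is executed

def step_cost(placement, offload, load):
    """Hypothetical per-task cost (latency/energy proxy); lower is better."""
    base = {"local": 1.0, "edge": 0.4 if placement == "aerial" else 0.6}[offload]
    return base + (0.5 * load if offload == "edge" else 0.0)

Q_slow = {p: 0.0 for p in PLACEMENTS}                        # slow-scale cost estimates
Q_fast = {(p, o): 0.0 for p in PLACEMENTS for o in OFFLOAD}  # fast-scale cost estimates
alpha, eps = 0.1, 0.2

for epoch in range(200):            # slow time scale: choose a service placement
    p = (random.choice(PLACEMENTS) if random.random() < eps
         else min(Q_slow, key=Q_slow.get))
    epoch_cost = 0.0
    for t in range(10):             # fast time scale: offload each arriving task
        load = random.random()
        o = (random.choice(OFFLOAD) if random.random() < eps
             else min(OFFLOAD, key=lambda a: Q_fast[(p, a)]))
        c = step_cost(p, o, load)
        Q_fast[(p, o)] += alpha * (c - Q_fast[(p, o)])   # fast-scale update
        epoch_cost += c
    Q_slow[p] += alpha * (epoch_cost / 10 - Q_slow[p])   # slow-scale update

best_placement = min(Q_slow, key=Q_slow.get)
```

Under these invented costs, the fast-scale table learns to prefer edge offloading under the aerial placement, and the slow-scale table converges toward the placement with the lower long-run cost. The paper instead handles this coupling with a multi-time-scale MDP solved by an advanced deep Q-learning method.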
Shinde, S.S., Tarchi, D. (In press). Multi-Time-Scale Markov Decision Process for Joint Service Placement, Network Selection, and Computation Offloading in Aerial IoV Scenarios. IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, Early Access, 1-15 [10.1109/tnse.2024.3445890].

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/979017
