
Multi-Time-Scale Markov Decision Process for Joint Service Placement, Network Selection, and Computation Offloading in Aerial IoV Scenarios

Shinde, Swapnil Sadashiv; Tarchi, Daniele
2024

Abstract

Vehicular Edge Computing (VEC) is considered a major enabler for multi-service vehicular 6G scenarios. However, the limited computation, communication, and storage resources of terrestrial edge servers are becoming a bottleneck, hindering the performance of VEC-enabled Vehicular Networks (VNs). Aerial platforms are considered a viable solution, allowing for extended coverage and expanded available resources. However, in such a dynamic scenario, it is important to perform proper service placement based on the users' demands. Furthermore, with limited computing and communication resources, proper user-server assignments and offloading strategies need to be adopted. Considering the different time scales of these decisions, a multi-time-scale optimization process is proposed here to effectively address the joint service placement, network selection, and computation offloading problem. With this scope in mind, we propose a multi-time-scale Markov Decision Process (MDP) based Reinforcement Learning (RL) approach to solve this problem and improve the latency and energy performance of VEC-enabled VNs. Given the complex nature of the joint optimization process, an advanced deep Q-learning method is considered. Comparison with various benchmark methods shows an overall improvement in latency and energy performance in different VN scenarios.
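To make the multi-time-scale decision structure described in the abstract concrete, the sketch below pairs a slow learner (service placement, re-decided once per epoch) with a fast learner (joint network selection and computation offloading, decided every slot). This is only an illustrative sketch under assumed settings: the state/action spaces, the toy cost, and the epoch length T_SLOW are hypothetical placeholders, and tabular Q-learning stands in for the deep Q-learning agent used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, toy-sized problem dimensions (not taken from the paper).
N_PLACEMENTS = 4      # candidate service-placement configurations (slow action)
N_NETWORKS   = 3      # e.g. terrestrial edge, aerial platform, cloud (fast action)
N_STATES     = 8      # coarse quantization of load/channel conditions
T_SLOW       = 10     # one slow (placement) decision every T_SLOW fast slots

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# One Q-table per time scale; the fast table is conditioned on the placement.
q_slow = np.zeros((N_STATES, N_PLACEMENTS))
q_fast = np.zeros((N_PLACEMENTS, N_STATES, N_NETWORKS))

def eps_greedy(q_row):
    """Epsilon-greedy action selection over one row of a Q-table."""
    if rng.random() < EPS:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

def toy_cost(placement, network, state):
    """Stand-in for the latency/energy cost returned by the environment."""
    return rng.random() + 0.1 * abs(placement - network) + 0.01 * state

state = int(rng.integers(N_STATES))
for slot in range(1000):
    if slot % T_SLOW == 0:
        # Slow time scale: re-decide the service placement for the next epoch.
        placement = eps_greedy(q_slow[state])
        slow_state, slow_return = state, 0.0

    # Fast time scale: joint network selection / offloading decision.
    network = eps_greedy(q_fast[placement, state])
    reward = -toy_cost(placement, network, state)     # maximize reward = minimize cost
    next_state = int(rng.integers(N_STATES))          # toy environment transition

    # Fast Q-update (per slot).
    td = reward + GAMMA * q_fast[placement, next_state].max() - q_fast[placement, state, network]
    q_fast[placement, state, network] += ALPHA * td

    slow_return += reward
    if (slot + 1) % T_SLOW == 0:
        # Slow Q-update uses the return accumulated over the placement epoch.
        td = slow_return + GAMMA * q_slow[next_state].max() - q_slow[slow_state, placement]
        q_slow[slow_state, placement] += ALPHA * td

    state = next_state
```

Conditioning the fast Q-table on the current placement reflects the coupling between the two time scales: offloading and network-selection decisions are taken within the configuration fixed by the slower placement decision, while the placement update is driven by the return accumulated over its epoch.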
Shinde, S.S., Tarchi, D. (2024). Multi-Time-Scale Markov Decision Process for Joint Service Placement, Network Selection, and Computation Offloading in Aerial IoV Scenarios. IEEE Transactions on Network Science and Engineering, 11(6), 5364-5379. doi: 10.1109/TNSE.2024.3445890.
Files in this record:
File: Multi-Time-Scale_Markov_Decision_Process_for_Joint_Service_Placement_Network_Selection_and_Computation_Offloading_in_Aerial_IoV_Scenarios.pdf
Access: Open access
Type: Publisher's PDF / Version of Record
License: Open Access license, Creative Commons Attribution (CC BY)
Size: 4.1 MB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/11585/979017
Citations
  • PMC: ND (not available)
  • Scopus: 9
  • Web of Science: 6