Di Cicco, N., Pittalà, G.F., Davoli, G., Borsatti, D., Cerroni, W., Raffaelli, C., et al. (2025). Scalable and Energy-Efficient Service Orchestration in the Edge-Cloud Continuum With Multi-Objective Reinforcement Learning. IEEE Transactions on Network and Service Management, 22(5), 3882-3894 [10.1109/TNSM.2025.3574131].
Scalable and Energy-Efficient Service Orchestration in the Edge-Cloud Continuum With Multi-Objective Reinforcement Learning
Pittalà, Gaetano Francesco; Davoli, Gianluca; Borsatti, Davide; Cerroni, Walter; Raffaelli, Carla
2025
Abstract
The Edge-Cloud Continuum represents a paradigm shift in distributed computing, seamlessly integrating resources from cloud data centers to edge devices. However, orchestrating services across this heterogeneous landscape poses significant challenges, as it requires finding a delicate balance between different (and competing) objectives, including service acceptance probability, offered Quality-of-Service, and network energy consumption. To address this challenge, we propose leveraging Multi-Objective Reinforcement Learning (MORL) to approximate the full Pareto Front of service orchestration policies. In contrast to conventional solutions based on single-objective RL, a MORL approach allows a network operator to inspect all possible "optimal" trade-offs, and then decide a posteriori on the orchestration policy that best satisfies the system's operational requirements. Specifically, we first conduct an extensive measurement study to accurately model the energy consumption of heterogeneous edge devices and servers under various workloads, alongside the resource consumption of popular cloud services. Then, we develop a set-based MORL policy for service orchestration that can adapt to arbitrary network topologies without the need for retraining. Illustrative numerical results against selected heuristics show that our MORL policy outperforms baselines by 30% on average over a broad set of objective preferences, and generalizes to network topologies up to 5x larger than training.

| File | Size | Format |
|---|---|---|
| postprint_TNSM3574131.pdf (embargo until 27/05/2027) — Type: Postprint / Author's Accepted Manuscript (AAM), version accepted for publication after peer review. License: free open-access license. | 2.14 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.