DRL-FORCH: A Scalable Deep Reinforcement Learning-based Fog Computing Orchestrator

Di Cicco, Nicola; Pittalà, Gaetano Francesco; Davoli, Gianluca; Borsatti, Davide; Cerroni, Walter; Raffaelli, Carla; Tornatore, Massimo
2023

Abstract

We consider the problem of designing and training a neural network-based orchestrator for fog computing service deployment. Our goal is to train an orchestrator able to optimize diversified and competing QoS requirements, such as blocking probability and service delay, while potentially supporting thousands of fog nodes. To cope with these challenges, we implement our neural orchestrator as a Deep Set (DS) network operating on sets of fog nodes, and we leverage Deep Reinforcement Learning (DRL) with invalid action masking to find an optimal trade-off between competing objectives. Illustrative numerical results show that our Deep Set-based policy generalizes well to problem sizes (i.e., numbers of fog nodes) up to two orders of magnitude larger than those seen during the training phase, outperforming both greedy heuristics and a traditional Multi-Layer Perceptron (MLP)-based DRL policy. In addition, the inference time of our DS-based policy is up to an order of magnitude shorter than that of an MLP, allowing for excellent scalability and near real-time online decision-making.
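For readers unfamiliar with the two key ingredients named in the abstract, the sketch below (in PyTorch, which the paper does not mandate) illustrates how a Deep Set policy over a variable-size set of fog nodes can be combined with invalid action masking. All names (DeepSetPolicy, phi, rho, valid_mask), feature dimensions, and layer sizes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class DeepSetPolicy(nn.Module):
    # Illustrative sketch only: layer sizes, feature dimensions, and the
    # conditioning scheme are assumptions, not the architecture from the paper.
    def __init__(self, node_feat_dim: int = 8, hidden_dim: int = 64):
        super().__init__()
        # phi: per-node encoder, weights shared across all fog nodes
        self.phi = nn.Sequential(
            nn.Linear(node_feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # rho: scores each node given its embedding plus a permutation-invariant
        # summary (mean pooling) of the whole set
        self.rho = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, nodes: torch.Tensor, valid_mask: torch.Tensor):
        # nodes: (batch, n_nodes, node_feat_dim); n_nodes may differ from training
        # valid_mask: (batch, n_nodes) bool, True where deployment is feasible
        h = self.phi(nodes)                                  # (B, N, H)
        pooled = h.mean(dim=1, keepdim=True).expand_as(h)    # set-level summary
        logits = self.rho(torch.cat([h, pooled], dim=-1)).squeeze(-1)  # (B, N)
        # Invalid action masking: infeasible nodes get -inf logits and hence
        # zero probability under the resulting categorical distribution.
        logits = logits.masked_fill(~valid_mask, float("-inf"))
        return torch.distributions.Categorical(logits=logits)


# Example: pick among 5 candidate fog nodes, two of which are infeasible.
policy = DeepSetPolicy()
node_features = torch.randn(1, 5, 8)
feasible = torch.tensor([[True, True, False, True, False]])
dist = policy(node_features, feasible)
action = dist.sample()   # index of the fog node selected for deployment

Because the per-node encoder and the mean pooling are shared across nodes and independent of the set size, the same trained weights can be evaluated on sets far larger than those seen during training, which is the property underlying the generalization result claimed in the abstract.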
Proc. of 2023 IEEE 9th International Conference on Network Softwarization (NetSoft), pp. 125-133
DRL-FORCH: A Scalable Deep Reinforcement Learning-based Fog Computing Orchestrator / Di Cicco, Nicola; Pittalà, Gaetano Francesco; Davoli, Gianluca; Borsatti, Davide; Cerroni, Walter; Raffaelli, Carla; Tornatore, Massimo. - ELECTRONIC. - (2023), pp. 125-133. (Paper presented at the 2023 IEEE 9th International Conference on Network Softwarization, held in Madrid, Spain, 19-23 June 2023) [10.1109/NetSoft57336.2023.10175398].
Files in this item:

POSTPRINT.pdf (open access)
Type: Postprint
License: free open-access license
Size: 1.8 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/934894
Citations
  • Scopus: 0