Staffolani, A., Darvariu, V., Foschini, L., Girolami, M., Bellavista, P., & Musolesi, M. (2024). PRORL: Proactive Resource Orchestrator for Open RANs Using Deep Reinforcement Learning. IEEE Transactions on Network and Service Management, 21(4), 3933-3944. https://doi.org/10.1109/tnsm.2024.3373606
PRORL: Proactive Resource Orchestrator for Open RANs Using Deep Reinforcement Learning
Staffolani, Alessandro; Foschini, Luca; Bellavista, Paolo; Musolesi, Mirco
2024
Abstract
Open Radio Access Network (O-RAN) is an emerging paradigm for enhancing the 5G network infrastructure. O-RAN promotes open, vendor-neutral interfaces and virtualized network functions that enable the decoupling of network components and their optimization through intelligent controllers. The decomposition of base station functions enables better resource usage, but it also opens new technical challenges concerning the efficient orchestration and allocation of these resources. In this paper, we propose the Proactive Resource Orchestrator based on Reinforcement Learning (PRORL), a novel solution for the efficient and dynamic allocation of resources in O-RAN infrastructures. We frame the problem as a Markov Decision Process and solve it using Deep Reinforcement Learning; a distinctive feature of PRORL is that it learns demand patterns from experience, which enables proactive resource allocation. We extensively evaluate our proposal on both synthetic and real-world data, showing that it significantly outperforms existing algorithms, which are typically based on the analysis of static demands. More specifically, PRORL achieves an improvement of 90% over greedy baselines and handles complex trade-offs among competing objectives such as demand satisfaction, resource utilization, and the inherent cost of allocating resources.
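To make the abstract's MDP framing concrete, the sketch below models resource allocation across O-RAN nodes as a Markov Decision Process: the state is the current allocation plus a time-of-day feature, actions move one resource unit between nodes (or do nothing), and the reward trades off demand satisfaction, over-provisioning, and reallocation cost. This is a minimal illustration only, not the paper's formulation: the node count, reward weights, and synthetic demand model are assumptions, and tabular Q-learning is used in place of the deep RL agent to keep the example self-contained.

```python
# Illustrative MDP for proactive resource allocation across O-RAN nodes.
# All dimensions, weights, and the demand model are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 3          # assumed number of O-RAN nodes
TOTAL_UNITS = 6      # shared pool of resource units to distribute
W_SAT, W_UTIL, W_COST = 1.0, 0.3, 0.1  # assumed reward weights


def demand(t):
    """Synthetic periodic demand per node, standing in for real traces."""
    base = 2.0 + np.sin(2 * np.pi * t / 24 + np.arange(N_NODES))
    return np.clip(base + rng.normal(0, 0.2, N_NODES), 0, None)


def step(alloc, action, t):
    """Apply an action (move one unit from node i to node j, or a no-op)."""
    new_alloc = alloc.copy()
    moved = 0
    if action is not None:
        i, j = action
        if new_alloc[i] > 0:
            new_alloc[i] -= 1
            new_alloc[j] += 1
            moved = 1
    d = demand(t)
    satisfied = np.minimum(new_alloc, d).sum() / d.sum()      # demand met
    waste = np.maximum(new_alloc - d, 0).sum() / TOTAL_UNITS  # over-provisioning
    reward = W_SAT * satisfied - W_UTIL * waste - W_COST * moved
    return new_alloc, reward


# Discrete action space: every (from, to) move plus a no-op.
ACTIONS = [None] + [(i, j) for i in range(N_NODES)
                    for j in range(N_NODES) if i != j]

# Tabular Q-learning over (allocation, hour-of-day) states; the paper
# uses deep RL, but a Q-table keeps this sketch dependency-free.
Q = {}
eps, alpha, gamma = 0.2, 0.1, 0.95
alloc = np.full(N_NODES, TOTAL_UNITS // N_NODES)

for t in range(5000):
    s = (tuple(alloc), t % 24)
    qs = Q.setdefault(s, np.zeros(len(ACTIONS)))
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(qs.argmax())
    alloc, r = step(alloc, ACTIONS[a], t)
    s2 = (tuple(alloc), (t + 1) % 24)
    q2 = Q.setdefault(s2, np.zeros(len(ACTIONS)))
    qs[a] += alpha * (r + gamma * q2.max() - qs[a])

print("learned allocation:", alloc)
```

Because the demand signal is periodic in the time-of-day feature, the agent can learn to reallocate units ahead of recurring peaks, which is the "proactive" behavior the abstract describes; in the paper this role is played by a deep RL agent trained on synthetic and real-world traces.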
File: PRORL_Proactive_Resource_Orchestrator_for_Open_RANs_Using_Deep_Reinforcement_Learning.pdf (open access; publisher's version (PDF); Creative Commons Attribution (CC BY) license; 3.43 MB; Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.