Di Felice M., Chowdhury K. R., Wu C., Bononi L., Meleis W. (2010). Learning-Based Spectrum Selection in Cognitive Radio Ad Hoc Networks. Berlin: Springer-Verlag Berlin/Heidelberg. DOI: 10.1007/978-3-642-13315-2_11.
Learning-Based Spectrum Selection in Cognitive Radio Ad Hoc Networks
Di Felice, Marco; Bononi, Luciano
2010
Abstract
Cognitive Radio Ad Hoc Networks (CRAHNs) must identify the best operational characteristics based on local spectrum availability, reachability of neighboring nodes, and the choice of spectrum, while maintaining acceptable end-to-end performance. The distributed nature of the operation forces each node to act autonomously, while still aiming to optimize overall network performance. These unique characteristics of CRAHNs make reinforcement learning (RL) techniques an attractive tool for protocol design. In this paper, we survey the state of the art in RL schemes that can be applied to CRAHNs, and we propose modifications from the viewpoint of routing and link-layer spectrum-aware operations. As an example of Q-learning, we provide a framework for applying RL techniques to joint power and spectrum allocation. Finally, through a simulation study, we demonstrate the benefits of using RL schemes under dynamic spectrum conditions.
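To illustrate the kind of Q-learning-based joint spectrum and power selection the abstract refers to, the following is a minimal tabular sketch. It is not the paper's formulation: the state definition (last channel used), the toy reward, the primary-user activity model, and all parameter values (ALPHA, GAMMA, EPSILON, N_CHANNELS, POWER_LEVELS) are illustrative assumptions.

# Minimal sketch of tabular Q-learning over joint (channel, power) actions.
# All modeling choices below are assumptions for illustration only.
import random

N_CHANNELS = 5            # assumed number of available spectrum bands
POWER_LEVELS = [1, 2, 3]  # assumed discrete transmit-power levels
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # standard Q-learning parameters

# Actions are (channel, power) pairs; the state is the last channel used,
# a deliberately coarse stand-in for the node's local spectrum context.
ACTIONS = [(c, p) for c in range(N_CHANNELS) for p in POWER_LEVELS]
Q = {(s, a): 0.0 for s in range(N_CHANNELS) for a in ACTIONS}

def reward(channel, power, pu_active):
    # Toy reward: higher power approximates higher throughput, but a
    # transmission on a channel occupied by a primary user is penalized.
    return -5.0 if pu_active[channel] else float(power)

def choose_action(state):
    # Epsilon-greedy selection over (channel, power) pairs.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = 0
for step in range(10_000):
    # Assumed primary-user activity: each channel busy with 30% probability.
    pu_active = [random.random() < 0.3 for _ in range(N_CHANNELS)]
    action = choose_action(state)
    channel, power = action
    r = reward(channel, power, pu_active)
    next_state = channel
    # One-step Q-learning update.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = next_state

print("Preferred (channel, power) from state", state, "->",
      max(ACTIONS, key=lambda a: Q[(state, a)]))

In a CRAHN setting, the reward would instead be derived from observed link- or path-level feedback (e.g., throughput, interference to primary users), and each node would maintain its own table, which is the distributed, autonomous operation the abstract emphasizes.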