Andrea Piroddi (2021). Dynamic Resource Provisioning using Cognitive Intelligent Networks based on Stochastic Markov Decision Process. Montreal: Taylor & Francis Group.
Dynamic Resource Provisioning using Cognitive Intelligent Networks based on Stochastic Markov Decision Process
Andrea Piroddi
First author
Conceptualization
2021
Abstract
One of the major challenges of the near future will be identifying appropriate methodologies for transmitting information about the channel's status and for actively reallocating system resources among the operating nodes of the network. A feasible approach is to build a network of intelligent nodes. Distributed cognitive intelligence must make it possible to configure and adapt nodes so that they can modify their state to maximize an objective function. In this approach, the major problem is how to establish communication between nodes before they have had the opportunity to exchange information on the state of the transmission channel. This condition reduces to estimating the probability of a future event. Reinforcement Learning (RL) is a natural candidate for controlling such a system. The system is inherently stochastic, and we can assume that the data each node holds about the transmission channel's state is sufficiently detailed. Markov Decision Processes (MDPs) are therefore well suited to describing this architecture. In this way, the search for a good control system is transformed into the search for a good value function. Many heuristic algorithms can produce excellent results in the field of self-organizing networks (SONs).

Keywords: Channel, Redundancy, Smart-Environment.
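The abstract's central idea — turning the search for a good control system into the search for a good value function — can be illustrated with classical value iteration on a toy MDP. The two channel states ("good", "bad"), the two node actions ("transmit", "wait"), and all transition probabilities and rewards below are illustrative assumptions for the sketch, not figures from the paper.

```python
# Toy channel MDP: P[state][action] -> list of (probability, next_state, reward).
# All numbers are hypothetical, chosen only to make the example concrete.
P = {
    "good": {
        "transmit": [(0.8, "good", 1.0), (0.2, "bad", -1.0)],
        "wait":     [(0.9, "good", 0.0), (0.1, "bad", 0.0)],
    },
    "bad": {
        "transmit": [(0.3, "good", 1.0), (0.7, "bad", -1.0)],
        "wait":     [(0.5, "good", 0.0), (0.5, "bad", 0.0)],
    },
}

GAMMA = 0.9  # discount factor


def value_iteration(P, gamma, tol=1e-8):
    """Iterate the Bellman optimality backup until the value function converges,
    then read off the greedy policy with respect to that value function."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Action values: expected immediate reward plus discounted future value.
            q = {a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                 for a in P[s]}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                             for p, s2, r in P[s][a]))
              for s in P}
    return V, policy


V, policy = value_iteration(P, GAMMA)
print(V, policy)
```

Under these particular numbers the greedy policy transmits when the channel is good and waits when it is bad — the "good control system" falls out of the converged value function, exactly the reduction the abstract describes.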


