Replica Exchange Spatial Adaptive Play for Channel Allocation in Cognitive Radio Networks
Published in: 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring), pp. 1-5
Format: Conference Proceeding
Language: English
Published: IEEE, 01.04.2019
Summary: This paper proposes a novel channel allocation scheme based on the replica exchange Monte Carlo method (REMCMC). Some distributed channel allocation schemes in the literature formulate the channel allocation problem as a potential game, in which the unilateral improvement dynamics is guaranteed to converge to a Nash equilibrium. In general, spatial adaptive play (SAP), one of the representative learning algorithms in the potential game-based approach, can reach an optimal Nash equilibrium stochastically. However, this stochastic search is inefficient for channel allocation, and SAP tends to become stuck in a sub-optimal Nash equilibrium within a limited time. To assist in finding the optimal Nash equilibrium for this kind of channel allocation problem, we apply the REMCMC to the existing potential game-based channel allocation. We show that SAP can be regarded as a sampling process of the Boltzmann-Gibbs distribution, so that sampling methods can be utilized. We evaluated the proposed algorithm through simulations, and the results show that the proposed algorithm can find the optimal Nash equilibrium quickly.
ISSN: 2577-2465
DOI: 10.1109/VTCSpring.2019.8746346
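The summary describes SAP as a Gibbs-sampling process over the Boltzmann-Gibbs distribution of a potential game, with replica exchange used to escape sub-optimal equilibria. Below is a minimal illustrative sketch of that combination on a toy problem; it is not the paper's implementation. The interference graph, number of channels, temperature ladder, and swap interval are all made-up assumptions, and the potential here is simply the number of interfering link pairs sharing a channel.

```python
import math
import random

# Hypothetical toy problem: 4 links, 2 channels, edges = interfering pairs.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N_NODES, N_CHANNELS = 4, 2

def potential(alloc):
    """Global potential: number of interfering pairs on the same channel."""
    return sum(1 for u, v in EDGES if alloc[u] == alloc[v])

def sap_step(alloc, temperature, rng):
    """One SAP move: a random player resamples its channel from the
    Boltzmann-Gibbs distribution over its own candidate actions."""
    i = rng.randrange(N_NODES)
    costs = []
    for c in range(N_CHANNELS):
        trial = alloc[:]
        trial[i] = c
        costs.append(potential(trial))
    weights = [math.exp(-cost / temperature) for cost in costs]
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for c, w in enumerate(weights):
        acc += w
        if r <= acc:
            alloc[i] = c
            break

def replica_exchange(replicas, temps, rng):
    """Metropolis swap between adjacent-temperature replicas:
    accept with prob min(1, exp((1/T_k - 1/T_{k+1}) * (E_k - E_{k+1})))."""
    for k in range(len(replicas) - 1):
        e1, e2 = potential(replicas[k]), potential(replicas[k + 1])
        delta = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (e1 - e2)
        if delta >= 0 or rng.random() < math.exp(delta):
            replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]

def run(steps=2000, seed=0):
    rng = random.Random(seed)
    temps = [0.1, 0.5, 2.0]  # cold to hot replicas (assumed ladder)
    replicas = [[rng.randrange(N_CHANNELS) for _ in range(N_NODES)]
                for _ in temps]
    best = min(replicas, key=potential)[:]
    for t in range(steps):
        for alloc, temp in zip(replicas, temps):
            sap_step(alloc, temp, rng)
        if t % 10 == 0:  # occasional exchange attempts
            replica_exchange(replicas, temps, rng)
        cand = min(replicas, key=potential)
        if potential(cand) < potential(best):
            best = cand[:]
    return best, potential(best)
```

The hot replicas explore the allocation space freely while the cold replica exploits low-potential allocations; the Metropolis swap lets a good configuration found at high temperature migrate toward the cold chain, which is the mechanism the abstract credits for avoiding sub-optimal Nash equilibria. On this toy graph (a 4-cycle plus one chord with two channels), at least one conflict is unavoidable, so the best reachable potential is 1.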