Replica Exchange Spatial Adaptive Play for Channel Allocation in Cognitive Radio Networks

Bibliographic Details
Published in: 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring), pp. 1-5
Main Authors: Deng, Wangdong; Kamiya, Shotaro; Yamamoto, Koji; Nishio, Takayuki; Morikura, Masahiro
Format: Conference Proceeding
Language: English
Published: IEEE, 01.04.2019

Summary: This paper proposes a novel channel allocation scheme based on the replica exchange Monte Carlo method (REMCMC). Some distributed channel allocation schemes in the literature formulate the channel allocation problem as a potential game, in which unilateral improvement dynamics are guaranteed to converge to a Nash equilibrium. In general, spatial adaptive play (SAP), one of the representative learning algorithms in the potential game-based approach, can reach an optimal Nash equilibrium stochastically. However, this is inefficient for channel allocation, and SAP tends to get stuck in a sub-optimal Nash equilibrium within a limited time. To assist in finding the optimal Nash equilibrium for this kind of channel allocation problem, we apply the REMCMC to the existing potential game-based channel allocation. We show that SAP can be regarded as a sampling process of the Boltzmann-Gibbs distribution, so that sampling methods can be utilized. We evaluated the proposed algorithm through simulations, and the results show that it can find the optimal Nash equilibrium quickly.
ISSN: 2577-2465
DOI: 10.1109/VTCSpring.2019.8746346
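
As a rough illustration of the approach described in the summary, the following Python sketch combines spatial adaptive play (each update re-samples one node's channel from the Boltzmann-Gibbs distribution over its local utilities) with replica exchange across a temperature ladder. This is a minimal sketch, not the authors' code: the interference graph, number of channels, temperature values, and swap interval are illustrative assumptions and are not taken from the paper.

# Sketch (assumed setup): SAP with replica exchange on a toy interference graph.
import math
import random

random.seed(0)

# Toy interference graph: nodes are transmitters; an edge means the two
# endpoints interfere if they select the same channel (assumed topology).
NODES = list(range(8))
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0),
         (0, 4), (1, 5), (2, 6), (3, 7)]
CHANNELS = [0, 1, 2]          # assumed number of available channels
NEIGHBORS = {n: [] for n in NODES}
for a, b in EDGES:
    NEIGHBORS[a].append(b)
    NEIGHBORS[b].append(a)

def potential(alloc):
    """Global potential: number of interfering (same-channel) edges."""
    return sum(alloc[a] == alloc[b] for a, b in EDGES)

def sap_step(alloc, temperature):
    """One SAP update: a randomly chosen node re-samples its channel from the
    Boltzmann-Gibbs distribution over its local utilities."""
    node = random.choice(NODES)
    # Local utility of each channel = minus the interference it would cause.
    utilities = [-sum(alloc[m] == c for m in NEIGHBORS[node]) for c in CHANNELS]
    weights = [math.exp(u / temperature) for u in utilities]
    r = random.random() * sum(weights)
    for c, w in zip(CHANNELS, weights):
        r -= w
        if r <= 0:
            alloc[node] = c
            break

# Replica exchange: run several SAP chains at different temperatures and
# occasionally attempt a Metropolis swap between adjacent temperatures.
TEMPERATURES = [0.1, 0.3, 1.0]                       # assumed ladder
replicas = [{n: random.choice(CHANNELS) for n in NODES} for _ in TEMPERATURES]

for step in range(2000):
    for alloc, temp in zip(replicas, TEMPERATURES):
        sap_step(alloc, temp)
    if step % 20 == 0:                                # assumed swap interval
        i = random.randrange(len(TEMPERATURES) - 1)
        d_beta = 1.0 / TEMPERATURES[i] - 1.0 / TEMPERATURES[i + 1]
        d_phi = potential(replicas[i]) - potential(replicas[i + 1])
        if random.random() < min(1.0, math.exp(d_beta * d_phi)):
            replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]

print("interfering edges at the coldest replica:", potential(replicas[0]))

In this sketch the hot replicas explore the allocation space while the cold replica exploits, and the swap rule lets low-potential allocations migrate toward the coldest chain, which mirrors the mechanism the paper relies on to escape sub-optimal Nash equilibria.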