Real-time Online Learning for Pattern Reconfigurable Antenna State Selection
Published in | 2020 7th NAFOSTED Conference on Information and Computer Science (NICS), pp. 13 - 18 |
---|---|
Main Authors | , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 26.11.2020 |
Summary: | Pattern reconfigurable antennas (PRAs) can dynamically change their radiation pattern and provide diversity and directional gain. These properties allow them to adapt to channel variations by steering directional beams toward desired transmissions and away from interference sources, thus enhancing the overall performance of a wireless communication system. To fully exploit the benefits of a PRA, the key challenge is selecting the antenna state optimally in real time. To the best of our knowledge, the current literature on this topic focuses on designing algorithms that select the best antenna mode, with evaluation performed in simulation or post-processing. In this study, we not only design a real-time online antenna state selection framework for SISO wireless links but also implement it in an experimental software-defined radio testbed. We benchmark a multi-armed bandit algorithm against other antenna state selection algorithms and show that it improves system performance by exploiting the directionality of PRAs to mitigate interference. We also show that the bandit approach degrades when the optimal state changes over time; in such non-stationary scenarios, the Adaptive Pursuit algorithm performs well, and we discuss modifications that would allow the bandit algorithm to handle this case better. |
---|---|
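The summary frames antenna state selection as a multi-armed bandit problem: each PRA state is an arm, and the link quality observed after selecting it is the reward. The paper does not say which bandit policy is benchmarked, so the sketch below uses the standard UCB1 rule; the `measure_reward(state)` callback is a hypothetical stand-in for the SDR testbed's link-quality measurement (e.g., SNR or packet success rate), and all names here are illustrative rather than taken from the paper.

```python
import math

def ucb1_select(num_states, num_rounds, measure_reward):
    """Select PRA states with the UCB1 bandit rule; return the empirical best state."""
    counts = [0] * num_states      # how many times each antenna state was tried
    totals = [0.0] * num_states    # cumulative observed reward per state

    for t in range(1, num_rounds + 1):
        if t <= num_states:
            state = t - 1          # initialization: try every state once
        else:
            # mean reward plus an exploration bonus that shrinks as a state
            # accumulates trials
            state = max(
                range(num_states),
                key=lambda s: totals[s] / counts[s]
                + math.sqrt(2.0 * math.log(t) / counts[s]),
            )
        reward = measure_reward(state)  # switch the PRA and observe link quality
        counts[state] += 1
        totals[state] += reward

    return max(range(num_states), key=lambda s: totals[s] / max(counts[s], 1))
```

Because the exploration bonus decays as a state accumulates trials, a policy like this converges on one state in a stationary channel, which is consistent with the summary's observation that the bandit approach degrades once the optimal state starts to drift.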
DOI: | 10.1109/NICS51282.2020.9335872 |
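For the non-stationary case, Adaptive Pursuit maintains a selection probability per state and pursues whichever state currently has the highest recency-weighted value estimate, while never letting any state's probability fall below an exploration floor. The sketch below follows the standard formulation of the method; the learning rates `alpha` and `beta`, the floor `p_min`, and the `measure_reward` callback are illustrative assumptions, not values from the paper.

```python
import random

def adaptive_pursuit(num_states, num_rounds, measure_reward,
                     alpha=0.1, beta=0.1, p_min=0.05):
    """Adaptive Pursuit state selection; returns the final selection probabilities."""
    p_max = 1.0 - (num_states - 1) * p_min   # per-round targets sum to 1
    probs = [1.0 / num_states] * num_states  # per-state selection probabilities
    values = [0.0] * num_states              # recency-weighted reward estimates

    for _ in range(num_rounds):
        state = random.choices(range(num_states), weights=probs)[0]
        reward = measure_reward(state)       # switch the PRA, observe link quality
        # exponential recency weighting lets the estimate track a drifting channel
        values[state] += beta * (reward - values[state])
        best = max(range(num_states), key=lambda s: values[s])
        # move probabilities toward p_max for the current best state and
        # toward the exploration floor p_min for all others
        for s in range(num_states):
            target = p_max if s == best else p_min
            probs[s] += alpha * (target - probs[s])
    return probs
```

The exploration floor `p_min` is what lets the policy notice when a previously inferior state becomes the best one, which matches the summary's point that a recency-aware scheme handles a time-varying optimal state better than a plain bandit.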