Improving Learning Efficiency for Wireless Resource Allocation with Symmetric Prior
| Published in | IEEE Wireless Communications, vol. 29, no. 2, pp. 162–168 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE, 01.04.2022; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Summary: Improving learning efficiency is paramount for learning resource allocation with deep neural networks (DNNs) in wireless communications over highly dynamic environments. Incorporating domain knowledge into learning is a promising approach to this issue and an emerging topic in the wireless community. In this article, we briefly summarize two approaches for using domain knowledge: introducing mathematical models and introducing prior knowledge into deep learning. We then consider a type of symmetric prior, permutation equivariance, which widely exists in wireless tasks. To explain how such a generic prior can be harnessed to improve learning efficiency, we resort to ranking, which jointly sorts the input and output of a DNN. We use power allocation among subcarriers, probabilistic content caching, and interference coordination to illustrate the improvement in learning efficiency gained by exploiting this property. From the case studies, we find that the number of training samples required to achieve a given system performance decreases with the number of subcarriers or contents, owing to an interesting phenomenon called "sample hardening." Simulation results show that the training samples, the free parameters in the DNNs, and the training time can be reduced dramatically by harnessing the prior knowledge: the samples required to train a DNN after ranking can be reduced by 15- to 2,400-fold while achieving the same system performance as the counterpart that does not use the prior.
ISSN: 1536-1284, 1558-0687
DOI: 10.1109/MWC.003.21003437
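The summary above describes "ranking" as jointly sorting the input and output of a DNN to exploit permutation equivariance. Below is a minimal sketch of that idea in Python/NumPy, assuming a power-allocation setting; the `dnn` function is a hypothetical stand-in rather than the authors' trained network, and all names are illustrative.

```python
import numpy as np

def dnn(sorted_gains):
    # Placeholder for a trained power-allocation DNN (hypothetical):
    # here it simply normalizes the input so the powers sum to one.
    return sorted_gains / sorted_gains.sum()

def allocate_with_ranking(channel_gains):
    # Sort the inputs (e.g., per-subcarrier channel gains) in descending
    # order, remembering the permutation so the outputs can be unsorted.
    order = np.argsort(channel_gains)[::-1]
    sorted_gains = channel_gains[order]

    # The DNN only ever sees sorted inputs, so it must learn the mapping
    # on a far smaller region of the input space.
    sorted_powers = dnn(sorted_gains)

    # Apply the inverse permutation so each power returns to the
    # position of its original subcarrier.
    powers = np.empty_like(sorted_powers)
    powers[order] = sorted_powers
    return powers

gains = np.array([0.3, 1.2, 0.7, 0.1])
print(allocate_with_ranking(gains))
```

Because any permutation of the inputs is mapped to the same sorted representation, the sketch illustrates why, under permutation equivariance, training data requirements can drop sharply: many distinct unsorted samples collapse onto one sorted sample.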