Matrix-Inverse-Free Deep Unfolding of the Weighted MMSE Beamforming Algorithm

Bibliographic Details
Published in: IEEE Open Journal of the Communications Society, Vol. 3, pp. 65-81
Main Authors: Pellaco, Lissy; Bengtsson, Mats; Jaldén, Joakim
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
Summary: Downlink beamforming is a key technology for cellular networks. However, computing beamformers that maximize the weighted sum rate (WSR) subject to a power constraint is an NP-hard problem. The popular weighted minimum mean square error (WMMSE) algorithm converges to a local optimum but still exhibits considerable complexity. To address this trade-off between complexity and performance, we propose to apply deep unfolding to the WMMSE algorithm for a MU-MISO downlink channel. The main idea consists of mapping a fixed number of iterations of the WMMSE algorithm into trainable neural network layers. However, the formulation of the WMMSE algorithm, as provided in Shi et al., involves matrix inversions, eigendecompositions, and bisection searches. These operations are hard to implement as standard network layers. Therefore, we present a variant of the WMMSE algorithm that i) circumvents these operations by applying projected gradient descent and ii) as a result, involves only operations that can be efficiently computed in parallel on hardware platforms designed for deep learning. We demonstrate that our variant of the WMMSE algorithm converges to a stationary point of the WSR maximization problem, and we accelerate its convergence by incorporating Nesterov acceleration, and a generalization thereof, as learnable structures. By means of simulations, we show that the proposed network architecture i) performs on par with the WMMSE algorithm truncated to the same number of iterations, yet at lower complexity, and ii) generalizes well to changes in the channel distribution.
ISSN: 2644-125X
DOI: 10.1109/OJCOMS.2021.3139858
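
To make the projected-gradient idea in the summary concrete, below is a minimal, illustrative NumPy sketch of one unfolded layer: a Nesterov-style extrapolation followed by a gradient step and a projection onto a total-power constraint, using only matrix products and elementwise operations (no matrix inversions, eigendecompositions, or bisection searches). The problem sizes, the quadratic stand-in objective in surrogate_grad, and the fixed step/momentum values are assumptions for illustration only; the paper instead derives the gradient of the WMMSE surrogate and trains the per-layer step and momentum parameters end-to-end.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy MU-MISO sizes (illustrative, not from the paper): M transmit
    # antennas, K single-antenna users, total power budget p_max.
    M, K, p_max = 4, 3, 1.0
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

    def project_power(V):
        # Euclidean projection onto the total-power ball ||V||_F^2 <= p_max;
        # needs only a norm and a rescaling, no matrix inversion.
        s = np.sum(np.abs(V) ** 2)
        return V if s <= p_max else V * np.sqrt(p_max / s)

    def surrogate_grad(V):
        # Stand-in gradient of the quadratic ||H V - I||_F^2, used only to
        # keep the sketch self-contained; the paper uses the gradient of the
        # WMMSE (WSR) surrogate objective instead.
        return H.conj().T @ (H @ V - np.eye(K))

    def unfolded_layer(V, V_prev, step, momentum):
        # One unfolded layer: Nesterov-style extrapolation, then a projected
        # gradient step. In the trained network, step and momentum would be
        # the learnable per-layer parameters.
        Y = V + momentum * (V - V_prev)
        return project_power(Y - step * surrogate_grad(Y)), V

    # A fixed number of layers mirrors the truncated, unfolded iterations.
    V = V_prev = np.zeros((M, K), dtype=complex)
    for _ in range(8):
        V, V_prev = unfolded_layer(V, V_prev, step=0.1, momentum=0.5)
    print("beamformer power:", np.sum(np.abs(V) ** 2))

Because every operation above is a matrix product, an elementwise rescaling, or an addition, the whole layer maps directly onto hardware platforms designed for deep learning, which is the point of the matrix-inverse-free reformulation.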