A fast reconstruction of the parity-check matrices of LDPC codes in a noisy environment

Bibliographic Details
Published in: Computer Communications, Vol. 176, pp. 163-172
Main Authors: Liu, Qian; Zhang, Hao; Shen, Gaofeng; Mei, Fengtong
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.08.2021
Summary: In non-cooperative communications, blind identification of the widely used LDPC codes is an emerging research hotspot. It differs from, and is more difficult than, blind identification of channel coding in cooperative communications. Common disadvantages of existing algorithms are poor fault tolerance, a large number of iterations, and applicability only to short codes. Our proposed algorithm builds on the idea presented by Sicot et al. (2009) for binary phase-shift keying (BPSK) signals over the additive white Gaussian noise (AWGN) channel. First, to sort the codewords according to a proper threshold, we present a threshold function and derive its zero point, which experiments verify to be the optimal threshold; this threshold improves the fault tolerance of our algorithm. Second, an operation called bidirectional Gaussian column elimination (BGCE) is proposed to replace Gaussian column elimination (GCE); it speeds up the derivation of parity checks and considerably reduces the number of iterations. Third, we use the technique of Canteaut and Chabaud (1998) for finding low-weight codewords to make the linearly independent parity checks sparse, thereby reconstructing the sparse parity-check matrix of the LDPC code. Comparative experiments demonstrate that our algorithm outperforms existing algorithms.
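
The core primitive behind the summary, deriving parity checks from a stack of codewords by Gaussian column elimination over GF(2), can be illustrated compactly. The following Python/NumPy sketch shows only the plain GCE step on noise-free codewords: codeword estimates form the rows of a binary matrix, column operations are recorded against an identity block, and any column whose codeword part is eliminated to zero yields a dual vector (a candidate parity check). The paper's bidirectional variant (BGCE), the optimal threshold for noisy data, and the Canteaut-Chabaud sparsification are not reproduced here; the helper name dual_basis_gf2 and the toy Hamming-code usage are illustrative assumptions, not taken from the paper.

    # Minimal sketch of plain Gaussian column elimination (GCE) over GF(2),
    # assuming noise-free codeword estimates; not the paper's BGCE.
    import numpy as np

    def dual_basis_gf2(codewords):
        """Return a basis of {h : c . h = 0 (mod 2) for every row c},
        found by Gaussian column elimination over GF(2)."""
        C = np.asarray(codewords, dtype=np.uint8) % 2
        m, n = C.shape
        # Stack an identity block under C so column operations are recorded:
        # if a column of the C-block is reduced to zero, the same column of
        # the identity block holds a vector h with C @ h == 0 (mod 2).
        A = np.vstack([C, np.eye(n, dtype=np.uint8)])
        pivot_col = 0
        for row in range(m):
            if pivot_col >= n:
                break
            hits = np.nonzero(A[row, pivot_col:])[0]
            if hits.size == 0:
                continue                                    # no pivot in this row
            j = pivot_col + hits[0]
            A[:, [pivot_col, j]] = A[:, [j, pivot_col]]     # move pivot into place
            for k in np.nonzero(A[row])[0]:                 # clear the rest of the row
                if k != pivot_col:
                    A[:, k] ^= A[:, pivot_col]
            pivot_col += 1
        # Columns whose codeword part is now all-zero encode parity checks.
        null_cols = np.nonzero(~A[:m].any(axis=0))[0]
        return A[m:, null_cols].T

    # Toy usage with the (7,4) Hamming code: feed all 16 codewords and
    # recover three independent parity checks (7 - 4 = 3).
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
    msgs = np.array([[(i >> b) & 1 for b in range(3, -1, -1)]
                     for i in range(16)], dtype=np.uint8)
    codewords = msgs @ G % 2
    H = dual_basis_gf2(codewords)
    assert H.shape[0] == 3 and not (codewords @ H.T % 2).any()

In the paper's setting the rows are noisy hard decisions rather than exact codewords, which is why the threshold-based sorting and the bidirectional elimination described in the summary are needed on top of this basic step.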
ISSN: 0140-3664, 1873-703X
DOI: 10.1016/j.comcom.2021.05.023