Multi Variable-layer Neural Networks for Decoding Linear Codes

The belief propagation algorithm is a state-of-the-art decoding technique for a variety of linear codes, such as LDPC codes. The iterative structure of this algorithm is reminiscent of a neural network with multiple layers. Indeed, this similarity has recently been exploited to improve decoding performance by tuning the weights of the equivalent neural network. In this paper, we introduce a new network architecture that increases the number of variable-node layers while keeping the check-node layers unchanged. The changes are applied in such a way that the decoding performance of the network becomes independent of the transmitted codeword; hence, training on the all-zero codeword alone is sufficient. Simulation results on a number of well-studied linear codes indicate that, besides improving decoding performance, the new architecture is also simpler than some existing decoding networks.
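For context, the iterative decoder that these networks unfold is standard belief propagation over a parity-check matrix. The sketch below is a minimal, unweighted min-sum variant in NumPy — an illustrative assumption, not the paper's weighted multi-layer architecture; the function name, the toy (7,4) Hamming matrix, and the LLR values are all made up for the example. Each loop iteration corresponds to one variable-node layer followed by one check-node layer in the unfolded network.

```python
import numpy as np

def min_sum_decode(H, llr, iterations=10):
    """Decode channel LLRs with (unweighted) min-sum belief propagation.

    H   : (m, n) binary parity-check matrix
    llr : length-n array of channel log-likelihood ratios (positive -> bit 0)
    Returns the hard-decision bit vector after at most `iterations` rounds.
    """
    m, n = H.shape
    c2v = np.zeros((m, n))  # check-to-variable messages, initialised to zero
    for _ in range(iterations):
        # Variable-node update (extrinsic): channel LLR plus all incoming
        # check messages, minus the message from the target check itself.
        total = llr + c2v.sum(axis=0)
        v2c = (total[None, :] - c2v) * H
        # Check-node min-sum update: product of signs times the minimum
        # magnitude over the other incident variable messages.
        for j in range(m):
            idx = np.nonzero(H[j])[0]
            msgs = v2c[j, idx]
            signs = np.sign(msgs)
            signs[signs == 0] = 1.0
            prod = np.prod(signs)
            mags = np.abs(msgs)
            for k, i in enumerate(idx):
                c2v[j, i] = prod * signs[k] * np.delete(mags, k).min()
        # Hard decision on the posterior LLRs; stop early if the syndrome
        # is zero (a valid codeword has been reached).
        bits = ((llr + c2v.sum(axis=0)) < 0).astype(int)
        if np.all(H @ bits % 2 == 0):
            break
    return bits
```

In the trained ("neural") variants discussed in the abstract, each message in the updates above would be scaled by a learned weight, and the codeword-independence property is what lets those weights be trained on the all-zero codeword alone.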
