Autoencoder-Based Error Correction Coding for One-Bit Quantization
Date
2020-06-01
Author
Balevi, Eren
Andrews, Jeffrey G.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
This paper proposes a novel deep learning-based error correction coding scheme for AWGN channels under the constraint of one-bit quantization in receivers. Specifically, it is first shown that the optimum error correction code that minimizes the probability of bit error can be obtained by perfectly training a special autoencoder, where "perfectly" refers to converging to the global minimum. However, perfect training is not possible in most cases. To approach the performance of a perfectly trained autoencoder with suboptimum training, we propose utilizing turbo codes as an implicit regularization, i.e., using a concatenation of a turbo code and an autoencoder. It is empirically shown that this design gives nearly the same performance as a hypothetically perfectly trained autoencoder, and we also provide a theoretical proof of why this is so. The proposed coding method is as bandwidth efficient as the integrated (outer) turbo code, since the autoencoder exploits the excess bandwidth from pulse shaping and packs signals more intelligently thanks to the sparsity of neural networks. Our results show that at finite block lengths the proposed coding scheme outperforms conventional turbo codes, even for QPSK modulation. Furthermore, the proposed coding method can make one-bit quantization operational even for 16-QAM.
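To make the channel constraint concrete, the following is a minimal, self-contained sketch (not from the paper, and not its autoencoder) of why one-bit quantization is so punishing: the receiver keeps only the sign of each noisy sample, discarding all amplitude information. The SNR value and BPSK mapping are illustrative assumptions.

```python
import numpy as np

# Toy illustration: uncoded BPSK over an AWGN channel whose receiver
# applies one-bit quantization, the constraint studied in the paper.
rng = np.random.default_rng(0)

n_bits = 100_000
snr_db = 4.0                         # illustrative SNR, not from the paper
bits = rng.integers(0, 2, n_bits)
symbols = 1.0 - 2.0 * bits           # BPSK mapping: 0 -> +1, 1 -> -1

# Noise variance chosen so that Es/N0 equals snr_db (Es = 1).
noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10.0))
received = symbols + noise_std * rng.normal(size=n_bits)

# One-bit quantization: only the sign of each sample survives.
quantized = np.sign(received)

decoded = (quantized < 0).astype(int)
ber = np.mean(decoded != bits)
print(f"BER with one-bit quantization at {snr_db} dB SNR: {ber:.4f}")
```

Any coding scheme for this channel, including the autoencoder-based one proposed above, must recover reliability from these sign-only observations.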
Subject Keywords
Decoding, Training, Quantization (signal), Turbo codes, Communication systems, Error correction codes, Deep learning, error correction coding, one-bit quantization, DEEP, NETWORKS
URI
https://hdl.handle.net/11511/99967
Journal
IEEE TRANSACTIONS ON COMMUNICATIONS
DOI
https://doi.org/10.1109/tcomm.2020.2977280
Collections
Department of Electrical and Electronics Engineering, Article
Suggestions
Deep learning-based encoder for one-bit quantization
Balevi, Eren; Andrews, Jeffrey G. (2019-12-01)
© 2019 IEEE. This paper proposes a deep learning-based error correction coding for AWGN channels under the constraint of one-bit quantization in receivers. An autoencoder is designed and integrated with a turbo code that acts as an implicit regularization. This implicit regularizer facilitates approaching the Shannon bound for the one-bit quantized AWGN channels even if the autoencoder is trained suboptimally, since one-bit quantization stymies ideal training. Our empirical results show that the proposed cod...
High Rate Communication over One-Bit Quantized Channels via Deep Learning and LDPC Codes
Balevi, Eren; Andrews, Jeffrey G. (2020-05-01)
This paper proposes a method for designing error correction codes by combining a known coding scheme with an autoencoder. Specifically, we integrate an LDPC code with a trained autoencoder to develop an error correction code for intractable nonlinear channels. The LDPC encoder shrinks the input space of the autoencoder, which enables the autoencoder to learn more easily. The proposed error correction code shows promising results for one-bit quantization, a challenging case of a nonlinear channel. Specifical...
Optimized Transmission of 3D Video over DVB-H Channel
Bugdayci, Done; Akar, Gözde; Gotchev, Atanas (2012-01-17)
In this paper, we present a complete framework of an end-to-end error resilient transmission of 3D video over DVB-H and provide an analysis of transmission parameters. We perform the analysis for various layering, protection strategy and prediction structure using different contents and different channel conditions.
Packet loss resilient transmission of 3D models
Bici, M. Oguz; Norkin, Andrey; Akar, Gözde (2007-09-19)
This paper presents an efficient joint source-channel coding scheme based on forward error correction (FEC) for three dimensional (3D) models. The system employs a wavelet based zero-tree 3D mesh coder based on Progressive Geometry Compression (PGC). Reed-Solomon (RS) codes are applied to the embedded output bitstream to add resiliency to packet losses. Two-state Markovian channel model is employed to model packet losses. The proposed method applies approximately optimal and unequal FEC across packets. Ther...
Joint source-channel coding for error resilient transmission of static 3D models
Bici, Mehmet Oguz; Norkin, Andrey; Akar, Gözde (2012-01-01)
In this paper, performance analysis of joint source-channel coding techniques for error-resilient transmission of three dimensional (3D) models are presented. In particular, packet based transmission scenarios are analyzed. The packet loss resilient methods are classified into two groups according to progressive compression schemes employed: Compressed Progressive Meshes (CPM) based methods and wavelet based methods. In the first group, layers of CPM algorithm are protected unequally by Forward Error Correc...
Citation Formats
IEEE
E. Balevi and J. G. Andrews, "Autoencoder-Based Error Correction Coding for One-Bit Quantization," IEEE TRANSACTIONS ON COMMUNICATIONS, vol. 68, no. 6, pp. 3440–3451, 2020, Accessed: 00, 2022. [Online]. Available: https://hdl.handle.net/11511/99967.