End-to-end learned image compression with conditional latent space modeling for entropy coding

2021-01-24
Yesilyurt, Aziz Berkay
Kamışlı, Fatih
© 2021 European Signal Processing Conference, EUSIPCO. All rights reserved.

The use of neural networks in image compression enables transforms and probability models for entropy coding that can process images using much more complex models than the simple Gauss-Markov models of traditional compression methods, at the expense of higher computational complexity. The neural-network-based image compression literature proposes various methods to model the dependencies in the transform domain/latent space. This work uses an alternative method to exploit the dependencies of the latent representation. The joint density of the latent representation is modeled as a product of conditional densities, which are learned using neural networks. However, each latent variable is not conditioned on all previous latent variables, as in the chain rule for factoring joint distributions, but only on a few previous variables, in particular the left, upper and upper-left spatial neighbor variables, based on a Markov property assumption that yields a simpler model and algorithm. The compression performance is comparable with state-of-the-art compression models, while the conditional densities require a much simpler network and less training time due to their simplicity and smaller number of parameters than their counterparts.
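To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of a conditional entropy model: each latent is modeled with a Gaussian whose mean and scale are predicted from its left, upper, and upper-left neighbors. Here the "network" is a hypothetical single linear layer with made-up random weights; the paper learns these conditionals with neural networks end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny "network": a linear map from the three causal
# neighbors (left, upper, upper-left) to a Gaussian mean and log-scale.
# In the paper this would be a learned neural network.
W = rng.normal(scale=0.1, size=(2, 3))
b = np.zeros(2)

def conditional_params(left, up, upleft):
    """Predict (mean, scale) of a latent given its three causal neighbors."""
    mean, log_scale = W @ np.array([left, up, upleft]) + b
    return mean, np.exp(log_scale)

def estimated_bits(latents):
    """Sum of -log2 p(y_ij | left, up, up-left) over the latent grid,
    i.e. the bitrate an entropy coder driven by this model would need."""
    h, w = latents.shape
    bits = 0.0
    for i in range(h):
        for j in range(w):
            # Out-of-bounds neighbors are treated as zero (padding).
            left = latents[i, j - 1] if j > 0 else 0.0
            up = latents[i - 1, j] if i > 0 else 0.0
            upleft = latents[i - 1, j - 1] if i > 0 and j > 0 else 0.0
            mean, scale = conditional_params(left, up, upleft)
            # Gaussian negative log-density, converted from nats to bits.
            nll = 0.5 * ((latents[i, j] - mean) / scale) ** 2
            nll += np.log(scale) + 0.5 * np.log(2 * np.pi)
            bits += nll / np.log(2)
    return bits

y = rng.normal(size=(8, 8))  # stand-in for a latent feature map
print(f"estimated bitrate: {estimated_bits(y):.1f} bits")
```

Because each latent depends only on three already-decoded neighbors, the decoder can evaluate the same conditional densities in raster-scan order, which is what makes this factorization usable for entropy coding.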
Citation Formats
A. B. Yesilyurt and F. Kamışlı, “End-to-end learned image compression with conditional latent space modeling for entropy coding,” presented at the 28th European Signal Processing Conference, EUSIPCO 2020, Amsterdam, Netherlands, 2021, Accessed: 00, 2021. [Online]. Available: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85099287734&origin=inward.