https://openreview.net/forum?id=rJiNwv9gg Lossy Image Compression with Compressive Autoencoders

We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. Here we show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.
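As a rough illustration of the sub-pixel idea mentioned above (not the authors' exact architecture), a decoder layer can produce r² times the target channels with an ordinary convolution and then rearrange them into a spatially upsampled output. The PyTorch sketch below uses channel counts and an upscale factor chosen purely for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch of a sub-pixel (pixel-shuffle) upsampling layer.
# Channel sizes and the upscale factor are illustrative assumptions,
# not the configuration used in the paper.
class SubPixelUpsample(nn.Module):
    def __init__(self, in_channels=64, out_channels=3, upscale=2):
        super().__init__()
        # The convolution produces out_channels * upscale^2 feature maps ...
        self.conv = nn.Conv2d(in_channels, out_channels * upscale ** 2,
                              kernel_size=3, padding=1)
        # ... which PixelShuffle rearranges into an output that is
        # `upscale` times larger in both spatial dimensions.
        self.shuffle = nn.PixelShuffle(upscale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

x = torch.randn(1, 64, 16, 16)
print(SubPixelUpsample()(x).shape)  # torch.Size([1, 3, 32, 32])
```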

Unfortunately, lossy compression is an inherently non-differentiable problem. In particular, quantization is an integral part of the compression pipeline but is not differentiable, which makes it difficult to train neural networks for this task. We propose a simple but effective approach for dealing with the non-differentiability of rounding-based quantization, and for approximating the non-differentiable cost of coding the generated coefficients.

The derivative of the rounding function is zero everywhere except at integers, where it is undefined. We propose to replace its derivative in the backward pass of backpropagation (Rumelhart et al., 1986) with the derivative of a smooth approximation, r. Empirically, we found the identity, r(y) = y, to work as well as more sophisticated choices. This makes this operation easy to implement, as we simply have to pass gradients without modification from the decoder to the encoder.
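A minimal sketch of this trick (assuming PyTorch rather than the TensorFlow code linked below): round in the forward pass, but make the backward pass use the derivative of the identity, so gradients flow from the decoder to the encoder unchanged.

```python
import torch

def round_with_identity_grad(y):
    # Forward pass: hard rounding (the actual quantization).
    # Backward pass: detach() hides the rounding from autograd,
    # so the gradient is that of the smooth approximation r(y) = y.
    return y + (torch.round(y) - y).detach()

y = torch.tensor([0.4, 1.7, -2.3], requires_grad=True)
q = round_with_identity_grad(y)   # tensor([ 0.,  2., -2.])
q.sum().backward()
print(y.grad)                     # tensor([1., 1., 1.]) -- identity gradient
```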

source code: https://github.com/tensorflow/models/tree/2390974a/compression

https://arxiv.org/abs/1703.01467 Generative Compression

Here we describe the concept of generative compression, the compression of data using generative models, and show its potential to produce more accurate and visually pleasing reconstructions at much deeper compression levels for both image and video data. We also demonstrate that generative compression is orders-of-magnitude more resilient to bit error rates (e.g. from noisy wireless channels) than traditional variable-length entropy coding schemes.

https://arxiv.org/abs/1704.00648 Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations

We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both.
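A rough sketch of the soft-assignment step (illustrative, not the authors' code): each latent vector is softly assigned to learned codebook centers via a softmax over negative scaled distances, and the scale parameter is annealed upward during training so the soft assignments approach hard nearest-center quantization.

```python
import torch
import torch.nn.functional as F

def soft_to_hard_quantize(z, centers, sigma):
    """Soft relaxation of vector quantization (illustrative sketch).

    z:       (N, d) latent vectors
    centers: (K, d) learned codebook
    sigma:   softness parameter, annealed upward during training
    """
    dist = torch.cdist(z, centers)                  # (N, K) distances to centers
    soft_assign = F.softmax(-sigma * dist, dim=1)   # soft one-hot assignments
    z_soft = soft_assign @ centers                  # differentiable surrogate
    z_hard = centers[dist.argmin(dim=1)]            # hard quantization (inference)
    return z_soft, z_hard

z = torch.randn(8, 4)
centers = torch.randn(16, 4, requires_grad=True)
z_soft, z_hard = soft_to_hard_quantize(z, centers, sigma=10.0)
```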

https://openreview.net/forum?id=BJRZzFlRb&noteId=BJRZzFlRb Compressing Word Embeddings via Deep Compositional Code Learning