https://arxiv.org/abs/1505.05770 Variational Inference with Normalizing Flows

We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained.
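A minimal sketch of the idea using the planar flows from the paper, with random (not learned) parameters and the paper's invertibility constraint on u omitted for brevity: samples from a simple Gaussian base density are pushed through a stack of invertible transforms, and their log-density is updated with the change-of-variables formula at every step.

```python
import numpy as np

rng = np.random.default_rng(0)

def planar_flow(z, u, w, b):
    """One planar transform f(z) = z + u * tanh(w.z + b) and its log|det Jacobian|."""
    a = z @ w + b                        # (N,)
    h = np.tanh(a)                       # (N,)
    z_new = z + np.outer(h, u)           # (N, D)
    psi = np.outer(1.0 - h**2, w)        # (N, D): h'(a) * w
    log_det = np.log(np.abs(1.0 + psi @ u) + 1e-9)
    return z_new, log_det

D, K, N = 2, 8, 1000                     # dimension, number of flow layers, samples
params = [(rng.normal(size=D) * 0.1,     # u_k
           rng.normal(size=D),           # w_k
           rng.normal())                 # b_k
          for _ in range(K)]

# Start from the simple base density q0 = N(0, I) ...
z = rng.normal(size=(N, D))
log_q = -0.5 * np.sum(z**2, axis=1) - 0.5 * D * np.log(2 * np.pi)

# ... and push it through the flow, accumulating the change of variables:
# log q_K(z_K) = log q_0(z_0) - sum_k log|det dF_k/dz|.
for u, w, b in params:
    z, log_det = planar_flow(z, u, w, b)
    log_q = log_q - log_det

print(z.shape, log_q.shape)              # transformed samples and their log-density
```

In variational inference the flow parameters would be trained to minimize the KL divergence between this transformed density and the true posterior; here they are left random purely to show the mechanics.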

https://arxiv.org/abs/1503.03585 Deep Unsupervised Learning using Nonequilibrium Thermodynamics

The essential idea, inspired by non-equilibrium statistical physics, is to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process. We then learn a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data. This approach allows us to rapidly learn, sample from, and evaluate probabilities in deep generative models with thousands of layers or time steps, as well as to compute conditional and posterior probabilities under the learned model.
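A rough sketch of the forward (noising) half of the process under a standard Gaussian diffusion; the linear beta schedule and the closed-form jump to step t are common illustrative choices, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward diffusion: slowly destroy structure in the data by repeatedly mixing in
# Gaussian noise, q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I).
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # noise schedule (an assumption for illustration)
alphas_bar = np.cumprod(1.0 - betas)

def diffuse(x0, t):
    """Sample x_t directly from q(x_t | x_0) using the closed form of the Gaussian chain."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

x0 = rng.normal(size=(16, 32 * 32))       # stand-in for a batch of data
x_mid, x_end = diffuse(x0, T // 2), diffuse(x0, T - 1)
print(x_mid.std(), x_end.std())           # by the final step the signal is essentially gone

# The generative model is the learned reverse chain p(x_{t-1} | x_t): a network
# predicts the parameters of each small denoising step, so that running the chain
# backwards from pure noise restores data-like structure.
```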

http://openreview.net/pdf?id=BJAFbaolg Learning to Generate Samples from Noise through Infusion Training


We present a new training procedure that allows a neural network to learn a transition operator of a Markov chain. Compared to the previously proposed method of Sohl-Dickstein et al. (2015), which is based on inverting a slow diffusion process, we show empirically that infusion training requires far fewer denoising steps and appears to provide more accurate models.
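A very rough, illustrative sketch of what an infusion training chain could look like; the blending rule, the toy transition operator, and the schedule of infusion rates below are placeholders for exposition, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def infusion_chain(x_target, operator_sample, alphas):
    """Generate a training trajectory: start from noise and, at each step, blend a
    small fraction alpha_t of the target example into the current state before
    sampling the next state from the transition operator."""
    z = rng.normal(size=x_target.shape)          # chain starts from pure noise
    states = [z]
    for a in alphas:
        z_infused = (1.0 - a) * z + a * x_target # "infuse" the target (illustrative blending rule)
        z = operator_sample(z_infused)           # one step of the (learnable) transition operator
        states.append(z)
    return states

# Hypothetical stand-in for a learned operator: here it just perturbs the state slightly.
operator_sample = lambda z: z + 0.05 * rng.normal(size=z.shape)

x_target = rng.normal(size=(64,))                # stand-in for a data example
alphas = np.linspace(0.02, 0.3, 15)              # infusion rate increasing over a short chain
trajectory = infusion_chain(x_target, operator_sample, alphas)

# Training would fit the operator to give high likelihood to the next state along
# such chains; at sampling time the infusion term is dropped and the learned
# operator is simply iterated from noise.
```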

https://openreview.net/forum?id=rkpdnIqlx&noteId=rkpdnIqlx The Variational Walkback Algorithm

https://arxiv.org/pdf/1706.00885v3.pdf IDK Cascades: Fast Deep Learning by Learning not to Overthink

We introduce the “I Don't Know” (IDK) prediction cascades framework, a general framework for composing a set of pre-trained models to accelerate inference without a loss in prediction accuracy. We propose two search-based methods for constructing cascades, as well as a new cost-aware objective within this framework. We evaluate these techniques on a range of both benchmark and real-world datasets and demonstrate that prediction cascades can reduce computation by 37%, resulting in up to 1.6x speedups in image classification tasks over state-of-the-art models without a loss in accuracy.
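A minimal sketch of how such a cascade might be wired up, assuming a confidence threshold as the IDK rule; the stand-in models and the hand-picked threshold are hypothetical (the paper selects cascades with its search methods and cost-aware objective rather than by hand).

```python
import numpy as np

def idk_cascade(x, models, thresholds):
    """Run pre-trained models from cheapest to most expensive. Each model either
    answers or effectively says "I don't know" (confidence below its threshold),
    passing the example down the cascade. The last model always answers."""
    for model, tau in zip(models[:-1], thresholds):
        probs = model(x)                          # class probabilities from a cheaper model
        if probs.max() >= tau:                    # confident enough: stop early, saving compute
            return int(np.argmax(probs))
    return int(np.argmax(models[-1](x)))          # fall through to the most expensive model

# Hypothetical stand-ins for a small and a large pre-trained classifier.
rng = np.random.default_rng(0)
small = lambda x: np.array([0.90, 0.05, 0.05])    # usually confident, cheap to run
large = lambda x: np.array([0.40, 0.50, 0.10])    # expensive fallback
print(idk_cascade(rng.normal(size=10), [small, large], thresholds=[0.8]))
```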

https://arxiv.org/pdf/1706.02480v1.pdf Forward Thinking: Building and Training Neural Networks One Layer at a Time

We present a general framework for training deep neural networks without backpropagation. This substantially decreases training time and also allows for the construction of deep networks from many sorts of learners, including networks whose layers are defined by functions that are not easily differentiated, such as decision trees. The main idea is that layers can be trained one at a time; once a layer is trained, the input data are mapped forward through it to create a new learning problem. The process is repeated, transforming the data through multiple layers, one at a time, yielding a new data set that is expected to be better behaved and on which a final output layer can achieve good performance. We call this forward thinking and demonstrate a proof of concept by achieving state-of-the-art accuracy for convolutional neural networks on the MNIST dataset. We also provide a general mathematical formulation of forward thinking that allows other types of deep learning problems to be considered.
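A small sketch of the greedy layer-wise recipe, using scikit-learn as a stand-in: each layer is trained with a temporary output head, frozen, and then used to map the data forward into the next learning problem. The dataset, layer widths, and choice of classifiers below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X / 16.0, y, random_state=0)

def train_frozen_layer(X, y, width=64):
    """Train a one-hidden-layer net with a temporary softmax head, then keep only
    the hidden layer as a fixed (frozen) feature map."""
    net = MLPClassifier(hidden_layer_sizes=(width,), activation='relu',
                        max_iter=300, random_state=0).fit(X, y)
    W, b = net.coefs_[0], net.intercepts_[0]
    return lambda Z: np.maximum(0.0, Z @ W + b)    # frozen ReLU layer

# Build the network one layer at a time: train, freeze, map the data forward.
layers, Z_tr, Z_te = [], X_tr, X_te
for _ in range(3):
    layer = train_frozen_layer(Z_tr, y_tr)
    layers.append(layer)
    Z_tr, Z_te = layer(Z_tr), layer(Z_te)          # the new, hopefully better-behaved, data set

head = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)   # final output layer
print("test accuracy:", head.score(Z_te, y_te))
```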