Name

Reversible Layer

Intent

Motivation

Structure

<Diagram>

Discussion

Known Uses

Related Patterns

<Diagram>

References

http://arxiv.org/pdf/1511.05653v2.pdf Why are deep nets reversible: A simple theory, with implications for training

The generative model is just the reverse of the feedforward net: if the forward transformation at a layer is A, then the reverse transformation is Aᵀ.

(This can be seen as an explanation of the old weight tying idea for denoising autoencoders.)
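A minimal sketch of such a weight-tied layer (illustrative code, not from the paper; the class name, dimensions, and Gaussian initialization are assumptions): the forward pass applies W, the reverse/generative pass applies Wᵀ of the same matrix.

    import numpy as np

    def relu(v):
        return np.maximum(v, 0.0)

    class TiedLayer:
        """Weight-tied layer: the generative (reverse) pass reuses W, transposed."""
        def __init__(self, n_in, n_out, seed=0):
            rng = np.random.default_rng(seed)
            # Scaled so that W.T @ W is roughly the identity.
            self.W = rng.normal(0.0, 1.0 / np.sqrt(n_out), size=(n_out, n_in))

        def forward(self, x):       # recognition direction: h = relu(W x)
            return relu(self.W @ x)

        def reverse(self, h):       # generative direction: x = relu(W.T h)
            return relu(self.W.T @ h)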

Its correctness can be proven under a clean theoretical assumption: the edge weights in real-life deep nets behave like random numbers.

We have highlighted an interesting empirical finding: the weights of neural nets obtained by standard supervised training behave similarly to random numbers. We have given a mathematical proof that this property leads to a very simple explanation of why neural nets have an associated generative model, one that is essentially the reversal of the forward computation with the same edge weights.
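The random-like-weights property is easy to check numerically. Under a Gaussian-weight assumption, applying the transpose to the ReLU output recovers the input up to a factor of 2, since the ReLU zeroes roughly half of the signal. A toy check (the dimensions below are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 64, 16384                    # visible and hidden widths (arbitrary)
    W = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # "random-like" weights
    x = rng.normal(size=n)

    h = np.maximum(W @ x, 0.0)          # forward pass: h = relu(W x)
    x_rec = 2.0 * (W.T @ h)             # reverse pass: transpose of the same W
                                        # (factor 2: ReLU drops half the signal)

    rel_err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
    print(f"relative reconstruction error: {rel_err:.2f}")  # shrinks as m/n grows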

https://arxiv.org/abs/1808.09111 Unsupervised Learning of Syntactic Structure with Invertible Neural Projections

We show that the invertibility condition allows for efficient exact inference and marginal likelihood computation in our model so long as the prior is well-behaved.
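A toy illustration of why invertibility buys exact marginal likelihoods (this is only the change-of-variables identity such models rely on, not the paper's actual model): for an invertible projection x = Wz with a standard Gaussian prior on z, log p(x) is the prior log-density at the exact inverse image W⁻¹x minus log|det W|, and both terms are cheap to compute.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 8
    W = rng.normal(size=(d, d))         # invertible projection (almost surely)
    x = rng.normal(size=d)              # a made-up "observed" vector

    # Change of variables for x = W z with prior z ~ N(0, I):
    #   log p_X(x) = log N(W^-1 x; 0, I) - log|det W|
    z = np.linalg.solve(W, x)           # exact inverse image of x
    _, logabsdet = np.linalg.slogdet(W)
    log_prior = -0.5 * (z @ z + d * np.log(2.0 * np.pi))
    print(f"exact log-likelihood: {log_prior - logabsdet:.3f}")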

https://arxiv.org/abs/1810.10999 Reversible Recurrent Neural Networks

https://blog.openai.com/glow/ Glow: Generative Flow with Invertible 1x1 Convolutions

https://arxiv.org/abs/1811.00002 WaveGlow: A Flow-based Generative Network for Speech Synthesis (project page: https://nv-adlr.github.io/WaveGlow)
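Glow, WaveGlow, and related flow models build their reversible layers from coupling transforms whose inverse and Jacobian log-determinant are available in closed form. A minimal affine-coupling sketch (illustrative names such as toy_net; this is an assumption-laden toy, not any library's API):

    import numpy as np

    def affine_coupling_forward(x1, x2, net):
        # x1 passes through unchanged and parameterizes the affine
        # transform applied to x2; the Jacobian is triangular.
        log_s, t = net(x1)
        y2 = x2 * np.exp(log_s) + t
        return x1, y2, np.sum(log_s)          # third output: log|det Jacobian|

    def affine_coupling_inverse(y1, y2, net):
        # Exact inverse: recompute scale/shift from the unchanged half.
        log_s, t = net(y1)
        return y1, (y2 - t) * np.exp(-log_s)

    # Toy stand-in for the scale/shift predictor (any small network works).
    rng = np.random.default_rng(0)
    W_net = 0.1 * rng.normal(size=(4, 2))
    def toy_net(x1):
        h = np.tanh(W_net @ x1)
        return h[:2], h[2:]                   # (log-scale, shift) for x2

    x1, x2 = rng.normal(size=2), rng.normal(size=2)
    y1, y2, logdet = affine_coupling_forward(x1, x2, toy_net)
    assert np.allclose(affine_coupling_inverse(y1, y2, toy_net)[1], x2)

Stacking such couplings, with permutations or invertible 1x1 convolutions between them as Glow does, yields a deep network that can be evaluated exactly in either direction.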

https://openreview.net/forum?id=rJxgknCcK7