and demonstrate that our method provides new insights into properties of GANs.

https://arxiv.org/abs/1804.08682 Boltzmann Encoded Adversarial Machines

https://arxiv.org/abs/1805.00020 A Guide to Constraining Effective Field Theories with Machine Learning

The best results are found for likelihood ratio estimators trained with extra information about the score, the gradient of the log likelihood function with respect to the theory parameters.
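
As a minimal illustration of what the score is, here is a sketch using a toy 1-D Gaussian model in PyTorch (an assumption for illustration; the paper works with effective field theory simulators, not this toy likelihood):

<code python>
# Score t(x|theta) = grad_theta log p(x|theta), illustrated for the toy
# model p(x|theta) = N(x; theta, 1). Not the paper's EFT setup.
import torch

theta = torch.tensor(0.5, requires_grad=True)  # theory parameter
x = torch.tensor(1.3)                          # one observed data point

# log N(x; theta, 1), including the normalizing constant
log_p = -0.5 * (x - theta) ** 2 - 0.5 * torch.log(torch.tensor(2 * torch.pi))
log_p.backward()

print(theta.grad.item())    # autograd score: x - theta = 0.8
print((x - theta).item())   # analytic score, for comparison
</code>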

https://arxiv.org/pdf/1805.08318.pdf Self-Attention Generative Adversarial Networks

https://avg.is.tuebingen.mpg.de/research_projects/convergence-and-stability-of-gan-training Convergence and Stability of GAN Training
https://avg.is.tuebingen.mpg.de/uploads_file/attachment/attachment/424/Mescheder2018ICML.pdf https://github.com/LMescheder/GAN_stability

https://arxiv.org/abs/1807.00374v2 Augmented Cyclic Adversarial Learning for Domain Adaptation

https://arxiv.org/abs/1807.03026v1 Pioneer Networks: Progressively Growing Generative Autoencoder
We propose the Progressively Growing Generative Autoencoder (PIONEER) network, which achieves high-quality reconstruction with 128×128 images without requiring a GAN discriminator.

https://arxiv.org/abs/1807.04720 The GAN Landscape: Losses, Architectures, Regularization, and Normalization

https://arxiv.org/abs/1807.09295 Improved Training with Curriculum GANs

In this paper we introduce Curriculum GANs, a curriculum learning strategy for training Generative Adversarial Networks that increases the strength of the discriminator over the course of training, thereby making the learning task progressively more difficult for the generator. We demonstrate that this strategy is key to obtaining state-of-the-art results in image generation. We also show evidence that this strategy may be broadly applicable to improving GAN training in other data modalities.
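
One generic way to instantiate such a curriculum (a sketch under assumptions; the paper constructs its curriculum in its own way, this is just the simplest version of the idea) is to increase the number of discriminator updates per generator update over training:

<code python>
# Illustrative curriculum on discriminator strength: few discriminator
# updates early (weak critic), more updates late (strong critic).
def d_steps_at(iteration, total_iters, min_steps=1, max_steps=5):
    """Linearly interpolate discriminator steps per generator step."""
    frac = min(iteration / total_iters, 1.0)
    return min_steps + round(frac * (max_steps - min_steps))

# Skeleton of the training loop; train_discriminator/train_generator
# stand in for the usual GAN update steps (hypothetical helpers).
# for it in range(total_iters):
#     for _ in range(d_steps_at(it, total_iters)):
#         train_discriminator()
#     train_generator()
</code>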

https://lilianweng.github.io/lil-log/2017/08/20/from-GAN-to-WGAN.html From GAN to WGAN

https://arxiv.org/abs/1809.02145v1 GANs beyond divergence minimization

These results suggest that GANs do not conform well to the divergence minimization theory and form a much broader range of models than previously assumed.

https://openreview.net/pdf?id=B1xsqj09Fm Large Scale GAN Training for High Fidelity Natural Image Synthesis

https://arxiv.org/abs/1810.09136v1 Do Deep Generative Models Know What They Don't Know?

https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8520899 https://github.com/ToniCreswell/InvertingGAN Inverting the Generator of a Generative Adversarial Network
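
The core idea of generator inversion is to recover, for a target image, the latent code that best reproduces it, by gradient descent on the latent rather than on the network weights. A minimal PyTorch sketch (assuming some pretrained generator G mapping latents to images; the names and hyperparameters here are illustrative, not the paper's):

<code python>
# Invert a generator: find z such that G(z) approximates a target image x.
import torch
import torch.nn.functional as F

def invert_generator(G, x, z_dim=100, steps=500, lr=0.01):
    """Gradient descent on the latent code z, with G's weights frozen."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(G(z), x)  # reconstruction error in pixel space
        loss.backward()
        opt.step()
    return z.detach()
</code>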

https://arxiv.org/abs/1811.03259 Bias and Generalization in Deep Generative Models: An Empirical Study

https://arxiv.org/abs/1807.06358v2 IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis

https://openreview.net/pdf?id=HJxB5sRcFQ LayoutGAN: Generating Graphic Layouts with Wireframe Discriminators