https://arxiv.org/pdf/1704.07816v1.pdf Introspective Classifier Learning: Empower Generatively

In this paper we propose introspective classifier learning (ICL), which emphasizes the importance of having a discriminative classifier empowered with generative capabilities. We develop a reclassification-by-synthesis algorithm to perform training using a formulation stemming from Bayes' theorem. Our classifier iteratively: (1) synthesizes pseudo-negative samples in the synthesis step; and (2) enhances itself by improving its classification in the reclassification step. The single classifier learned is simultaneously generative: it can directly synthesize new samples within its own discriminative model. We conduct experiments on standard benchmark datasets including MNIST, CIFAR, and SVHN using state-of-the-art CNN architectures, and observe improved classification results.
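A minimal sketch of the reclassification-by-synthesis loop described above, assuming a binary CNN head that separates training data (positives) from pseudo-negatives and a Langevin-like gradient-ascent sampler; the network `SmallCNN`, the step sizes, and the number of rounds are illustrative placeholders, not the paper's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """Binary classifier: real data vs. pseudo-negatives."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def synthesize(model, n, shape=(1, 28, 28), steps=60, step_size=0.1):
    """Synthesis step: push noise toward the classifier's positive region
    by gradient ascent on its score (a Langevin-like update)."""
    x = torch.randn(n, *shape)
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad, = torch.autograd.grad(model(x).sum(), x)
        x = x + step_size * grad + 0.01 * torch.randn_like(x)
    return x.detach()


def reclassify(model, positives, pseudo_negatives, epochs=1, lr=1e-3):
    """Reclassification step: retrain to separate real data from the
    current pool of pseudo-negatives."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x = torch.cat([positives, pseudo_negatives])
    y = torch.cat([torch.ones(len(positives)), torch.zeros(len(pseudo_negatives))])
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.binary_cross_entropy_with_logits(model(x).squeeze(1), y)
        loss.backward()
        opt.step()


# Alternate the two steps for a few rounds (round count is illustrative).
model = SmallCNN()
positives = torch.randn(64, 1, 28, 28)  # stand-in for real training images
pool = torch.randn(64, 1, 28, 28)       # initial pseudo-negatives (noise)
for _ in range(3):
    reclassify(model, positives, pool)
    pool = torch.cat([pool, synthesize(model, 32)])
```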

https://arxiv.org/pdf/1704.07820v1.pdf Introspective Generative Modeling: Decide Discriminatively

We study unsupervised learning by developing introspective generative modeling (IGM), which obtains a generator through progressively learned deep convolutional neural networks. The generator is itself a discriminator, capable of introspection: it can self-evaluate the difference between its generated samples and the given training data. When followed by repeated discriminative learning, desirable properties of modern discriminative classifiers are directly inherited by the generator. IGM learns a cascade of CNN classifiers using a synthesis-by-classification algorithm. In the experiments, we observe encouraging results on a number of applications including texture modeling, artistic style transfer, face modeling, and semi-supervised learning.
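A minimal sketch of the cascade view in the IGM abstract, assuming each stage is a binary CNN classifier and that generation amounts to refining noise through every stage in sequence by gradient ascent on its score; the stage architecture, step sizes, and number of stages are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn


def refine(classifier, x, steps=40, step_size=0.1):
    """Move samples toward regions the stage scores as 'real'."""
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad, = torch.autograd.grad(classifier(x).sum(), x)
        x = x + step_size * grad
    return x.detach()


def sample_from_cascade(stages, n, shape=(1, 64, 64)):
    """Generation as a sequence of discriminative refinements: start from
    noise and pass through the cascade of learned classifiers."""
    x = torch.randn(n, *shape)
    for clf in stages:
        x = refine(clf, x)
    return x


def make_stage():
    """Tiny binary classifier standing in for a trained cascade stage."""
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))


samples = sample_from_cascade([make_stage() for _ in range(3)], n=4)
```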

https://arxiv.org/pdf/1711.08875v1.pdf Wasserstein Introspective Neural Networks

We present Wasserstein introspective neural networks (WINN), which act as both a generator and a discriminator within a single model. WINN reduces model size by a factor of 20 compared with previous introspective neural networks (INN).
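A minimal sketch of the Wasserstein-style critic objective that WINN adopts for its single introspective network, here written with a gradient penalty on interpolates; the critic `f`, the penalty weight, and the data tensors are placeholder assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn


def wasserstein_loss_with_gp(f, real, fake, gp_weight=10.0):
    """Critic loss: separate real data from self-generated pseudo-negatives
    under the Wasserstein distance, regularized by a gradient penalty."""
    loss = f(fake).mean() - f(real).mean()
    # Gradient penalty on interpolates between real and fake samples.
    eps = torch.rand(real.size(0), 1, 1, 1)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(f(mix).sum(), mix, create_graph=True)
    penalty = ((grad.flatten(1).norm(dim=1) - 1) ** 2).mean()
    return loss + gp_weight * penalty


# Tiny critic standing in for the single WINN network.
f = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
real = torch.randn(16, 1, 32, 32)
fake = torch.randn(16, 1, 32, 32)
loss = wasserstein_loss_with_gp(f, real, fake)
loss.backward()
```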