https://arxiv.org/pdf/1703.02156v1.pdf On the Limits of Learning Representations with Label-Based Supervision

Will the representations learned from these generative methods ever rival the quality of those from their supervised competitors? In this work, we argue in the affirmative: from an information-theoretic perspective, generative models have greater potential for representation learning. Based on several experimentally validated assumptions, we show that supervised learning is upper bounded in its capacity for representation learning in ways that certain generative models, such as Generative Adversarial Networks (GANs), are not.
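
To make the claim concrete, here is a minimal sketch of the kind of information-theoretic bound being invoked, in my own notation (X for an input, Y for its K-class label, H for entropy, I for mutual information); the paper's formal argument is more involved than this one-liner.

```latex
% Hedged sketch of the underlying bound (my notation, not the paper's):
% a K-class label Y can supply at most H(Y) <= log2(K) bits of information
% about an example X, while a generative objective targets the (typically
% much larger) data entropy H(X).
\[
  I(X; Y) \;\le\; H(Y) \;\le\; \log_2 K \;\ll\; H(X)
\]
```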

https://arxiv.org/abs/1608.08984 Towards Competitive Classifiers for Unbalanced Classification Problems: A Study on the Performance Scores

Although great methodological effort has been invested in proposing competitive solutions to the class-imbalance problem, comparatively little has been done toward a theoretical understanding of the matter.

https://arxiv.org/abs/1703.08774v1 Who Said What: Modeling Individual Labelers Improves Classification

To make use of this extra information (each example carries the individual labels of several experts rather than a single aggregated label), we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data.
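
A minimal PyTorch sketch of the idea as I read it (not the paper's actual architecture; the class name, gating network, and dimensions are illustrative assumptions): one prediction head per labeler, plus a learned gate that produces sample-specific weights for averaging the experts.

```python
# Hedged sketch: a per-labeler ensemble with learned, sample-specific
# averaging weights. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class PerLabelerEnsemble(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, num_labelers: int):
        super().__init__()
        # One classification head per individual labeler.
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_labelers)]
        )
        # Gating network: maps features to one weight per labeler, so more
        # reliable experts can receive more weight on a given sample.
        self.gate = nn.Linear(feat_dim, num_labelers)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Per-labeler class probabilities: (batch, labelers, classes)
        per_expert = torch.stack(
            [head(features).softmax(dim=-1) for head in self.heads], dim=1
        )
        # Sample-specific averaging weights: (batch, labelers, 1)
        weights = self.gate(features).softmax(dim=-1).unsqueeze(-1)
        # Weighted average of the experts' predictions: (batch, classes)
        return (weights * per_expert).sum(dim=1)

# Usage: 64-dim features, 5 classes, 3 labelers, batch of 8.
model = PerLabelerEnsemble(feat_dim=64, num_classes=5, num_labelers=3)
probs = model(torch.randn(8, 64))  # shape: (8, 5)
```

Making the gate depend on the features is what allows the weighting to be sample-specific; a feature-independent gate would reduce to a single fixed weight per expert.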