Curriculum Training

Aliases

Intent

Train the network on the easiest examples first, gradually increasing the difficulty.

Motivation

How can we speed up training?

Sketch

This section provides alternative descriptions of the pattern in the form of an illustration or an alternative formal expression. By looking at the sketch, a reader can quickly grasp the essence of the pattern.

Discussion

This is the main section of the pattern, which explains the pattern in greater detail. We leverage the vocabulary described in the theory section of this book. Rather than providing detailed proofs, we reference their sources. This section expounds on how the motivation is addressed. We also include additional questions that may be interesting topics for future research.

Known Uses

Here we review several projects or papers that have used this pattern.

Related Patterns

In this section we describe in a diagram how this pattern is conceptually related to other patterns. The relationships may be precise or fuzzy, so we provide further explanation of the nature of each relationship. We also describe other patterns that may not be conceptually related but work well in combination with this pattern.

Relationship to Canonical Patterns

Relationship to other Patterns

We provide here some additional external material that will help in exploring this pattern in more detail.

References

To aid the reader, we include the sources referenced in the text of this pattern.

http://www.deeplearningbook.org/contents/optimization.html 8.7.6 Continuation Methods and Curriculum Learning

As argued in Sec. 8.2.7, many of the challenges in optimization arise from the global structure of the cost function and cannot be resolved merely by making better estimates of local update directions. The predominant strategy for overcoming this problem is to attempt to initialize the parameters in a region that is connected to the solution by a short path through parameter space that local descent can discover.

The order in which you present examples makes a difference: where you start orients the search toward a particular solution. Continuation methods construct a family of objectives that gradually transforms from an easy problem into the true one, tracking the minimum as you move forward.
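A minimal sketch of the basic recipe (the staging scheme and the length-as-difficulty proxy are illustrative assumptions, not taken from any particular paper below):

```python
import random

def curriculum_batches(examples, difficulty, n_stages=3, batch_size=2, seed=0):
    """Yield batches easiest-first: the candidate pool grows by one
    difficulty stage at a time, so early batches hold only easy examples."""
    rng = random.Random(seed)
    ordered = sorted(examples, key=difficulty)
    stage_size = max(1, len(ordered) // n_stages)
    for stage in range(1, n_stages + 1):
        pool = ordered[:stage * stage_size]      # easy prefix of the data
        rng.shuffle(pool)
        for i in range(0, len(pool), batch_size):
            yield pool[i:i + batch_size]

# Toy run: "difficulty" is just string length.
data = ["a", "bb", "ccc", "dddd", "eeeee", "ffffff"]
batches = list(curriculum_batches(data, difficulty=len))
```

Early examples are revisited in later stages, so the model is not starved of easy data once hard data arrives.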

http://arxiv.org/pdf/1511.06343v3.pdf Online Batch Selection for Faster Training of Neural Networks

We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank.
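A sketch of that rank-based sampling rule (the decay constant `s` and the function name are assumptions for illustration):

```python
import math

def rank_based_probs(losses, s=1e2):
    """Selection probability decays exponentially with loss rank:
    p_i is proportional to exp(-log(s) * rank_i / N), so the
    highest-loss example is s times likelier to be picked than the
    lowest-loss one."""
    n = len(losses)
    order = sorted(range(n), key=lambda i: -losses[i])  # rank 0 = largest loss
    weights = [0.0] * n
    for rank, i in enumerate(order):
        weights[i] = math.exp(-math.log(s) * rank / n)
    total = sum(weights)
    return [w / total for w in weights]

# The example with the largest latest-known loss gets the most mass.
probs = rank_based_probs([0.1, 2.0, 0.5])
```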

Self-Paced Learning: an Implicit Regularization Perspective

Self-paced learning (SPL) mimics the cognitive mechanism of humans and animals that gradually learn from easy to hard samples. One key issue in SPL is to obtain a better weighting strategy, which is determined by the minimizer functions. Existing methods usually pursue this by artificially designing the explicit form of regularizers. In this paper, we focus on the minimizer functions and study a group of new regularizers, named self-paced implicit regularizers, that are derived from convex conjugacy. Based on the multiplicative form of half-quadratic optimization, minimizer functions induced by convex and non-convex functions are developed for the implicit regularizers, and a general framework for SPL (named SPL-IR) is developed accordingly. We further analyze the relation between SPL-IR and half-quadratic optimization. We apply SPL-IR to matrix factorization and multi-view clustering. Experimental results on both synthetic and real-world databases corroborate our ideas and demonstrate the effectiveness of implicit regularizers.

(Table: recently proposed self-paced regularizers g(v, λ) and their corresponding minimizer functions v∗(λ, ℓ).)

http://ronan.collobert.com/pub/matos/2009_curriculum_icml.pdf Curriculum Learning

Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones. Here, we formalize such training strategies in the context of machine learning, and call them “curriculum learning”. In the context of recent research studying the difficulty of training in the presence of non-convex training criteria (for deep deterministic and stochastic neural networks), we explore curriculum learning in various set-ups. The experiments show that significant improvements in generalization can be achieved. We hypothesize that curriculum learning has both an effect on the speed of convergence of the training process to a minimum and, in the case of non-convex criteria, on the quality of the local minima obtained: curriculum learning can be seen as a particular form of continuation method (a general strategy for global optimization of non-convex functions).

http://arxiv.org/abs/1608.04980v1 Mollifying Networks

Our proposition is inspired by the recent studies in continuation methods: similar to curriculum methods, we begin learning an easier (possibly convex) objective function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, objective function. The complexity of the mollified networks is controlled by a single hyperparameter which is annealed during the training. We show improvements on various difficult optimization tasks and establish a relationship with recent works on continuation methods for neural networks and mollifiers.

A sequence of optimization problems of increasing complexity, where the first ones are easy to solve but only the last one corresponds to the actual problem of interest. It is possible to tackle the problems in order, starting each time at the solution of the previous one and tracking the local minima along the way.
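A toy 1-D illustration of this warm-starting scheme (the blended-objective family, the specific non-convex target, and the annealing schedule are all illustrative assumptions):

```python
import math

def continuation_minimize(grad_easy, grad_hard, x0, ts, lr=0.1, steps=100):
    """Anneal from an easy (convex) objective to the hard one,
    warm-starting gradient descent at each stage from the previous
    stage's minimizer."""
    x = x0
    for t in ts:                       # t=0: easy only, t=1: hard only
        for _ in range(steps):
            g = (1 - t) * grad_easy(x) + t * grad_hard(x)
            x -= lr * g
    return x

# Hard non-convex target f(x) = x^2/2 + 2*cos(2x); its convex
# surrogate is just x^2/2 (the oscillating term smoothed away).
grad_easy = lambda x: x
grad_hard = lambda x: x - 4 * math.sin(2 * x)
x_star = continuation_minimize(grad_easy, grad_hard, x0=3.0,
                               ts=[0.0, 0.25, 0.5, 0.75, 1.0])
```

Each stage starts at the previous stage's minimizer and tracks it as the objective deforms; here the final iterate lands on a minimum of the hard objective near x ≈ 1.39 with f(x) < 0.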

(Figure: top, stochastic depth; bottom, mollifying network. The dashed line represents the optional residual connection. In the top path the input is processed by a convolutional block followed by a noisy activation function, while in the bottom path the original activation of layer l − 1 is propagated untouched. For each unit, one of the two paths is picked according to a binary stochastic decision π.)

http://openreview.net/pdf?id=r1IRctqxg SAMPLE IMPORTANCE IN TRAINING DEEP NEURAL NETWORKS

We found that “easy” samples – samples that are correctly and confidently classified at the end of the training – shape parameters closer to the output, while the “hard” samples impact parameters closer to the input to the network. Further, “easy” samples are relevant in the early training stages, and “hard” in the late training stage. Further, we show that constructing batches which contain samples of comparable difficulties tends to be a poor strategy compared to maintaining a mix of both hard and easy samples in all of the batches. Interestingly, this contradicts some of the results on curriculum learning which suggest that ordering training examples in terms of difficulty can lead to better performance.

Our experiments show that it is important to mix hard samples into different batches rather than keep them together in the same batch and away from other examples.
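A sketch of that batch-construction heuristic (round-robin dealing is one simple way to guarantee every batch spans the difficulty range; the scheme is an assumption, not the paper's exact procedure):

```python
import random

def mixed_batches(examples, difficulty, batch_size=4, seed=0):
    """Interleave easy and hard samples so every batch spans the
    full difficulty range, per the finding that homogeneous-difficulty
    batches train worse."""
    rng = random.Random(seed)
    ordered = sorted(examples, key=difficulty)
    # Deal sorted examples round-robin into batches: batch i gets
    # items i, i + n_batches, i + 2*n_batches, ... across the spectrum.
    n_batches = (len(ordered) + batch_size - 1) // batch_size
    batches = [ordered[i::n_batches] for i in range(n_batches)]
    for b in batches:
        rng.shuffle(b)
    return batches

data = list(range(16))            # difficulty = the value itself
batches = mixed_batches(data, difficulty=lambda x: x)
```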

http://openreview.net/pdf?id=BJAFbaolg LEARNING TO GENERATE SAMPLES FROM NOISE THROUGH INFUSION TRAINING

We presented a new training procedure that allows a neural network to learn a transition operator of a Markov chain. Compared to previously proposed methods of (Sohl-Dickstein et al., 2015) based on inverting a slow diffusion process, we showed empirically that infusion training requires far fewer denoising steps, and appears to provide more accurate models.

https://arxiv.org/abs/1611.03068 Incremental Sequence Learning

We study incremental learning in the context of sequence learning, using generative RNNs in the form of multi-layer recurrent Mixture Density Networks. While the potential of incremental or curriculum learning to enhance learning is known, indiscriminate application of the principle does not necessarily lead to improvement, and it is essential therefore to know which forms of incremental or curriculum learning have a positive effect. This research contributes to that aim by comparing three instantiations of incremental or curriculum learning.

https://arxiv.org/abs/1702.08635v1 Learning What Data to Learn

In this paper, we propose a deep reinforcement learning framework, which we call Neural Data Filter (NDF), to explore automatic and adaptive data selection in the training process. In particular, NDF takes advantage of a deep neural network to adaptively select and filter important data instances from a sequential stream of training data, such that the future accumulative reward (e.g., the convergence speed) is maximized. In contrast to previous studies in data selection that are mainly based on heuristic strategies, NDF is quite generic and thus can be widely suitable for many machine learning tasks. Taking neural network training with stochastic gradient descent (SGD) as an example, comprehensive experiments with respect to various neural network modeling (e.g., multi-layer perceptron networks, convolutional neural networks and recurrent neural networks) and several applications (e.g., image classification and text understanding) demonstrate that NDF powered SGD can achieve comparable accuracy with standard SGD process by using less data and fewer iterations.

https://arxiv.org/pdf/1702.08653v1.pdf Scaffolding Networks for Teaching and Learning to Comprehend

In scaffolding teaching, students are gradually asked questions to build background knowledge, clear up confusions, learn to be attentive, and improve comprehension. Inspired by this approach, we explore methods for teaching machines to learn to reason over text documents through asking questions about the past information. We address three key challenges in teaching and learning to reason: 1) the need for an effective architecture that learns from the information in text and keeps it in memory; 2) the difficulty of self-assessing what is learned at any given point and what is left to be learned; 3) the difficulty of teaching reasoning in a scalable way. To address the first challenge, we present the Scaffolding Network, an attention-based neural network agent that can reason over a dynamic memory. It learns a policy using reinforcement learning to incrementally register new information about concepts and their relations. For the second challenge, we describe a question simulator as part of the scaffolding network that learns to continuously question the agent about the information processed so far. Through questioning, the agent learns to correctly answer as many questions as possible. For the last challenge, we explore training with reduced annotated data.

https://arxiv.org/abs/1512.08562v3 Taming the Noise in Reinforcement Learning via Soft Updates

Model-free reinforcement learning algorithms, such as Q-learning, perform poorly in the early stages of learning in noisy environments, because much effort is spent unlearning biased estimates of the state-action value function. The bias results from selecting, among several noisy estimates, the apparent optimum, which may actually be suboptimal. We propose G-learning, a new off-policy learning algorithm that regularizes the value estimates by penalizing deterministic policies in the beginning of the learning process. We show that this method reduces the bias of the value-function estimation, leading to faster convergence to the optimal value and the optimal policy. Moreover, G-learning enables the natural incorporation of prior domain knowledge, when available. The stochastic nature of G-learning also makes it avoid some exploration costs, a property usually attributed only to on-policy algorithms. We illustrate these ideas in several examples, where G-learning results in significant improvements of the convergence rate and the cost of the learning process.

https://arxiv.org/abs/1704.03003v1 Automated Curriculum Learning for Neural Networks

We introduce a method for automatically selecting the path, or syllabus, that a neural network follows through a curriculum so as to maximise learning efficiency. A measure of the amount that the network learns from each data sample is provided as a reward signal to a nonstationary multi-armed bandit algorithm, which then determines a stochastic syllabus. We consider a range of signals derived from two distinct indicators of learning progress: rate of increase in prediction accuracy, and rate of increase in network complexity. Experimental results for LSTM networks on three curricula demonstrate that our approach can significantly accelerate learning, in some cases halving the time required to attain a satisfactory performance level.
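A simplified sketch of this idea (an Exp3-style bandit over tasks; the paper uses the Exp3.S variant with further details, so treat the hyperparameters and update rule here as assumptions):

```python
import math
import random

class Exp3Syllabus:
    """Nonstationary multi-armed bandit over tasks: arms are tasks,
    the reward is a learning-progress signal (e.g. drop in loss),
    and sampling follows exponentially weighted probabilities with
    a small uniform-exploration floor."""
    def __init__(self, n_tasks, eta=0.1, eps=0.05, seed=0):
        self.w = [0.0] * n_tasks
        self.eta, self.eps = eta, eps
        self.rng = random.Random(seed)

    def probs(self):
        m = max(self.w)                         # subtract max for stability
        e = [math.exp(wi - m) for wi in self.w]
        z, n = sum(e), len(self.w)
        return [(1 - self.eps) * ei / z + self.eps / n for ei in e]

    def sample(self):
        p = self.probs()
        return self.rng.choices(range(len(p)), weights=p)[0]

    def update(self, task, progress):
        p = self.probs()
        self.w[task] += self.eta * progress / p[task]  # importance-weighted

# Toy check: task 1 always yields learning progress, task 0 never does.
bandit = Exp3Syllabus(2)
for _ in range(200):
    t = bandit.sample()
    bandit.update(t, 1.0 if t == 1 else 0.0)
```

The syllabus concentrates on whichever task currently yields the most progress, while the exploration floor keeps revisiting the others in case they become learnable later.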

https://arxiv.org/pdf/1705.06366v1.pdf Automatic Goal Generation for Reinforcement Learning Agents

We propose a new paradigm in reinforcement learning where the objective is to train a single policy to succeed on a variety of goals, under sparse rewards. To solve this problem we develop a method for automatic curriculum generation that dynamically adapts to the current performance of the agent. The curriculum is obtained without any prior knowledge of the environment or of the tasks being performed. We use generative adversarial training to automatically generate goals for our policy that are always at the appropriate level of difficulty (i.e. not too hard and not too easy).

https://arxiv.org/abs/1707.00183v1 Teacher-Student Curriculum Learning

We describe a family of Teacher algorithms that rely on the intuition that the Student should practice more those tasks on which it makes the fastest progress, i.e. where the slope of the learning curve is highest. In addition, the Teacher algorithms address the problem of forgetting by also choosing tasks where the Student's performance is getting worse.
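A minimal sketch of that selection rule (the windowed-slope estimator and tie-breaking are assumptions):

```python
def pick_task(score_histories, window=3):
    """Pick the task whose recent learning curve has the steepest
    absolute slope: fast progress and fast forgetting both attract
    more practice, per Teacher-Student Curriculum Learning."""
    def slope(history):
        recent = history[-window:]
        if len(recent) < 2:
            return float("inf")       # unexplored tasks get priority
        return (recent[-1] - recent[0]) / (len(recent) - 1)
    return max(range(len(score_histories)),
               key=lambda i: abs(slope(score_histories[i])))

histories = [
    [0.9, 0.9, 0.9],    # task 0: plateaued
    [0.2, 0.4, 0.6],    # task 1: improving fast
    [0.8, 0.75, 0.7],   # task 2: slowly being forgotten
]
```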

https://arxiv.org/abs/1707.05300v1 Reverse Curriculum Generation for Reinforcement Learning

We propose a method to learn these tasks without requiring any prior task knowledge other than obtaining a single state in which the task is achieved. The robot is trained in “reverse”, gradually learning to reach the goal from a set of starting positions increasingly far from the goal. Our method automatically generates a curriculum of starting positions that adapts to the agent's performance, leading to efficient training on such tasks. We demonstrate our approach on difficult simulated fine-grained manipulation problems, not solvable by state-of-the-art reinforcement learning methods.
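A toy 1-D sketch of this start-state expansion (the corridor environment, the success-rate model, and the [lo, hi] difficulty band are illustrative assumptions; the paper grows starts with short rollouts in a real environment):

```python
import random

def grow_starts(starts, success_rate, step=1, n_new=20,
                lo=0.1, hi=0.9, seed=0):
    """Reverse curriculum: propose new start states by short random
    walks outward from known-good starts, keeping only those of
    intermediate difficulty (success rate within [lo, hi])."""
    rng = random.Random(seed)
    proposals = [s + rng.choice([-step, step])
                 for s in rng.choices(starts, k=n_new)]
    kept = [s for s in proposals if lo <= success_rate(s) <= hi]
    return sorted(set(starts) | set(kept))

# Toy environment: goal at position 0; success falls off with distance.
success = lambda s: max(0.0, 1.0 - 0.2 * abs(s))
frontier = [0]
for _ in range(5):
    frontier = grow_starts(frontier, success)
```

The frontier creeps outward from the goal but never admits starts the agent cannot yet solve (here, positions beyond ±4).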

https://arxiv.org/abs/1707.06742v2 Machine Teaching: A New Paradigm for Building Machine Learning Systems

While machine learning focuses on creating new algorithms and improving the accuracy of “learners”, the machine teaching discipline focuses on the efficacy of the “teachers”. Machine teaching as a discipline is a paradigm shift that follows and extends principles of software engineering and programming languages. We put a strong emphasis on the teacher and the teacher's interaction with data, as well as crucial components such as techniques and design principles of interaction and visualization.

https://arxiv.org/pdf/1707.08616v1.pdf Guiding Reinforcement Learning Exploration Using Natural Language

In this work we present a technique to use natural language to help reinforcement learning generalize to unseen environments. This technique uses neural machine translation to learn associations between natural language behavior descriptions and state-action information. We then use this learned model to guide agent exploration to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, under ideal and non-ideal conditions. This evaluation shows that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping.

https://arxiv.org/abs/1709.06030v1 N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning

Our approach takes a larger 'teacher' network as input and outputs a compressed 'student' network derived from the 'teacher' network. In the first stage of our method, a recurrent policy network aggressively removes layers from the large 'teacher' model. In the second stage, another recurrent policy network carefully reduces the size of each remaining layer. The resulting network is then evaluated to obtain a reward – a score based on the accuracy and compression of the network.

https://arxiv.org/abs/1709.06009 Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents

https://arxiv.org/ftp/arxiv/papers/1709/1709.08761.pdf Image similarity using Deep CNN and Curriculum Learning

Image similarity involves fetching similar-looking images given a reference image. Our solution, called SimNet, is a deep siamese network which is trained on pairs of positive and negative images using a novel online pair-mining strategy inspired by curriculum learning. We also created a multi-scale CNN, where the final image embedding is a joint representation of top-layer as well as lower-layer embeddings. We go on to show that this multi-scale siamese network is better at capturing fine-grained image similarities than traditional CNNs.

https://arxiv.org/abs/1710.05381 A systematic study of the class imbalance problem in convolutional neural networks

Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that totally eliminates the imbalance, whereas undersampling can perform better when the imbalance is only removed to some extent; (iv) as opposed to some classical machine learning models, oversampling does not necessarily cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when overall number of properly classified cases is of interest.
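A sketch of oversampling to full balance, as recommended in point (iii) (the function name is an assumption; a real pipeline would use these indices to sample an image dataset):

```python
import random
from collections import Counter

def oversample(labels, seed=0):
    """Random oversampling to full balance: duplicate minority-class
    indices until every class matches the largest class's count."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    target = max(len(ix) for ix in by_class.values())
    out = []
    for ix in by_class.values():
        out += ix + rng.choices(ix, k=target - len(ix))
    rng.shuffle(out)
    return out

labels = [0] * 8 + [1] * 2        # imbalanced toy labels
idx = oversample(labels)          # balanced list of sample indices
```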

https://arxiv.org/pdf/1710.11469.pdf Guarding Against Adversarial Domain Shifts with Counterfactual Regularization

https://arxiv.org/abs/1711.02301 Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?

These games have a number of appealing features: they are challenging for current learning approaches, but they form (i) a low-dimensional, simply parametrized environment where (ii) there is a linear closed form solution for optimal behavior from any state, and (iii) the difficulty of the game can be tuned by changing environment parameters in an interpretable way.

https://arxiv.org/abs/1711.00694v1 Interpretable and Pedagogical Examples

http://www.marcgbellemare.info/static/publications/graves17curiosity.pdf Automated Curriculum Learning for Neural Networks

http://bair.berkeley.edu/blog/2017/12/20/reverse-curriculum/ Reverse Curriculum Generation for Reinforcement Learning Agents

https://arxiv.org/abs/1802.06604 Learning High-level Representations from Demonstrations

https://arxiv.org/pdf/1803.02811.pdf Accelerated Methods for Deep Reinforcement Learning

We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm.

Our contribution is a framework for parallelized deep RL including novel techniques for GPU acceleration. Within this framework, we demonstrate multi-GPU versions of the following algorithms: Advantage Actor-Critic (Mnih et al., 2016), Proximal Policy Optimization (PPO) (Schulman et al., 2017), DQN (Mnih et al., 2015), Categorical DQN (Bellemare et al., 2017), and Rainbow (Hessel et al., 2017). Our target hardware is an NVIDIA DGX-1, which contains 40 CPU cores and 8 P100 GPUs.

We found that highly parallel sampling using batched inferences can accelerate experiment turn-around time of all algorithms without hindering training. We further found that neural networks can learn using batch sizes considerably larger than are standard, without harming sample complexity or final game score, and this dramatically speeds up learning.

https://arxiv.org/abs/1607.08723v4 Cognitive Science in the era of Artificial Intelligence: A roadmap for reverse-engineering the infant language-learner

We argue that instead of defining a sub-problem or simplifying the data, computational models should address the full complexity of the learning situation, and take as input the raw sensory signals available to infants. This implies that (1) accessible but privacy-preserving repositories of home data be set up and widely shared; (2) models be evaluated at different linguistic levels through a benchmark of psycholinguistic tests that can be passed by machines and humans alike; and (3) linguistically and psychologically plausible learning architectures be scaled up to real data using probabilistic/optimization principles from machine learning. We discuss the feasibility of this approach and present preliminary results.

https://arxiv.org/abs/1803.03835 Kickstarting Deep Reinforcement Learning

We present a method for using previously-trained 'teacher' agents to kickstart the training of a new 'student' agent. To this end, we leverage ideas from policy distillation and population based training. Our method places no constraints on the architecture of the teacher or student agents, and it regulates itself to allow the students to surpass their teachers in performance. We show that, on a challenging and computationally-intensive multi-task benchmark (DMLab-30), kickstarted training improves the data efficiency of new agents, making it significantly easier to iterate on their design. We also show that the same kickstarting pipeline can allow a single student agent to leverage multiple 'expert' teachers which specialize on individual tasks. In this setting kickstarting yields surprisingly large gains, with the kickstarted agent matching the performance of an agent trained from scratch in almost 10x fewer steps, and surpassing its final performance by 42 percent. Kickstarting is conceptually simple and can easily be incorporated into reinforcement learning experiments.

https://arxiv.org/abs/1803.11347v2 Learning to Adapt: Meta-Learning for Model-Based Control

To enable sample-efficient meta-learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach trains a global model such that, when combined with recent data, the model can be rapidly adapted to the local context.

Through a combination of sample-efficient model-based learning and integration of off-policy data, our approach should be substantially more practical for real-world use than less efficient model-free meta-learning approaches, and the capability to adapt quickly is likely to be of particular importance under complex real-world dynamics.

We interpret this general idea of adapting models online as continuously fitting local models using the global model as a prior. In this work, we introduce two instantiations of this approach. The first is recurrence-based adaptive control (RBAC), where a recurrent model is trained to learn its own update rule, which decides how to use recent data to adapt to the task at hand. The second is gradient-based adaptive control (GBAC), which extends the model-agnostic meta-learning algorithm (MAML). GBAC optimizes for initial model parameters such that a gradient descent update rule on a batch of recent data leads to fast and effective adaptation.

https://arxiv.org/abs/1805.03643v1 Learning to Teach

The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution.

https://arxiv.org/abs/1806.04640 Unsupervised Meta-Learning: Learning how to learn without having to be told how to learn: Researchers with the University of California at Berkeley have made meta-learning more tractable by reducing the amount of work a researcher needs to do to set up a meta-learning system. Their new 'unsupervised meta-learning' (UML) approach lets their meta-learning agent automatically acquire distributions of tasks which it can subsequently perform meta-learning over. This deals with one drawback of meta-learning, which is that it is typically down to the human designer to come up with a set of tasks for the algorithm to be trained on. They also show how to combine UML with other recently developed techniques like DIAYN (Diversity is all you need) for breaking environments down into collections of distinct tasks/states to train over.

Results: UML systems beat basic RL baselines on simulated 2D navigation and locomotion tasks. They also tend to obtain performance roughly equivalent to systems built with human-designed, tuned reward functions, suggesting that UML can successfully explore the problem space enough to devise good reward signals for itself.

Why it matters: Because the diversity of tasks we'd like AI to do is much larger than the number of tasks we can neatly specify via hand-written rules it's crucial we develop methods that can rapidly acquire information from new environments and use this information to attack new problems. Meta-learning is one particularly promising approach to dealing with this problem, and by removing another one of its more expensive dependencies (a human-curated task distribution) UML may help push things forward. “An interesting direction to study in future work is the extension of unsupervised meta-learning to domains such as supervised classification, which might hold the promise of developing new unsupervised learning procedures powered by meta-learning,” the researchers write.

https://arxiv.org/pdf/1805.09501.pdf AutoAugment: Learning Augmentation Policies from Data

Our key insight is to create a search space of data augmentation policies, evaluating the quality of a particular policy directly on the dataset of interest. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset.
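A sketch of the described policy structure (the operation names, probabilities, and magnitudes are stand-ins, not the learned policies from the paper):

```python
import random

# A policy is a list of sub-policies; each sub-policy is two
# (operation, probability, magnitude) triples applied in order.
POLICY = [
    [("rotate", 0.7, 15), ("shear_x", 0.3, 0.2)],
    [("translate_y", 0.5, 4), ("rotate", 0.9, 30)],
]

def augment(image, policy, ops, rng=None):
    """Pick one sub-policy at random per image, then apply each of
    its two operations with that operation's own probability."""
    rng = rng or random.Random()
    for name, prob, magnitude in rng.choice(policy):
        if rng.random() < prob:
            image = ops[name](image, magnitude)
    return image

# Stand-in "transforms" that just record what was applied.
ops = {name: (lambda im, m, n=name: im + [(n, m)])
       for name in ("rotate", "shear_x", "translate_y")}
out = augment([], POLICY, ops, rng=random.Random(0))
```

The search algorithm then scores whole policies by the validation accuracy of a network trained under them.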

https://arxiv.org/abs/1806.08065v1 Learning Cognitive Models using Neural Networks

In this paper, we propose Cognitive Representation Learner (CogRL), a novel framework to learn accurate cognitive models in ill-structured domains with no data and little to no human knowledge engineering. Our contribution is two-fold: firstly, we show that representations learnt using CogRL can be used for accurate automatic cognitive model discovery without using any student performance data in several ill-structured domains: Rumble Blocks, Chinese Character, and Article Selection. This is especially effective and useful in domains where an accurate human-authored cognitive model is unavailable or authoring a cognitive model is difficult. Secondly, for domains where a cognitive model is available, we show that representations learned through CogRL can be used to get accurate estimates of skill difficulty and learning rate parameters without using any student performance data.

See also: https://www.learning-theories.org/doku.php?id=learning_paradigms_and_theories

https://arxiv.org/abs/1806.10729v1 Procedural Level Generation Improves Generality of Deep Reinforcement Learning

The level generator generates levels whose difficulty slowly increases in response to the observed performance of the agent. https://github.com/njustesen/a2c_gvgai
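A minimal sketch of such performance-coupled difficulty adjustment (the target win rate, step size, and clamping are assumptions):

```python
def update_difficulty(d, win_rate, target=0.5, step=0.05, lo=0.0, hi=1.0):
    """Nudge level difficulty toward the agent's frontier: raise it
    while the agent succeeds more often than the target rate, lower
    it otherwise, clamped to [lo, hi]."""
    d += step if win_rate > target else -step
    return min(hi, max(lo, d))

# An agent that keeps winning sees difficulty ratchet upward.
d = 0.2
for _ in range(10):
    d = update_difficulty(d, win_rate=0.9)
```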

https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/ Learning Montezuma’s Revenge from a Single Demonstration

https://arxiv.org/abs/1807.03392v1 Evolving Multimodal Robot Behavior via Many Stepping Stones with the Combinatorial Multi-Objective Evolutionary Algorithm

We provide a thorough introduction and investigation of the Combinatorial Multi-Objective Evolutionary Algorithm (CMOEA), which avoids ordering subtasks by allowing all combinations of subtasks to be explored simultaneously.

https://arxiv.org/abs/1807.06919v1 Backplay: “Man muss immer umkehren”

http://sebastianrisi.com/wp-content/uploads/volz_gecco18.pdf Evolving Mario Levels in the Latent Space of a Deep Convolutional Generative Adversarial Network

https://psyarxiv.com/eh5b6/ A unifying computational framework for teaching and active learning

https://arxiv.org/abs/1807.09295 Improved Training with Curriculum GANs

In this paper we introduce Curriculum GANs, a curriculum learning strategy for training Generative Adversarial Networks that increases the strength of the discriminator over the course of training, thereby making the learning task progressively more difficult for the generator. We demonstrate that this strategy is key to obtaining state-of-the-art results in image generation. We also show evidence that this strategy may be broadly applicable to improving GAN training in other data modalities.

https://arxiv.org/abs/1808.00020v1 Online Adaptative Curriculum Learning for GANs

We formalize this problem within the non-stationary Multi-Armed Bandit (MAB) framework, where we evaluate the capability of a bandit algorithm to select discriminators for providing the generator with feedback during learning. To this end, we propose a reward function which reflects the amount of knowledge learned by the generator and dynamically selects the optimal discriminator network.

https://arxiv.org/abs/1807.10299 Variational Option Discovery Algorithms

We propose a curriculum learning approach where the number of contexts seen by the agent increases whenever the agent's performance is strong enough (as measured by the decoder) on the current set of contexts. We show that this simple trick stabilizes training for VALOR and prior variational option discovery methods, allowing a single agent to learn many more modes of behavior than it could with a fixed context distribution.

https://arxiv.org/abs/1808.04888 Skill Rating for Generative Models

We show that a tournament consisting of a single model playing against past and future versions of itself produces a useful measure of training progress.
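A sketch of such a tournament using Elo updates (the paper discusses skill-rating systems more generally; the Elo form, K-factor, and the toy win pattern here are assumptions):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo update: score_a is 1 if A beat B, 0 if it lost,
    0.5 for a draw. Updates are zero-sum."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Checkpoints of one model play each other; later checkpoints that
# reliably "win" (a judge prefers their samples) climb in rating,
# turning the tournament into a training-progress curve.
ratings = [1000.0] * 4                       # four training checkpoints
matches = [(i, j, 1.0) for j in range(4) for i in range(4) if i > j]
for i, j, s in matches:                      # later checkpoint i beats j
    ratings[i], ratings[j] = elo_update(ratings[i], ratings[j], s)
```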

https://arxiv.org/abs/1802.10567 Learning by Playing - Solving Sparse Reward Tasks from Scratch

We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors - from scratch - in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks, that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment - enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach.

https://openreview.net/forum?id=r1Gsk3R9Fm Shallow Learning For Deep Networks

Using a simple set of ideas for architecture and training, we find that solving sequential 1-hidden-layer auxiliary problems leads to a CNN that exceeds AlexNet performance on ImageNet.

https://openreview.net/pdf?id=SylLYsCcFm Learning to Make Analogies by Contrasting Abstract Relational Structure

Here, we study how analogical reasoning can be induced in neural networks that learn to perceive and reason about raw visual data. We find that the critical factor for inducing such a capacity is not an elaborate architecture, but rather, careful attention to the choice of data and the manner in which it is presented to the model. The most robust capacity for analogical reasoning is induced when networks learn analogies by contrasting abstract relational structures in their input domains, a training method that uses only the input data to force models to learn about important abstract features. Using this technique we demonstrate capacities for complex, visual and symbolic analogy making and generalisation in even the simplest neural network architectures.

https://openreview.net/forum?id=B1g-X3RqKm&noteId=B1g-X3RqKm A Proposed Hierarchy of Deep Learning Tasks

As the pace of deep learning innovation accelerates, it becomes increasingly important to organize the space of problems by relative difficultly. Looking to other fields for inspiration, we see analogies to the Chomsky Hierarchy in computational linguistics and time and space complexity in theoretical computer science.

https://arxiv.org/abs/1810.00597v1 Taming VAEs

We then introduce and analyze a practical algorithm termed Generalized ELBO with Constrained Optimization (GECO). The main advantage of GECO for the machine learning practitioner is a more intuitive, yet principled, process of tuning the loss. This involves defining a set of constraints, which typically have an explicit relation to the desired model performance, in contrast to tweaking abstract hyper-parameters which implicitly affect the model behavior.
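The constrained-tuning idea can be sketched as follows. This is a hedged simplification, not GECO itself: a Lagrange multiplier on the reconstruction term is adapted so that a stated constraint (reconstruction error at most some tolerance) is driven toward satisfaction, rather than hand-tuning a fixed loss weight. The update rule and names are assumptions for illustration.

```python
import math

def geco_step(lmbda, recon_error, tolerance, lr=0.1):
    """Multiplicatively raise the multiplier while the constraint
    (recon_error <= tolerance) is violated, and lower it once the
    constraint is satisfied (simplified from the paper)."""
    violation = recon_error - tolerance
    return lmbda * math.exp(lr * violation)
```

The practitioner-facing knob here is `tolerance`, which has a direct interpretation (acceptable reconstruction error), while the loss weight `lmbda` is adapted automatically during training.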

https://arxiv.org/abs/1810.08272v1 BabyAI: First Steps Towards Grounded Language Learning With a Human In the Loop

Allowing humans to interactively train artificial agents to understand language instructions is desirable for both practical and scientific reasons, but given the poor data efficiency of the current learning methods, this goal may require substantial research efforts. Here, we introduce the BabyAI research platform to support investigations towards including humans in the loop for grounded language learning. The BabyAI platform comprises an extensible suite of 19 levels of increasing difficulty. The levels gradually lead the agent towards acquiring a combinatorially rich synthetic language which is a proper subset of English. The platform also provides a heuristic expert agent for the purpose of simulating a human teacher. We report baseline results and estimate the amount of human involvement that would be required to train a neural network-based agent on some of the BabyAI levels. We put forward strong evidence that current deep learning methods are not yet sufficiently sample efficient when it comes to learning a language with compositional properties.
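The level progression described above is the essence of a curriculum loop, which can be sketched as follows. This is not the BabyAI API; `train_episode` and the level list are placeholders. The agent trains on the current level until a rolling success-rate threshold is met, then advances to the next, harder level.

```python
def run_curriculum(levels, train_episode, threshold=0.9, window=100):
    """Advance through `levels` (ordered easiest first) once the success
    rate over the last `window` episodes reaches `threshold`."""
    for level in levels:
        successes = []
        while True:
            # train_episode returns True on a successful episode.
            successes.append(train_episode(level))
            recent = successes[-window:]
            if len(recent) == window and sum(recent) / window >= threshold:
                break  # level mastered; move on to the next one
    return len(levels)
```

The heuristic expert agent mentioned in the abstract would plug in here as a teacher that supplies demonstrations or corrections within `train_episode`.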

CurriculumNet is a new training strategy able to train CNN models more efficiently on large-scale weakly-supervised web images, where no additional human annotation is provided. By leveraging the idea of curriculum learning, we propose a novel learning curriculum by measuring data complexity using cluster density. We show by experiments that the proposed approaches have strong capability for dealing with massive noisy labels. They not only reduce the negative effect of noisy labels, but also, notably, improve the model generalization ability by using the highly noisy data as a form of regularization. The proposed CurriculumNet achieved the state-of-the-art performance on the Webvision, ImageNet, Clothing-1M and Food-101 benchmarks. With an ensemble of multiple models, it obtained a Top 5 error of 5.2% on the Webvision Challenge 2017 (source). This result was the top performance by a wide margin, outperforming second place by a nearly 50% relative error rate.
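The cluster-density curriculum can be illustrated with a minimal sketch, assuming feature vectors are already extracted. This is not the paper's pipeline: density is approximated here by counting neighbours within a radius, with samples in dense regions treated as clean/easy and scheduled first, and sparse (likely noisy) samples deferred to later stages.

```python
import math

def density(point, points, radius=1.0):
    """Count neighbours of `point` within `radius` (excluding itself)."""
    return sum(
        1 for q in points
        if q is not point and math.dist(point, q) <= radius
    )

def curriculum_order(points, radius=1.0):
    """Sort samples from densest (easy/clean) to sparsest (hard/noisy)."""
    return sorted(points, key=lambda p: -density(p, points, radius))
```

In a full pipeline the stages would be trained in sequence, with the sparsest subset introduced last so that its label noise acts more like regularization than signal.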

https://arxiv.org/abs/1810.05762 GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning

https://arxiv.org/abs/1606.03476v1 Generative Adversarial Imitation Learning https://github.com/openai/imitation

https://github.com/xie9187/AsDDPG Learning with training wheels: Speeding up training with a simple controller for Deep Reinforcement Learning