https://arxiv.org/pdf/1611.00201v1.pdf Towards Lifelong Self-Supervision: A Deep Learning Direction for Robotics

https://arxiv.org/pdf/1612.05596.pdf Event-driven Random Backpropagation: Enabling Neuromorphic Deep Learning Machines

https://arxiv.org/abs/1611.07725 iCaRL: Incremental Classifier and Representation Learning

A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time, and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on the CIFAR-100 and ImageNet ILSVRC 2012 datasets that iCaRL can learn many classes incrementally over a long period of time, whereas other strategies quickly fail.
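As a rough illustration of the class-incremental setting, the sketch below implements a nearest-mean-of-exemplars classifier, one component of iCaRL: each new class contributes only a small stored exemplar set, and classification assigns an input to the class with the closest exemplar mean. The jointly trained deep feature extractor, herding-based exemplar selection, and distillation loss of the full method are omitted, and all names here are placeholders.

```python
import numpy as np

class NearestMeanClassifier:
    """Minimal sketch of nearest-mean-of-exemplars classification.

    Inputs are treated as feature vectors directly; in iCaRL they would
    be produced by a deep network trained alongside the classifier.
    """

    def __init__(self):
        # Maps class label -> L2-normalized mean of that class's exemplars.
        self.class_means = {}

    def add_class(self, label, exemplar_features):
        """Register a new class from a small exemplar set (classes can be
        added one at a time, without revisiting earlier training data)."""
        mean = np.asarray(exemplar_features, dtype=float).mean(axis=0)
        self.class_means[label] = mean / np.linalg.norm(mean)

    def predict(self, x):
        """Assign x to the class whose exemplar mean is nearest."""
        x = np.asarray(x, dtype=float)
        x = x / np.linalg.norm(x)
        return min(self.class_means,
                   key=lambda c: np.linalg.norm(x - self.class_means[c]))

# Classes are added incrementally; old exemplar means stay fixed.
clf = NearestMeanClassifier()
clf.add_class(0, [[1.0, 0.0], [0.9, 0.1]])
clf.add_class(1, [[0.0, 1.0], [0.1, 0.9]])
```

Because only per-class exemplar means are stored, adding a class never requires retraining or re-accessing the data of earlier classes, which is what makes the scheme class-incremental.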

http://www.cs.cmu.edu/~tom/pubs/NELL_aaai15.pdf Never-Ending Learning

https://arxiv.org/abs/1805.06370 Progress & Compress: A scalable framework for continual learning

We introduce a conceptually simple and scalable framework for continual learning, where tasks are learned in sequence. Our method uses a constant number of parameters and is designed to preserve performance on previously encountered tasks while accelerating learning on subsequent ones. It trains two neural networks: a knowledge base, capable of solving previously encountered problems, and a connected active column that is used to learn the current task efficiently. After a new task is learned, the active column is distilled into the knowledge base, taking care to protect previously learnt tasks. This cycle of active learning (progression) followed by consolidation (compression) requires no architecture growth, no access to or storage of previous data, and no task-specific parameters. It is therefore a learning process that can be sustained over a lifetime of tasks while supporting forward transfer and minimising forgetting. We demonstrate the progress & compress approach on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
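The compression half of the cycle can be sketched as plain distillation: the knowledge base is trained to match the active column's output distribution on the current task's inputs. Below is a minimal numpy sketch using linear-softmax "columns"; the EWC-style penalty that Progress & Compress uses to protect earlier tasks during compression is omitted, and all names are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def compress(kb_W, active_W, inputs, lr=0.5, steps=200):
    """Distill the active column into the knowledge base.

    The knowledge base (student) is updated by gradient descent on the
    KL divergence from the active column's (teacher's) softmax outputs,
    evaluated on the current task's inputs. Both 'columns' are bare
    linear-softmax models here for brevity.
    """
    for _ in range(steps):
        for x in inputs:
            p_active = softmax(active_W @ x)  # teacher distribution
            p_kb = softmax(kb_W @ x)          # student distribution
            # Gradient of KL(p_active || p_kb) w.r.t. the student logits
            # is simply (p_kb - p_active).
            kb_W -= lr * np.outer(p_kb - p_active, x)
    return kb_W

# After 'progression' has trained the active column on a task,
# 'compression' transfers its behaviour into the knowledge base.
active_W = np.array([[2.0, 0.0], [0.0, 2.0]])
kb_W = compress(np.zeros((2, 2)), active_W,
                [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```

Note that only model outputs are matched, not stored data from earlier tasks; in the full method, a quadratic penalty on the knowledge-base parameters keeps the distillation step from overwriting what was consolidated before.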

http://www.pnas.org/content/early/2018/10/12/1800755115/ Comparing continual task learning in minds and machines

Here, we studied the patterns of errors made by humans and state-of-the-art neural networks while they learned new tasks from scratch and without instruction. Humans, but not machines, seem to benefit from training regimes that blocked one task at a time, especially when they had a prior bias to represent stimuli in a way that encouraged task separation. Machines trained to exhibit the same prior bias suffered less interference between tasks, suggesting new avenues for solving continual learning in artificial systems.
