Memory Patterns

"We do not know where to look, or what to look for, when something is memorized. We do not know what it means, or what change there is in the nervous system, when a fact is learned. This is a very important problem which has not been solved at all."

- Richard P. Feynman

Some history (and one way to classify these models):

Memory Networks — heavy supervision at each read/write step as well as on the final state.

End-to-End Memory Networks — less supervision: only the final output is supervised.

Key-Value Memory Networks — the memories are key-value pairs: the model predicts which keys to attend to and adds the corresponding values to the current state (see the read sketch after this list).

Forward Prediction Nets — predict the next future state (NIPS 2016).

Recurrent Entity Network — memories are not given but learned.
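
To make the key-value read concrete, here is a minimal numpy sketch (not tied to any particular implementation): the query is matched against the keys with a softmax, and the attention-weighted sum of the values is what gets added to the controller state.

```python
import numpy as np

def key_value_read(query, keys, values):
    """Attend over keys, return the weighted sum of values.
    query: (d,), keys: (n, d), values: (n, d)."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over the keys
    return weights @ values             # this readout is added to the state

# usage: q_next = q + key_value_read(q, K, V)  (often followed by a learned linear map)
```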

Note: From Schmidhuber NIPS 2016

Deep learning systems are universal approximators; nevertheless, current research explores different ways to improve recognition by incorporating memory elements, including:

Attention

Associative Memory

Recurrent Neural Networks

Differentiable Memory Access

Non-Differentiable Memory

Memory-Enhanced Convolution

Stack-Based Recursion

Episodic Memory

Replay

Recursion

Structured Memory

References

http://arxiv.org/abs/1410.3916 Memory Networks

Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.

http://arxiv.org/pdf/1606.03126v1.pdf Key-Value Memory Networks for Directly Reading Documents

https://research.facebook.com/publications/key-value-memory-networks-for-directly-reading-documents/

https://arxiv.org/pdf/1602.03032.pdf Associative Long Short-Term Memory

The system has an associative memory based on complex-valued vectors and is closely related to Holographic Reduced Representations and Long Short-Term Memory networks.
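
As a reference point for the associative-memory idea, here is a minimal numpy sketch of real-valued Holographic Reduced Representations: key-value pairs are bound by circular convolution, superimposed into a single trace, and retrieved by circular correlation. The Associative LSTM builds on a complex-valued variant of this; the code below only illustrates the underlying storage scheme, and all names are illustrative.

```python
import numpy as np

d = 1024
rng = np.random.default_rng(0)

def bind(a, b):
    # circular convolution: the HRR binding operator
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def unbind(trace, key):
    # circular correlation: approximate inverse of binding
    return np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(key))).real

# random key/value vectors with variance 1/d keep the trace well conditioned
keys = rng.normal(0, 1 / np.sqrt(d), size=(3, d))
vals = rng.normal(0, 1 / np.sqrt(d), size=(3, d))

# superimpose all bound pairs into one memory trace
trace = sum(bind(k, v) for k, v in zip(keys, vals))

# retrieving with key 0 returns a noisy copy of value 0
retrieved = unbind(trace, keys[0])
sims = vals @ retrieved / (np.linalg.norm(vals, axis=1) * np.linalg.norm(retrieved))
print(sims)  # the first similarity should be clearly the largest
```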

http://arxiv.org/pdf/1507.01526v3.pdf Grid Long Short-Term Memory (Grid LSTM)

http://arxiv.org/pdf/1511.06392v3.pdf NEURAL RANDOM-ACCESS MACHINES

Memory Networks [Weston et al. 2014] (FAIR) — associative memory

Stack-Augmented Recurrent Nets [Joulin & Mikolov 2015] (FAIR)

http://arxiv.org/abs/1506.07285 Ask Me Anything: Dynamic Memory Networks for Natural Language Processing

We introduce the dynamic memory network (DMN), a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the inputs and the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state-of-the-art results on several types of tasks and datasets: question answering (Facebook's bAbI dataset), text classification for sentiment analysis (Stanford Sentiment Treebank) and sequence modeling for part-of-speech tagging (WSJ-PTB). The training for these different tasks relies exclusively on trained word vector representations and input-question-answer triplets.

https://deepmind.com/blog/differentiable-neural-computers/ Hybrid computing using a neural network with dynamic external memory Full paper: http://www.nature.com/articles/nature20101.epdf?author_access_token=ImTXBI8aWbYxYQ51Plys8NRgN0jAjWel9jnR3ZoTv0MggmpDmwljGswxVdeocYSurJ3hxupzWuRNeGvvXnoO8o4jTJcnAyhGuZzXJ1GEaD-Z7E6X_a9R-xqJ9TfJWBqz

http://papers.nips.cc/paper/5648-learning-to-transduce-with-unbounded-memory.pdf Learning to Transduce with Unbounded Memory

These experiments lead us to propose new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as Stacks, Queues, and DeQues. We show that these architectures exhibit superior generalisation performance to Deep RNNs and are often able to learn the underlying generating algorithms in our transduction experiments.
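
The continuous stack from this paper can be sketched in a few lines. The version below shows only the forward pass (pop, push, read) with scalar strengths, assuming a controller elsewhere produces the value and the push/pop strengths at each step; training details are omitted.

```python
import numpy as np

class NeuralStack:
    """Continuous stack in the spirit of 'Learning to Transduce with
    Unbounded Memory': push/pop strengths in [0, 1] make the structure
    differentiable (the min/max operations are subdifferentiable)."""

    def __init__(self, dim):
        self.values = np.zeros((0, dim))   # one row per pushed vector
        self.strengths = np.zeros(0)       # how much of each row remains

    def step(self, value, push, pop):
        # pop: remove `pop` units of strength from the top downward
        s = self.strengths.copy()
        remaining = pop
        for i in reversed(range(len(s))):
            removed = min(s[i], remaining)
            s[i] -= removed
            remaining -= removed
        # push the new value with strength `push`
        self.values = np.vstack([self.values, value])
        self.strengths = np.append(s, push)
        # read: blend the topmost vectors until one unit of strength is used
        read = np.zeros(self.values.shape[1])
        budget = 1.0
        for i in reversed(range(len(self.strengths))):
            w = min(self.strengths[i], budget)
            read += w * self.values[i]
            budget -= w
            if budget <= 0:
                break
        return read
```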

https://arxiv.org/abs/1503.01007 Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets

http://distill.pub/2016/augmented-rnns/ Attention and Augmented Recurrent Neural Networks

https://arxiv.org/abs/1610.06258 Using Fast Weights to Attend to the Recent Past

https://arxiv.org/abs/1610.06402 A Growing Long-term Episodic & Semantic Memory

To address this, we describe a lifelong learning system that leverages a fast, though non-differentiable, content-addressable memory which can be exploited to encode both a long history of sequential episodic knowledge and semantic knowledge over many episodes for an unbounded number of domains. This opens the door for investigation into transfer learning, and leveraging prior knowledge that has been learned over a lifetime of experiences to new domains.

http://openreview.net/pdf?id=S1yTEt9ex LEARNING TO REMEMBER RARE EVENTS

The module exploits fast nearest-neighbor algorithms for efficiency and thus scales to large memory sizes. Except for the nearest-neighbor query, the module is fully differentiable and trained end-to-end with no extra supervision. It operates in a life-long manner, i.e., without the need to reset it during training.

The memory module presented above can be added to any classification network. There are two main choices: which layer to use to generate queries, and how to use the output of the module.
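
A rough numpy sketch of such a module, assuming the query is the L2-normalized activation of whichever layer was chosen and the stored values are class labels. The exact slot-replacement and key-update rules differ in the paper; this only shows the nearest-neighbour query and a simple write.

```python
import numpy as np

class KNNMemory:
    """Key-value episodic memory queried by nearest neighbour."""

    def __init__(self, size, dim, k=4):
        self.keys = np.random.randn(size, dim)
        self.keys /= np.linalg.norm(self.keys, axis=1, keepdims=True)
        self.labels = -np.ones(size, dtype=int)   # -1 marks an empty slot
        self.ages = np.zeros(size, dtype=int)
        self.k = k

    def query(self, q):
        q = q / np.linalg.norm(q)
        sims = self.keys @ q                      # cosine similarity to all keys
        top = np.argsort(-sims)[: self.k]
        return self.labels[top], sims[top]

    def write(self, q, label):
        q = q / np.linalg.norm(q)
        nearest = int(np.argmax(self.keys @ q))
        if self.labels[nearest] == label:
            # correct match: merge the key with the query and renormalize
            merged = self.keys[nearest] + q
            self.keys[nearest] = merged / np.linalg.norm(merged)
            self.ages[nearest] = 0
        else:
            # otherwise overwrite the oldest slot with the new pair
            oldest = int(np.argmax(self.ages))
            self.keys[oldest], self.labels[oldest], self.ages[oldest] = q, label, 0
        self.ages += 1
```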

http://openreview.net/pdf?id=BkSmc8qll DYNAMIC NEURAL TURING MACHINE WITH CONTINUOUS AND DISCRETE ADDRESSING SCHEMES

The D-NTM introduces a trainable memory addressing scheme that maintains, for each memory cell, two separate vectors: a content vector and an address vector. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. The D-NTM is implemented with both continuous, differentiable and discrete, non-differentiable read/write mechanisms.
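
A toy sketch of that addressing idea, assuming each cell stores a learnable address vector next to its writable content vector; names and shapes are illustrative, and the discrete branch is only there to show why non-differentiable addressing needs a different training signal.

```python
import numpy as np

def dntm_read(address_vecs, content_vecs, key, discrete=False):
    """address_vecs: (n, da), content_vecs: (n, dc), key: (da + dc,)."""
    cells = np.concatenate([address_vecs, content_vecs], axis=1)
    scores = cells @ key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # continuous (softmax) addressing
    if discrete:
        # discrete addressing samples a single cell; training this branch
        # requires REINFORCE-style estimators rather than plain backprop
        idx = np.random.choice(len(weights), p=weights)
        weights = np.eye(len(weights))[idx]
    return weights @ content_vecs             # read vector
```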

https://openreview.net/pdf?id=rJTKKKqeg TRACKING THE WORLD STATE WITH RECURRENT ENTITY NETWORKS

The Recurrent Entity Network (EntNet) is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. Like a Neural Turing Machine or Differentiable Neural Computer it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.

The model consists of a fixed number of dynamic memory cells, each containing a vector key w_j and a vector value (or content) h_j. Each cell is associated with its own “processor”, a simple gated recurrent network that may update the cell value given an input. If each cell learns to represent a concept or entity in the world, one can imagine a gating mechanism that, based on the key and content of the memory cells, will only modify the cells that concern the entities mentioned in the input. In the current version of the model, there is no direct interaction between the memory cells, hence the system can be seen as multiple identical processors functioning in parallel, with distributed local memory. Alternatively, the EntNet can be seen as a bank of gated RNNs (all sharing the same parameters), whose hidden states correspond to latent concepts and attributes. Their hidden state is updated only when new information relevant to their concept is received, and remains otherwise unchanged. The keys used in the addressing/gating mechanism also correspond to concepts or entities, but are modified only during learning, not during inference. Code: https://github.com/facebook/MemNN/tree/master/EntNet-babi
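
A simplified numpy sketch of the per-cell update just described (gate, candidate, gated write, normalization), omitting bias terms; parameter names are illustrative.

```python
import numpy as np

def entnet_step(H, W_keys, s, U, V, Wm, phi=np.tanh):
    """One EntNet memory update.
    H: (n_cells, d) contents h_j, W_keys: (n_cells, d) keys w_j,
    s: (d,) encoded input; U, V, Wm: (d, d) parameters shared by all cells."""
    gates = 1.0 / (1.0 + np.exp(-(H @ s + W_keys @ s)))      # per-cell gate
    candidate = phi(H @ U.T + W_keys @ V.T + s @ Wm.T)        # shared "processor"
    H = H + gates[:, None] * candidate                        # gated write
    H = H / np.linalg.norm(H, axis=1, keepdims=True)          # forgetting via normalization
    return H
```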

https://arxiv.org/abs/1612.04426 Improving Neural Language Models with a Continuous Cache

We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural network and cache models used with count based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.
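
The cache itself is simple enough to sketch directly: past hidden states are scored against the current one by a dot product, the resulting weights are accumulated on the words those states emitted, and the cache distribution is mixed with the parametric model (linear interpolation is one of the mixing schemes; theta and lam below stand in for the cache hyperparameters).

```python
import numpy as np

def cache_probs(h_t, past_h, past_words, vocab_size, theta=0.3):
    """h_t: (d,) current hidden state, past_h: (T, d) cached hidden states,
    past_words: (T,) word ids emitted at those steps."""
    scores = np.exp(theta * (past_h @ h_t))
    scores /= scores.sum()
    p = np.zeros(vocab_size)
    np.add.at(p, past_words, scores)      # accumulate weight on each cached word
    return p

def interpolate(p_model, p_cache, lam=0.2):
    # final prediction mixes the parametric model with the cache
    return (1 - lam) * p_model + lam * p_cache
```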

https://arxiv.org/abs/1702.04521 Frustratingly Short Attention Spans in Neural Language Modeling

This led to the unexpected main finding that a much simpler model based only on the concatenation of recent output representations from previous time steps is on par with more sophisticated memory-augmented neural language models.

https://arxiv.org/abs/1702.04649 Generative Temporal Models with Memory

We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models' operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.

http://arxiv.org/pdf/1702.08360v1.pdf Neural Map: Structured Memory for Deep Reinforcement Learning

This architecture, called the Neural Map, uses a spatially structured 2D memory image to learn to store arbitrary information about the environment over long time lags. We demonstrate empirically that the Neural Map surpasses previous DRL memories on a set of challenging 2D and 3D maze environments and show that it is capable of generalizing to environments that were not seen during training.
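
In spirit the memory is a feature image indexed by the agent's discretized position. The toy sketch below shows only that data layout; it replaces the paper's learned read/write operators with a mean read and a hard overwrite, so it is an illustration of the structure, not the method.

```python
import numpy as np

class NeuralMapSketch:
    """A C x H x W memory image, updated only at the agent's (x, y) cell."""

    def __init__(self, channels, height, width):
        self.M = np.zeros((channels, height, width))

    def read(self):
        # global read: summarize the whole map (the paper uses learned
        # operations; a mean over positions stands in for them here)
        return self.M.mean(axis=(1, 2))

    def write(self, x, y, feature):
        # local write at the agent's current position (the paper computes a
        # gated update from the old cell and the agent state; we overwrite)
        self.M[:, y, x] = feature
```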

https://arxiv.org/abs/1703.00729v1 Mixing Complexity and its Applications to Neural Networks

We suggest analyzing neural networks through the prism of space constraints. We observe that most training algorithms applied in practice use bounded memory, which enables us to use a new notion introduced in the study of space-time tradeoffs that we call mixing complexity. This notion was devised in order to measure the (in)ability to learn using a bounded-memory algorithm. In this paper we describe how we use mixing complexity to obtain new results on what can and cannot be learned using neural networks.

Another possible explanation is that problems of interest are not mixing and have a great deal of “structure” to them. We suggested using the notion of r-sufficient partitions to formalize the notion of “structure”. We showed that classes that have such a partition are not mixing.

https://arxiv.org/abs/1610.08613v2 Can Active Memory Replace Attention?

So far, however, active memory has not improved over attention for most natural language processing tasks, in particular for machine translation. We analyze this shortcoming in this paper and propose an extended model of active memory that matches existing attention models on neural machine translation and generalizes better to longer sentences. We investigate this model and explain why previous active memory models did not succeed. Finally, we discuss when active memory brings most benefits and where attention can be a better choice.

https://arxiv.org/abs/1805.00327v1 A Taxonomy for Neural Memory Networks

https://arxiv.org/abs/1807.06473 Contextual Memory Trees

We design and study a Contextual Memory Tree (CMT), a learning memory controller that inserts new memories into an experience store of unbounded size. It is designed to efficiently query for memories from that store, supporting logarithmic time insertion and retrieval operations. Hence CMT can be integrated into existing statistical learning algorithms as an augmented memory unit without substantially increasing training and inference computation. We demonstrate the efficacy of CMT by augmenting existing multi-class and multi-label classification algorithms with CMT and observe statistical improvement. We also test CMT learning on several image-captioning tasks to demonstrate that it performs computationally better than a simple nearest neighbors memory system while benefitting from reward learning.

https://arxiv.org/abs/1808.09357v1 Rational Recurrences

Recently, connections have been shown between convolutional neural networks (CNNs) and weighted finite state automata (WFSAs), leading to new interpretations and insights. In this work, we show that some recurrent neural networks also share this connection to WFSAs. We characterize this connection formally, defining rational recurrences to be recurrent hidden state update functions that can be written as the Forward calculation of a finite set of WFSAs. We show that several recent neural models use rational recurrences. Our analysis provides a fresh view of these models and facilitates devising new neural architectures that draw inspiration from WFSAs. We present one such model, which performs better than two recent baselines on language modeling and text classification. Our results demonstrate that transferring intuitions from classical models like WFSAs can be an effective approach to designing and understanding neural models.

https://arxiv.org/abs/1810.06721 Optimizing Agent Behavior over Long Time Scales by Transporting Value

We introduce a new paradigm for reinforcement learning where agents use recall of specific memories to credit actions from the past, allowing them to solve problems that are intractable for existing algorithms. This paradigm broadens the scope of problems that can be investigated in AI and offers a mechanistic account of behaviors that may inspire computational models in neuroscience, psychology, and behavioral economics.

Temporal Value Transport is a heuristic algorithm but one that expresses coherent principles we believe will endure: past events are encoded, stored, retrieved, and revaluated. TVT fundamentally intertwines memory systems and reinforcement learning: the attention weights on memories specifically modulate the reward credited to past events.
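
A toy illustration of that crediting idea (not the full TVT algorithm, which operates inside a memory-based agent): reward credited to a past step grows in proportion to how strongly that step's memory is read when value is later realized. All names and the alpha scaling are illustrative assumptions.

```python
import numpy as np

def transport_value(rewards, read_weights, realized_values, alpha=0.9):
    """rewards: (T,), read_weights: (T, T) with row t giving the attention over
    memories written at earlier steps, realized_values: (T,) value seen at t.
    Returns rewards augmented with transported credit."""
    augmented = np.array(rewards, dtype=float)
    T = len(rewards)
    for t in range(T):
        for t_past in range(t):
            augmented[t_past] += alpha * read_weights[t, t_past] * realized_values[t]
    return augmented
```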

https://arxiv.org/pdf/1810.05017.pdf ONE-SHOT HIGH-FIDELITY IMITATION: TRAINING LARGE-SCALE DEEP NETS WITH RL

In this paper, we introduce an off-policy RL algorithm (MetaMimic) to narrow this gap. MetaMimic can learn both (i) policies for high-fidelity one-shot imitation of diverse novel skills, and (ii) policies that enable the agent to solve tasks more efficiently than the demonstrators. MetaMimic relies on the principle of storing all experiences in a memory and replaying these to learn massive deep neural network policies by off-policy RL.