https://arxiv.org/abs/1808.03728v1 Hierarchical Attention: What Really Counts in Various NLP Tasks

https://arxiv.org/abs/1807.03819 Universal Transformers

Instead of recurring over the individual symbols of sequences like RNNs, the Universal Transformer repeatedly revises its representations of all symbols in the sequence with each recurrent step. In order to combine information from different parts of a sequence, it employs a self-attention mechanism in every recurrent step. Assuming sufficient memory, its recurrence makes the Universal Transformer computationally universal. We further employ an adaptive computation time (ACT) mechanism to allow the model to dynamically adjust the number of times the representation of each position in a sequence is revised. Beyond saving computation, we show that ACT can improve the accuracy of the model. Our experiments show that on various algorithmic tasks and a diverse set of large-scale language understanding tasks the Universal Transformer generalizes significantly better and outperforms both a vanilla Transformer and an LSTM in machine translation, and achieves a new state of the art on the bAbI linguistic reasoning task and the challenging LAMBADA language modeling task.

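A minimal sketch of this depth-wise recurrence (assuming PyTorch; the class name, dimensions, and the omission of the ACT halting mechanism are illustrative choices, not the authors' code):

<code python>
# Sketch only: one transformer layer whose weights are shared across steps
# revises every position in parallel; ACT halting is omitted for brevity.
import torch
import torch.nn as nn

class UniversalTransformerSketch(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_steps=6):
        super().__init__()
        # Single layer reused at every recurrent step (weight sharing).
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        # Learned per-step embedding added before each revision.
        self.step_emb = nn.Embedding(n_steps, d_model)
        self.n_steps = n_steps

    def forward(self, x):                    # x: (batch, seq_len, d_model)
        for t in range(self.n_steps):
            # Each step re-combines information across the whole sequence
            # via self-attention inside the shared layer.
            x = self.shared_layer(x + self.step_emb.weight[t])
        return x

out = UniversalTransformerSketch()(torch.randn(2, 10, 64))   # (2, 10, 64)
</code>
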
https://arxiv.org/abs/1808.03867 Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction

Current state-of-the-art machine translation systems are based on encoder-decoder architectures that first encode the input sequence, and then generate an output sequence based on the input encoding. Both are interfaced with an attention mechanism that recombines a fixed encoding of the source tokens based on the decoder state. We propose an alternative approach which instead relies on a single 2D convolutional neural network across both sequences. Each layer of our network re-codes source tokens on the basis of the output sequence produced so far. Attention-like properties are therefore pervasive throughout the network. Our model yields excellent results, outperforming state-of-the-art encoder-decoder systems, while being conceptually simpler and having fewer parameters.

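The 2D-convolution idea can be sketched as follows (assuming PyTorch; sizes are hypothetical and the paper's masked, DenseNet-style convolutions are reduced to a plain conv stack): every grid cell pairs one target token with one source token, each conv layer re-codes source information in light of the output produced so far, and pooling over the source axis yields per-target features.

<code python>
# Sketch only: a plain conv stack stands in for the paper's masked,
# DenseNet-style convolutions (assuming PyTorch; sizes are illustrative).
import torch
import torch.nn as nn

d = 32
src = torch.randn(1, 7, d)    # (batch, src_len, d)  source embeddings
tgt = torch.randn(1, 5, d)    # (batch, tgt_len, d)  target embeddings so far

# Grid cell (i, j) concatenates target token i with source token j.
grid = torch.cat([
    tgt.unsqueeze(2).expand(-1, -1, src.size(1), -1),
    src.unsqueeze(1).expand(-1, tgt.size(1), -1, -1),
], dim=-1).permute(0, 3, 1, 2)               # (batch, 2d, tgt_len, src_len)

conv = nn.Sequential(
    nn.Conv2d(2 * d, d, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(d, d, kernel_size=3, padding=1), nn.ReLU(),
)
features = conv(grid)                         # (batch, d, tgt_len, src_len)
per_target = features.max(dim=-1).values      # pool over the source axis
print(per_target.shape)                       # torch.Size([1, 32, 5])
</code>
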
https://arxiv.org/abs/1808.05578 LARNN: Linear Attention Recurrent Neural Network

The Linear Attention Recurrent Neural Network (LARNN) is a recurrent attention module derived from the Long Short-Term Memory (LSTM) cell and ideas from the consciousness Recurrent Neural Network (RNN). Yes, it LARNNs. The LARNN uses attention on its past cell state values for a limited window size k. The formulas are also derived from the Batch Normalized LSTM (BN-LSTM) cell and the Transformer Network for its Multi-Head Attention Mechanism. The Multi-Head Attention Mechanism is used inside the cell such that it can query its own k past values with the attention window. https://github.com/guillaume-chevalier/Linear-Attention-Recurrent-Neural-Network

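A rough sketch of that recurrent-attention loop (assuming PyTorch; the BN-LSTM details are left out and all names and dimensions are illustrative):

<code python>
# Sketch of an LSTM-style cell that queries a window of its own k most
# recent cell states with multi-head attention (assuming PyTorch).
import torch
import torch.nn as nn
from collections import deque

class LARNNSketch(nn.Module):
    def __init__(self, d=32, k=5, n_heads=4):
        super().__init__()
        self.cell = nn.LSTMCell(2 * d, d)      # input = [x_t ; attended past]
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.k = k

    def forward(self, xs):                     # xs: (batch, time, d)
        b, T, d = xs.shape
        h = xs.new_zeros(b, d)
        c = xs.new_zeros(b, d)
        past = deque(maxlen=self.k)            # limited window of cell states
        for t in range(T):
            past.append(c)
            mem = torch.stack(list(past), dim=1)          # (batch, <=k, d)
            # The current hidden state queries the window of past cell states.
            read, _ = self.attn(h.unsqueeze(1), mem, mem)
            h, c = self.cell(torch.cat([xs[:, t], read.squeeze(1)], dim=-1),
                             (h, c))
        return h

out = LARNNSketch()(torch.randn(2, 10, 32))    # final hidden state, (2, 32)
</code>
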
http://petar-v.com/GAT/ Graph Attention Networks

https://arxiv.org/abs/1808.04444 Character-Level Language Modeling with Deeper Self-Attention

In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.

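A toy illustration of the intermediate-layer auxiliary losses (assuming PyTorch; the layer count and loss weights are made up, and the paper's per-position auxiliary losses, causal masking, and annealing schedule are not reproduced):

<code python>
# Toy illustration only: next-character losses attached to intermediate
# layers as well as the final one (weights here are assumptions).
import torch
import torch.nn as nn

d, vocab, n_layers = 64, 256, 8
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d, 4, batch_first=True) for _ in range(n_layers)])
to_vocab = nn.Linear(d, vocab)                 # shared output projection
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(2, 16, d)                      # character embeddings
targets = torch.randint(0, vocab, (2, 16))     # next-character targets

total_loss = 0.0
for i, layer in enumerate(layers):
    x = layer(x)
    # Intermediate layers contribute a down-weighted prediction loss;
    # only the final layer's loss has full weight.
    weight = 1.0 if i == n_layers - 1 else 0.5
    logits = to_vocab(x)                       # (batch, seq, vocab)
    total_loss = total_loss + weight * loss_fn(
        logits.reshape(-1, vocab), targets.reshape(-1))
total_loss.backward()
</code>
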
https://arxiv.org/abs/1808.08946v2 Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures

Our experimental results show that: 1) self-attentional networks and CNNs do not outperform RNNs in modeling subject-verb agreement over long distances; 2) self-attentional networks perform distinctly better than RNNs and CNNs on word sense disambiguation.

https://arxiv.org/abs/1809.11087 Learning to Remember, Forget and Ignore using Attention Control in Memory

Applying knowledge gained from psychological studies, we designed a new model called Differentiable Working Memory (DWM) in order to specifically emulate human working memory. As it shows the same functional characteristics as working memory, it robustly learns psychology inspired tasks and converges faster than comparable state-of-the-art models. Moreover, the DWM model successfully generalizes to sequences two orders of magnitude longer than the ones used in training. Our in-depth analysis shows that the behavior of DWM is interpretable and that it learns to have fine control over memory, allowing it to retain, ignore or forget information based on its relevance.

https://openreview.net/forum?id=rJxHsjRqFQ Hyperbolic Attention Networks