In this paper, we propose a novel end-to-end neural model for the NLVR task, where we first use joint bidirectional attention to build a two-way conditioning between the visual information and the language phrases. Next, we use an RL-based pointer network to sort and process the varying number of unordered objects (so as to match the order of the statement phrases) in each of the three images and then pool over the three decisions. Our model achieves strong improvements (of 4-6% absolute) over the state-of-the-art on both the structured representation and raw image versions of the dataset.
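
A minimal sketch of the two-way conditioning idea, assuming generic phrase and object feature matrices rather than the authors' exact NLVR architecture: a single similarity matrix is normalized along each axis, so phrases attend over objects and objects attend over phrases.

<code python>
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiDirectionalAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # bilinear similarity between phrase and object features
        self.bilinear = nn.Parameter(torch.randn(dim, dim) * 0.01)

    def forward(self, phrases, objects):
        # phrases: (n_phrases, dim), objects: (n_objects, dim)
        scores = phrases @ self.bilinear @ objects.t()    # (n_phrases, n_objects)
        p2o = F.softmax(scores, dim=1) @ objects          # each phrase attends over objects
        o2p = F.softmax(scores, dim=0).t() @ phrases      # each object attends over phrases
        return p2o, o2p

phrases = torch.randn(4, 128)   # 4 statement phrases
objects = torch.randn(7, 128)   # 7 unordered objects from one image
p2o, o2p = BiDirectionalAttention(128)(phrases, objects)
print(p2o.shape, o2p.shape)     # torch.Size([4, 128]) torch.Size([7, 128])
</code>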

https://openreview.net/pdf?id=S1Euwz-Rb Compositional Attention Networks for Machine Reasoning

https://arxiv.org/abs/1804.09849v2 The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation

In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English to French and English to German tasks.

https://arxiv.org/abs/1805.04174 Joint Embedding of Words and Labels for Text Classification

Word embeddings are effective intermediate representations for capturing semantic regularities between words, when learning the representations of text sequences. We propose to view text classification as a label-word joint embedding problem: each label is embedded in the same space with the word vectors. We introduce an attention framework that measures the compatibility of embeddings between text sequences and labels. The attention is learned on a training set of labeled samples to ensure that, given a text sequence, the relevant words are weighted higher than the irrelevant ones. Our method maintains the interpretability of word embeddings, and enjoys a built-in ability to leverage alternative sources of information, in addition to input text sequences. Extensive results on several large text datasets show that the proposed framework outperforms the state-of-the-art methods by a large margin, in terms of both accuracy and speed.
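
A hedged sketch of the label-word joint embedding attention: labels live in the same space as words, a cosine compatibility matrix between the two drives the attention over words, and the attended document vector is scored against the label embeddings. The max-over-labels pooling and all shapes are illustrative assumptions, not the paper's exact pipeline.

<code python>
import torch
import torch.nn.functional as F

def label_word_attention(word_emb, label_emb):
    # word_emb: (seq_len, dim), label_emb: (num_labels, dim), same embedding space
    words = F.normalize(word_emb, dim=1)
    labels = F.normalize(label_emb, dim=1)
    compat = words @ labels.t()                 # (seq_len, num_labels) cosine compatibility
    scores = compat.max(dim=1).values           # most compatible label per word
    attn = F.softmax(scores, dim=0)             # attention over words
    doc = attn @ word_emb                       # (dim,) attended document representation
    logits = doc @ label_emb.t()                # score each label against the document
    return attn, logits

word_emb = torch.randn(20, 300)                 # 20 words in a text sequence
label_emb = torch.randn(5, 300)                 # 5 class labels embedded like words
attn, logits = label_word_attention(word_emb, label_emb)
print(attn.shape, logits.shape)                 # torch.Size([20]) torch.Size([5])
</code>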

https://arxiv.org/abs/1805.09786v1 Hyperbolic Attention Networks

We extend this line of work by imposing hyperbolic geometry on the activations of neural networks. This allows us to exploit hyperbolic geometry to reason about embeddings produced by deep networks. We achieve this by re-expressing the ubiquitous mechanism of soft attention in terms of operations defined for hyperboloid and Klein models. Our method shows improvements in terms of generalization on neural machine translation, learning on graphs and visual question answering tasks while keeping the neural representations compact.
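
A rough sketch of the matching step only, assuming distance-based attention in the Lorentz (hyperboloid) model: queries and keys are lifted onto the hyperboloid and closer points get more weight. The paper also performs the aggregation itself in hyperbolic space (e.g. an Einstein midpoint in the Klein model), which is left Euclidean here for brevity.

<code python>
import torch
import torch.nn.functional as F

def lift_to_hyperboloid(v):
    # Map Euclidean vectors v (n, d) onto the hyperboloid x0^2 - ||x_rest||^2 = 1.
    x0 = torch.sqrt(1.0 + (v * v).sum(dim=1, keepdim=True))
    return torch.cat([x0, v], dim=1)            # (n, d + 1)

def lorentz_distance(x, y):
    # <x, y>_L = -x0*y0 + <x_rest, y_rest>; distance = arccosh(-<x, y>_L)
    inner = -x[:, :1] @ y[:, :1].t() + x[:, 1:] @ y[:, 1:].t()
    return torch.acosh(torch.clamp(-inner, min=1.0 + 1e-7))

queries, keys, values = torch.randn(3, 16), torch.randn(5, 16), torch.randn(5, 32)
dist = lorentz_distance(lift_to_hyperboloid(queries), lift_to_hyperboloid(keys))
attn = F.softmax(-dist, dim=1)                  # closer in hyperbolic space gets more weight
print((attn @ values).shape)                    # torch.Size([3, 32])
</code>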

https://arxiv.org/abs/1806.01830 Relational Deep Reinforcement Learning

https://www.mitpressjournals.org/doi/abs/10.1162/isal_a_00095 Being curious about the answers to questions: novelty search with learned attention

https://arxiv.org/abs/1802.04712v4 Attention-based Deep Multiple Instance Learning

Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label where the bag label probability is fully parameterized by neural networks. Furthermore, we propose a neural network-based permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, an application of the proposed attention-based operator provides insight into the contribution of each instance to the bag label. We show empirically that our approach achieves comparable performance to the best MIL methods on benchmark MIL datasets and it outperforms other methods on a MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.
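
A minimal sketch of the attention-based pooling operator described above, computing instance weights as a_k = softmax_k(w^T tanh(V h_k)) so the bag representation stays permutation-invariant and each a_k can be read as that instance's contribution to the bag label. Layer sizes and the surrounding bag classifier are illustrative assumptions.

<code python>
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMILPooling(nn.Module):
    def __init__(self, in_dim, attn_dim):
        super().__init__()
        self.V = nn.Linear(in_dim, attn_dim, bias=False)
        self.w = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, instances):
        # instances: (bag_size, in_dim), a variable-sized, unordered bag
        scores = self.w(torch.tanh(self.V(instances)))   # (bag_size, 1)
        attn = F.softmax(scores, dim=0)                  # weights over instances, sum to 1
        bag = (attn * instances).sum(dim=0)              # (in_dim,) bag embedding
        return bag, attn.squeeze(1)

bag, attn = AttentionMILPooling(512, 128)(torch.randn(9, 512))
print(bag.shape, attn.sum())   # torch.Size([512]) tensor(1.0000, grad_fn=...)
</code>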

https://arxiv.org/abs/1808.00300 Learning Visual Question Answering by Bootstrapping Hard Attention

We introduce a new approach for hard attention and find it achieves very competitive performance on a recently released visual question answering dataset, equalling and in some cases surpassing similar soft attention architectures while entirely ignoring some features. Even though the hard attention mechanism is thought to be non-differentiable, we found that the feature magnitudes correlate with semantic relevance and provide a useful signal for our mechanism's attentional selection criterion. Because hard attention selects important features of the input information, it can also be more efficient than analogous soft attention mechanisms.
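
A hedged sketch of magnitude-based hard attention: keep only the top-k spatial feature vectors by L2 norm and drop the rest entirely. Only the selection criterion follows the description above; the choice of k and the surrounding VQA model are assumptions.

<code python>
import torch

def hard_attention_by_norm(features, k):
    # features: (num_locations, channels), e.g. a flattened CNN feature map
    norms = features.norm(dim=1)                 # feature magnitude per spatial location
    topk = torch.topk(norms, k).indices          # indices of the k most "salient" locations
    return features[topk]                        # (k, channels); everything else is ignored

features = torch.randn(14 * 14, 256)             # 14x14 spatial grid of 256-d features
selected = hard_attention_by_norm(features, k=16)
print(selected.shape)                            # torch.Size([16, 256])
</code>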

https://arxiv.org/abs/1808.03728v1 Hierarchical Attention: What Really Counts in Various NLP Tasks

https://arxiv.org/abs/1807.03819 Universal Transformers

Instead of recurring over the individual symbols of sequences like RNNs, the Universal Transformer repeatedly revises its representations of all symbols in the sequence with each recurrent step. In order to combine information from different parts of a sequence, it employs a self-attention mechanism in every recurrent step. Assuming sufficient memory, its recurrence makes the Universal Transformer computationally universal. We further employ an adaptive computation time (ACT) mechanism to allow the model to dynamically adjust the number of times the representation of each position in a sequence is revised. Beyond saving computation, we show that ACT can improve the accuracy of the model. Our experiments show that on various algorithmic tasks and a diverse set of large-scale language understanding tasks the Universal Transformer generalizes significantly better and outperforms both a vanilla Transformer and an LSTM in machine translation, and achieves a new state of the art on the bAbI linguistic reasoning task and the challenging LAMBADA language modeling task.
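
A minimal sketch of the recurrence-in-depth, assuming a stock PyTorch encoder layer: one shared block repeatedly revises all positions, with a fixed step count standing in for the ACT halting and the per-step timestep signals used in the paper.

<code python>
import torch
import torch.nn as nn

class UniversalTransformerSketch(nn.Module):
    def __init__(self, dim, heads, steps):
        super().__init__()
        # a single layer whose weights are reused at every recurrent step
        self.shared_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                       batch_first=True)
        self.steps = steps

    def forward(self, x):
        # x: (batch, seq_len, dim); the same weights revise every position each step
        for _ in range(self.steps):
            x = self.shared_layer(x)
        return x

model = UniversalTransformerSketch(dim=128, heads=4, steps=6)
print(model(torch.randn(2, 10, 128)).shape)   # torch.Size([2, 10, 128])
</code>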

https://arxiv.org/abs/1808.03867 Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction

Current state-of-the-art machine translation systems are based on encoder-decoder architectures that first encode the input sequence, and then generate an output sequence based on the input encoding. Both are interfaced with an attention mechanism that recombines a fixed encoding of the source tokens based on the decoder state. We propose an alternative approach which instead relies on a single 2D convolutional neural network across both sequences. Each layer of our network re-codes source tokens on the basis of the output sequence produced so far. Attention-like properties are therefore pervasive throughout the network. Our model yields excellent results, outperforming state-of-the-art encoder-decoder systems, while being conceptually simpler and having fewer parameters.
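
A hedged sketch of the 2D-grid idea: every (target, source) token pair gets a joint feature by concatenating the two embeddings, a small 2D CNN re-codes the grid, and pooling over the source axis yields one feature per target position. The DenseNet blocks and the causal masking along the target axis used in the actual model are omitted.

<code python>
import torch
import torch.nn as nn

class PervasiveAttentionSketch(nn.Module):
    def __init__(self, emb_dim, hidden):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2 * emb_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, src_emb, tgt_emb):
        # src_emb: (src_len, emb), tgt_emb: (tgt_len, emb)
        src_len, tgt_len = src_emb.size(0), tgt_emb.size(0)
        grid = torch.cat([
            src_emb.unsqueeze(0).expand(tgt_len, -1, -1),   # broadcast source along target axis
            tgt_emb.unsqueeze(1).expand(-1, src_len, -1),    # broadcast target along source axis
        ], dim=-1)                                           # (tgt_len, src_len, 2*emb)
        h = self.conv(grid.permute(2, 0, 1).unsqueeze(0))    # (1, hidden, tgt_len, src_len)
        return h.max(dim=3).values.squeeze(0).t()            # (tgt_len, hidden): pool over source

out = PervasiveAttentionSketch(emb_dim=64, hidden=128)(torch.randn(12, 64), torch.randn(9, 64))
print(out.shape)   # torch.Size([9, 128])
</code>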

https://arxiv.org/abs/1808.05578 LARNN: Linear Attention Recurrent Neural Network

The Linear Attention Recurrent Neural Network (LARNN) is a recurrent attention module derived from the Long Short-Term Memory (LSTM) cell and ideas from the consciousness Recurrent Neural Network (RNN). Yes, it LARNNs. The LARNN uses attention on its past cell state values for a limited window size k. The formulas are also derived from the Batch Normalized LSTM (BN-LSTM) cell and the Transformer Network for its Multi-Head Attention Mechanism. The Multi-Head Attention Mechanism is used inside the cell such that it can query its own k past values with the attention window. https://github.com/guillaume-chevalier/Linear-Attention-Recurrent-Neural-Network
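
A rough sketch, not the reference implementation: at each step the cell queries a window of its k most recent cell states with multi-head attention and feeds the attended value into an otherwise ordinary LSTM update. How the attended value is mixed into the gates is an assumption here.

<code python>
import torch
import torch.nn as nn

class LinearAttentionRNNSketch(nn.Module):
    def __init__(self, in_dim, hidden, window, heads=4):
        super().__init__()
        self.cell = nn.LSTMCell(in_dim + hidden, hidden)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.window = window

    def forward(self, inputs):
        # inputs: (seq_len, in_dim), processed one step at a time
        h = c = torch.zeros(1, self.cell.hidden_size)
        past_cells, outputs = [], []
        for x in inputs:
            past_cells.append(c)
            memory = torch.stack(past_cells[-self.window:], dim=1)   # (1, <=k, hidden)
            attended, _ = self.attn(h.unsqueeze(1), memory, memory)  # query own past cell states
            step_in = torch.cat([x.unsqueeze(0), attended.squeeze(1)], dim=1)
            h, c = self.cell(step_in, (h, c))
            outputs.append(h)
        return torch.stack(outputs, dim=0)                           # (seq_len, 1, hidden)

out = LinearAttentionRNNSketch(in_dim=32, hidden=64, window=8)(torch.randn(20, 32))
print(out.shape)   # torch.Size([20, 1, 64])
</code>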

http://petar-v.com/GAT/ Graph Attention Networks

https://arxiv.org/abs/1808.04444 Character-Level Language Modeling with Deeper Self-Attention

In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
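
A hedged sketch of the intermediate-layer auxiliary losses only: besides the final layer, a few intermediate layers also predict the next character at every position and their cross-entropies are added. Which layers get auxiliary heads, the shared projection, and the weighting are assumptions; the paper's per-position auxiliary losses and the annealing schedule are omitted.

<code python>
import torch
import torch.nn as nn
import torch.nn.functional as F

def char_lm_loss_with_aux(layer_outputs, targets, vocab_proj, aux_weight=0.5):
    # layer_outputs: list of (seq_len, dim) hidden states, one per transformer layer
    # targets: (seq_len,) next-character ids; vocab_proj: shared nn.Linear(dim, vocab)
    main_loss = F.cross_entropy(vocab_proj(layer_outputs[-1]), targets)
    aux_layers = layer_outputs[len(layer_outputs) // 2 : -1]   # e.g. the upper half, minus the top
    aux_loss = sum(F.cross_entropy(vocab_proj(h), targets) for h in aux_layers)
    return main_loss + aux_weight * aux_loss

layers = [torch.randn(128, 512) for _ in range(8)]   # toy 8-layer stack, seq_len = 128
proj = nn.Linear(512, 256)                            # 256-symbol character vocabulary
targets = torch.randint(0, 256, (128,))
print(char_lm_loss_with_aux(layers, targets, proj))
</code>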

https://arxiv.org/abs/1808.08946v2 Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures

Our experimental results show that: 1) self-attentional networks and CNNs do not outperform RNNs in modeling subject-verb agreement over long distances; 2) self-attentional networks perform distinctly better than RNNs and CNNs on word sense disambiguation.

https://arxiv.org/abs/1809.11087 Learning to Remember, Forget and Ignore using Attention Control in Memory

Applying knowledge gained from psychological studies, we designed a new model called Differentiable Working Memory (DWM) in order to specifically emulate human working memory. As it shows the same functional characteristics as working memory, it robustly learns psychology inspired tasks and converges faster than comparable state-of-the-art models. Moreover, the DWM model successfully generalizes to sequences two orders of magnitude longer than the ones used in training. Our in-depth analysis shows that the behavior of DWM is interpretable and that it learns to have fine control over memory, allowing it to retain, ignore or forget information based on its relevance.

https://openreview.net/forum?id=rJxHsjRqFQ Hyperbolic Attention Networks

By only changing the geometry of the embedding of object representations, we can use the embedding space more efficiently without increasing the number of parameters of the model. Mainly, as the number of objects grows exponentially with semantic distance from the query, hyperbolic geometry (as opposed to Euclidean geometry) can encode those objects without interference. Our method shows improvements in generalization on neural machine translation on WMT'14 (English to German), learning on graphs (both on synthetic and real-world graph tasks) and visual question answering (CLEVR) tasks while keeping the neural representations compact.