  
https://arxiv.org/pdf/1703.06408v1.pdf Multilevel Context Representation for Improving Object Recognition
  
  
This paper introduces Contextual Salience (CoSal), a simple and explicit measure of a word's importance in context which is a more theoretically natural, practically simpler, and more accurate replacement for tf-idf. CoSal supports very small contexts (20 or more sentences), handles out-of-context words, and is easy to calculate. A word vector space generated with both bigram phrases and unigram tokens reveals that contextually significant words disproportionately define phrases. This relationship is applied to produce simple weighted bag-of-words sentence embeddings. This model outperforms SkipThought and the best models trained on unordered sentences in most tests in Facebook's SentEval, beats tf-idf on all available tests, and is generally comparable to the state of the art. The paper also applies CoSal to sentence and document summarization and to an improved, context-aware cosine distance. Applying the premise that unexpected words are important, CoSal is presented as a replacement for tf-idf and an intuitive measure of contextual word importance.
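
A minimal sketch of the weighted bag-of-words idea described above, assuming a simple stand-in salience measure (how far a word vector sits from the context's mean, scaled by the context's per-dimension spread); the exact CoSal formula and the function names here are illustrative, not taken from the paper.

<code python>
import numpy as np

def cosal_weights(word_vecs, context_vecs):
    """Weight each word by how unexpected it is relative to the context."""
    mu = context_vecs.mean(axis=0)
    sigma = context_vecs.std(axis=0) + 1e-8      # avoid division by zero
    # z-scored distance from the context centroid, averaged over dimensions
    return np.abs((word_vecs - mu) / sigma).mean(axis=1)

def sentence_embedding(word_vecs, context_vecs):
    """Salience-weighted average of a sentence's word vectors."""
    w = cosal_weights(word_vecs, context_vecs)
    w = w / w.sum()
    return (w[:, None] * word_vecs).sum(axis=0)

# word_vecs: (n_words, dim) for one sentence;
# context_vecs: (n_context_words, dim) pooled from ~20+ surrounding sentences.
</code>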

https://arxiv.org/abs/1804.07983v1 Context-Attentive Embeddings for Improved Sentence Representations

While one of the first steps in many NLP systems is selecting what embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves. To that end, we introduce a novel, straightforward yet highly effective method for combining multiple types of word embeddings in a single model, leading to state-of-the-art performance within the same model class on a variety of tasks. We subsequently show how the technique can be used to shed new insight into the usage of word embeddings in NLP systems.
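
A rough sketch of one way to let the network choose among embedding types, assuming all embedders share a vocabulary and that per-token attention over projected embeddings is used; the class and parameter names are illustrative, not the paper's.

<code python>
import torch
import torch.nn as nn

class EmbeddingMixture(nn.Module):
    def __init__(self, embedders, proj_dim=256):
        super().__init__()
        # embedders: nn.Embedding tables (e.g. different pretrained vectors).
        self.embedders = nn.ModuleList(embedders)
        self.projs = nn.ModuleList(
            [nn.Linear(e.embedding_dim, proj_dim) for e in embedders])
        self.scorer = nn.Linear(proj_dim, 1)

    def forward(self, token_ids):
        # Project each embedding type into a shared space:
        # shape (batch, seq_len, n_types, proj_dim)
        projected = torch.stack(
            [p(e(token_ids)) for e, p in zip(self.embedders, self.projs)], dim=2)
        # Per-token attention over embedding types, then a weighted sum.
        weights = torch.softmax(self.scorer(projected), dim=2)
        return (weights * projected).sum(dim=2)
</code>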

https://arxiv.org/abs/1712.09926 Rapid Adaptation with Conditionally Shifted Neurons

We describe a mechanism by which artificial neural networks can learn rapid adaptation - the ability to adapt on the fly, with little data, to new tasks - that we call conditionally shifted neurons. We apply this mechanism in the framework of metalearning, where the aim is to replicate some of the flexibility of human learning in machines. Conditionally shifted neurons modify their activation values with task-specific shifts retrieved from a memory module, which is populated rapidly based on limited task experience. On metalearning benchmarks from the vision and language domains, models augmented with conditionally shifted neurons achieve state-of-the-art results.
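
A simplified sketch of a layer whose pre-activations receive task-specific shifts, assuming the shifts are retrieved by soft attention over a small (key, shift) memory written from a few task examples; how the paper actually computes the shifts is not reproduced here.

<code python>
import torch
import torch.nn as nn

class ConditionallyShiftedLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.mem_keys = None    # (n_slots, in_dim)
        self.mem_shifts = None  # (n_slots, out_dim)

    def write_memory(self, keys, shifts):
        """Populate the memory rapidly from limited task experience."""
        self.mem_keys, self.mem_shifts = keys, shifts

    def forward(self, x):
        pre = self.linear(x)
        if self.mem_keys is not None:
            # Retrieve a task-specific shift by attending over memory slots.
            attn = torch.softmax(x @ self.mem_keys.t(), dim=-1)
            pre = pre + attn @ self.mem_shifts
        return torch.relu(pre)
</code>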

https://arxiv.org/abs/1612.08083v3 Language Modeling with Gated Convolutional Networks

https://arxiv.org/abs/1711.06640v2 Neural Motifs: Scene Graph Parsing with Global Context

Our analysis motivates a new baseline: given object detections, predict the most frequent relation between object pairs with the given labels, as seen in the training set. This baseline improves on the previous state-of-the-art by an average of 3.6% relative improvement across evaluation settings. We then introduce Stacked Motif Networks, a new architecture designed to capture higher order motifs in scene graphs that further improves over our strong baseline by an average 7.1% relative gain. Our code is available at github.com/rowanz/neural-motifs.
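
The frequency baseline above is simple enough to sketch directly; this is an illustrative reimplementation (labels and variable names are made up), not the authors' released code.

<code python>
from collections import Counter, defaultdict

def build_freq_table(training_triples):
    """training_triples: iterable of (subject_label, relation, object_label)."""
    table = defaultdict(Counter)
    for subj, rel, obj in training_triples:
        table[(subj, obj)][rel] += 1
    return table

def predict_relation(freq_table, subj, obj, default="no_relation"):
    """Predict the relation most often seen for this label pair in training."""
    counts = freq_table.get((subj, obj))
    return counts.most_common(1)[0][0] if counts else default

train = [("man", "riding", "horse"), ("man", "near", "horse"),
         ("man", "riding", "horse")]
table = build_freq_table(train)
print(predict_relation(table, "man", "horse"))  # -> "riding"
</code>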

https://arxiv.org/abs/1808.08493 Contextual Parameter Generation for Universal Neural Machine Translation

https://arxiv.org/abs/1809.01997 Dual Ask-Answer Network for Machine Reading Comprehension

https://openreview.net/forum?id=BylBfnRqFm CAML: Fast Context Adaptation via Meta-Learning

https://arxiv.org/pdf/1810.03642v1.pdf CAML: Fast Context Adaptation via Meta-Learning (Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, Shimon Whiteson)

We propose CAML, a meta-learning method for fast adaptation that partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks.
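
A minimal first-order sketch of that parameter split, assuming a small regression network and plain gradient steps in the inner loop; the dimensions, the loss, and the omission of backpropagation through the adaptation steps are all simplifications, not the paper's exact training procedure.

<code python>
import torch
import torch.nn as nn

class ContextModel(nn.Module):
    def __init__(self, in_dim=1, context_dim=5, hidden=40):
        super().__init__()
        self.context_dim = context_dim
        # Shared parameters: meta-trained across tasks, never adapted per task.
        self.net = nn.Sequential(
            nn.Linear(in_dim + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x, context):
        # Context parameters enter the model as an extra input.
        ctx = context.expand(x.shape[0], -1)
        return self.net(torch.cat([x, ctx], dim=-1))

def adapt(model, x, y, steps=5, lr=0.1):
    """Inner loop: adapt only the context parameters on one task's data."""
    context = torch.zeros(1, model.context_dim, requires_grad=True)
    for _ in range(steps):
        loss = ((model(x, context) - y) ** 2).mean()
        (grad,) = torch.autograd.grad(loss, context)
        context = (context - lr * grad).detach().requires_grad_(True)
    return context

# Outer loop (not shown): evaluate the post-adaptation loss on held-out data
# from each task and update only the shared parameters with it.
</code>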

https://arxiv.org/abs/1810.08135 Contextual Topic Modeling For Dialog Systems

Our work for detecting conversation topics and keywords can be used to guide chatbots towards coherent dialog.

https://www.nature.com/articles/s41467-018-06781-2 Reference-point centering and range-adaptation enhance human reinforcement learning at the cost of irrational preferences

https://arxiv.org/abs/1901.03415v1 Context Aware Machine Learning

The embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts.
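
As a toy illustration of that decomposition (the weights and vectors below are made up, and the paper's actual way of learning them is not shown):

<code python>
import numpy as np

def decomposed_embedding(e_free, e_context, w_free, w_ctx):
    """Weighted sum of a context-free part and a context-sensitive part."""
    return w_free * e_free + w_ctx * e_context

e_free = np.array([0.2, -0.1, 0.4])    # context-free embedding of a token
e_ctx  = np.array([0.5,  0.3, -0.2])   # context-sensitive component
print(decomposed_embedding(e_free, e_ctx, w_free=0.7, w_ctx=0.3))
</code>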

The paper also proposes a new architecture for modeling attention in deep neural networks. More surprisingly, our new principle provides a novel understanding of the gates and equations defined by the long short-term memory model, which also leads to a new model that is able to converge significantly faster and achieve much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional linear mapping plus nonlinear activation based architecture. Its multi-layer extension provides a new principle for deep neural networks which subsumes residual networks (ResNet) as a special case, and its extension to the convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.