https://arxiv.org/pdf/1703.06408v1.pdf Multilevel Context Representation for Improving Object Recognition
  
https://arxiv.org/abs/1612.08083v3 Language Modeling with Gated Convolutional Networks

https://arxiv.org/abs/1711.06640v2 Neural Motifs: Scene Graph Parsing with Global Context

Our analysis motivates a new baseline: given object detections, predict the most frequent relation between object pairs with the given labels, as seen in the training set. This baseline improves on the previous state of the art by an average relative gain of 3.6% across evaluation settings. We then introduce Stacked Motif Networks, a new architecture designed to capture higher-order motifs in scene graphs that further improves over our strong baseline by an average 7.1% relative gain. Our code is available at github.com/rowanz/neural-motifs.
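
This frequency baseline is simple enough to make concrete. A minimal sketch, assuming the training annotations reduce to (subject label, relation, object label) triples; the names and toy data below are illustrative, not the authors' code:

```python
from collections import Counter, defaultdict

# Hypothetical training triples: (subject label, relation, object label).
train_triples = [
    ("man", "riding", "horse"),
    ("man", "on", "horse"),
    ("man", "riding", "horse"),
    ("dog", "on", "grass"),
]

# Count how often each relation links a given (subject, object) label pair.
relation_counts = defaultdict(Counter)
for subj, rel, obj in train_triples:
    relation_counts[(subj, obj)][rel] += 1

def predict_relation(subj_label, obj_label):
    """Predict the most frequent training-set relation for this label pair."""
    counts = relation_counts.get((subj_label, obj_label))
    return counts.most_common(1)[0][0] if counts else None

print(predict_relation("man", "horse"))  # -> "riding"
```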

https://arxiv.org/abs/1808.08493 Contextual Parameter Generation for Universal Neural Machine Translation

https://arxiv.org/abs/1809.01997 Dual Ask-Answer Network for Machine Reading Comprehension

https://openreview.net/forum?id=BylBfnRqFm CAML: Fast Context Adaptation via Meta-Learning

https://arxiv.org/pdf/1810.03642v1.pdf CAML: Fast Context Adaptation via Meta-Learning

CAML: Fast Context Adaptation via Meta-Learning
Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, Shimon Whiteson
(Submitted on 8 Oct 2018; latest version 12 Oct 2018 (v2))
We propose CAML, a meta-learning method for fast adaptation that partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks.
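
A rough sketch of that parameter split, assuming a toy regression meta-learning setup in PyTorch; the dimensions, learning rates, and tasks are invented for illustration, and this is not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_task():
    # Toy regression task standing in for a real task distribution.
    x = torch.randn(8, 3)
    w = torch.randn(3, 1)
    y = x @ w
    return x[:4], y[:4], x[4:], y[4:]

# Shared parameters (theta): meta-trained across tasks. The model sees its
# input concatenated with a 2-dimensional context vector (phi).
model = nn.Sequential(nn.Linear(3 + 2, 32), nn.ReLU(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def forward(x, phi):
    # Append the (broadcast) context vector to every input row.
    return model(torch.cat([x, phi.expand(x.shape[0], -1)], dim=1))

for _ in range(100):
    x_tr, y_tr, x_te, y_te = make_task()
    phi = torch.zeros(1, 2, requires_grad=True)  # fresh context per task

    # Inner loop: adapt only the context parameters on this task.
    inner_loss = F.mse_loss(forward(x_tr, phi), y_tr)
    (grad_phi,) = torch.autograd.grad(inner_loss, phi, create_graph=True)
    phi_adapted = phi - 0.1 * grad_phi

    # Outer loop: update the shared parameters through the adaptation step.
    outer_loss = F.mse_loss(forward(x_te, phi_adapted), y_te)
    meta_opt.zero_grad()
    outer_loss.backward()
    meta_opt.step()
```

Because only the small context vector is adapted per task, adaptation is fast and the shared network cannot overfit an individual task's few examples.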

https://arxiv.org/abs/1810.08135 Contextual Topic Modeling For Dialog Systems

Our work on detecting conversation topics and keywords can be used to guide chatbots towards coherent dialog.

https://www.nature.com/articles/s41467-018-06781-2 Reference-point centering and range-adaptation enhance human reinforcement learning at the cost of irrational preferences

https://arxiv.org/abs/1901.03415v1 Context Aware Machine Learning

The embedding of an observation can also be decomposed into a weighted sum of two vectors, representing its context-free and context-sensitive parts.
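
Taken literally, that decomposition is just a weighted sum of two vectors; a minimal numerical sketch with made-up weights and dimensions:

```python
import numpy as np

# Hypothetical numbers: a 4-dimensional embedding split into a
# context-free part and a context-sensitive part.
e_free = np.random.randn(4)       # context-free component
e_sensitive = np.random.randn(4)  # context-sensitive component
w_free, w_sensitive = 0.7, 0.3    # made-up mixing weights

# The observation's embedding as a weighted sum of the two parts.
embedding = w_free * e_free + w_sensitive * e_sensitive
```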

Our principle leads to a new architecture for modeling attention in deep neural networks. More surprisingly, it provides a novel understanding of the gates and equations defined by the long short-term memory (LSTM) model, which also leads to a new model that converges significantly faster and achieves much lower prediction errors. Furthermore, our principle also inspires a new type of generic neural network layer that better resembles real biological neurons than the traditional architecture of a linear mapping followed by a nonlinear activation. Its multi-layer extension provides a new principle for deep neural networks that subsumes the residual network (ResNet) as a special case, and its extension to the convolutional neural network model accounts for irrelevant input (e.g., background in an image) in addition to filtering.