https://arxiv.org/abs/1603.06042 Globally Normalized Transition-Based Neural Networks

We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. We discuss the importance of global as opposed to local normalization: a key insight is that the label bias problem implies that globally normalized models can be strictly more expressive than locally normalized models.

Our results demonstrate that feed-forward networks without recurrence can outperform recurrent models such as LSTMs when they are trained with global normalization. We further support our empirical findings with a proof showing that global normalization helps the model overcome the label bias problem from which locally normalized models suffer.
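
A minimal sketch of the local-vs-global distinction, under toy assumptions (history-dependent scores are faked with a random array indexed by the previous action, rather than produced by the paper's transition system and network). A locally normalized model applies a softmax at every step, so each step has its own partition function conditioned on the history; a globally normalized model exponentiates the total sequence score and divides by a single partition function over all sequences, which is what lets it escape the label bias problem.

```python
# Toy contrast of locally vs. globally normalized sequence models.
# Not the paper's implementation: scores, START, and the sequence length
# are hypothetical stand-ins for the transition system's scoring function.
import itertools
import numpy as np

rng = np.random.default_rng(0)
num_steps, num_actions = 3, 3
# scores[j, prev, a]: score of action a at step j given previous action prev.
scores = rng.normal(size=(num_steps, num_actions, num_actions))
START = 0  # hypothetical "previous action" before the first step

def seq_score(seq):
    prev, total = START, 0.0
    for j, a in enumerate(seq):
        total += scores[j, prev, a]
        prev = a
    return total

def local_prob(seq):
    """Locally normalized: a softmax at each step, conditioned on the history."""
    prev, p = START, 1.0
    for j, a in enumerate(seq):
        step = np.exp(scores[j, prev])
        p *= step[a] / step.sum()   # per-step partition function
        prev = a
    return p

def global_prob(seq):
    """Globally normalized (CRF-style): one partition function over whole sequences."""
    Z = sum(np.exp(seq_score(s))
            for s in itertools.product(range(num_actions), repeat=num_steps))
    return np.exp(seq_score(seq)) / Z

seq = (0, 2, 1)
print(local_prob(seq), global_prob(seq))  # both distributions sum to 1,
                                          # but they assign mass differently
```

With history-dependent scores the two probabilities generally differ; if the scores were independent of the history, the per-step normalizers would factor out and the two models would coincide, which is why the expressiveness gap only appears once decisions condition on earlier decisions.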

https://arxiv.org/abs/1603.01360 Neural Architectures for Named Entity Recognition

State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures—one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.

https://github.com/clab/stack-lstm-ner
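
A minimal sketch of the first architecture described above (the LSTM-CRF variant, not the stack-LSTM code linked above), assuming hypothetical vocabulary sizes, dimensions, and tag counts: a character-level BiLSTM builds a word representation that is concatenated with a word embedding and fed to a word-level BiLSTM producing per-token tag scores. The paper places a CRF layer on top so tag sequences are scored jointly; this sketch stops at the emission scores.

```python
# Sketch only: character-based word representations + word-level BiLSTM emitter.
# All sizes and the toy inputs below are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class CharWordBiLSTM(nn.Module):
    def __init__(self, n_chars=100, n_words=10_000, n_tags=9,
                 char_dim=25, word_dim=100, hidden=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_dim, bidirectional=True,
                                 batch_first=True)
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.word_lstm = nn.LSTM(word_dim + 2 * char_dim, hidden,
                                 bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden, n_tags)   # per-token tag scores

    def forward(self, word_ids, char_ids):
        # char_ids: (n_tokens, max_word_len); the final forward and backward
        # states of the character BiLSTM form each word's character representation.
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        char_repr = torch.cat([h[0], h[1]], dim=-1)        # (n_tokens, 2*char_dim)
        tokens = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        ctx, _ = self.word_lstm(tokens.unsqueeze(0))       # (1, n_tokens, 2*hidden)
        return self.emit(ctx).squeeze(0)                   # (n_tokens, n_tags)

# Toy usage with made-up ids for a 4-token sentence:
model = CharWordBiLSTM()
word_ids = torch.tensor([5, 42, 7, 0])
char_ids = torch.randint(0, 100, (4, 6))
print(model(word_ids, char_ids).shape)  # torch.Size([4, 9])
```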

https://pdfs.semanticscholar.org/9361/db9a878dc9286cfbefd8a95aa2b4383abe9d.pdf Inductive Dependency Parsing

https://arxiv.org/abs/1603.06598 Stack-propagation: Improved Representation Learning for Syntax

https://www.youtube.com/watch?v=-QWYaPAltNs Word Representation Learning without unk Assumptions