Mutual Information Maximization for Simple and Accurate Part-Of-Speech Induction

Bibliographic Details
Main Author: Stratos, Karl
Format: Journal Article
Language: English
Published: 20.04.2018
DOI: 10.48550/arxiv.1804.07849

Summary: We address part-of-speech (POS) induction by maximizing the mutual information between the induced label and its context. We focus on two training objectives that are amenable to stochastic gradient descent (SGD): a novel generalization of the classical Brown clustering objective and a recently proposed variational lower bound. While both objectives are subject to noise in gradient updates, we show through analysis and experiments that the variational lower bound is robust whereas the generalized Brown objective is vulnerable. We obtain competitive performance on a multitude of datasets and languages with a simple architecture that encodes morphology and context.
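To make the variational lower bound mentioned in the summary concrete, the following is a minimal PyTorch sketch, not the paper's implementation. It assumes a word encoder producing label logits (giving the posterior p(z|x)) and a context encoder producing logits for a variational distribution q(z|c); the names `variational_mi_loss`, `word_logits`, and `context_logits` are illustrative.

```python
import torch
import torch.nn.functional as F

def variational_mi_loss(word_logits: torch.Tensor,
                        context_logits: torch.Tensor) -> torch.Tensor:
    """Negative of a variational lower bound on I(Z; C):

        I(Z; C) >= H(Z) - E_{x,c}[ CE(p(.|x), q(.|c)) ]

    where p(z|x) comes from the word encoder and q(z|c) is a
    variational distribution predicted from the context.
    """
    # p(z|x): label posterior over K induced labels, shape (batch, K)
    p = F.softmax(word_logits, dim=-1)
    # log q(z|c): log of the variational context distribution, shape (batch, K)
    log_q = F.log_softmax(context_logits, dim=-1)

    # Cross-entropy term: a variational upper bound on H(Z | C)
    cross_entropy = -(p * log_q).sum(dim=-1).mean()

    # Entropy of the batch-averaged label distribution: a plug-in
    # estimate of H(Z), computed from the current minibatch
    p_bar = p.mean(dim=0)
    entropy = -(p_bar * (p_bar + 1e-8).log()).sum()

    # Minimizing (cross_entropy - entropy) with SGD maximizes the bound
    return cross_entropy - entropy

# Hypothetical usage with random logits for a batch of 32 tokens
# and 45 induced labels (e.g., the Penn Treebank tag-set size):
word_logits = torch.randn(32, 45, requires_grad=True)
context_logits = torch.randn(32, 45, requires_grad=True)
loss = variational_mi_loss(word_logits, context_logits)
loss.backward()
```

Because both the cross-entropy term and the entropy term are computed per minibatch, the gradient of this loss is itself a noisy estimate, which is the noise-robustness issue the abstract contrasts between the two objectives.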