Mutual Information Maximization for Simple and Accurate Part-Of-Speech Induction
| Field | Value |
|---|---|
| Main Author | Karl Stratos |
| Format | Journal Article |
| Language | English |
| Published | 20.04.2018 |
| DOI | 10.48550/arxiv.1804.07849 |
Summary: We address part-of-speech (POS) induction by maximizing the mutual information between the induced label and its context. We focus on two training objectives that are amenable to stochastic gradient descent (SGD): a novel generalization of the classical Brown clustering objective and a recently proposed variational lower bound. While both objectives are subject to noise in gradient updates, we show through analysis and experiments that the variational lower bound is robust whereas the generalized Brown objective is vulnerable. We obtain competitive performance on a multitude of datasets and languages with a simple architecture that encodes morphology and context.
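The variational lower bound mentioned in the summary is of the standard form $I(Z; C) \geq H(Z) + \mathbb{E}[\log q(Z \mid C)]$, obtained by upper-bounding the conditional entropy $H(Z \mid C)$ with a cross entropy under a variational distribution $q$. Below is a minimal PyTorch sketch of training such an objective with SGD; it is not the authors' implementation, and the encoder architectures, label count, vocabulary size, and context window are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a variational lower bound on
# I(label; context): maximize H(Z) + E[ sum_z p(z|word) log q(z|context) ].
# All sizes and architectures below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LABELS = 45   # assumed number of induced POS tags
EMB_DIM = 64      # assumed embedding size
VOCAB = 10_000    # assumed vocabulary size

# p(z | word): labels a word; q(z | context): predicts the label from context.
word_enc = nn.Sequential(nn.Embedding(VOCAB, EMB_DIM), nn.Linear(EMB_DIM, NUM_LABELS))
ctx_enc = nn.Sequential(nn.EmbeddingBag(VOCAB, EMB_DIM), nn.Linear(EMB_DIM, NUM_LABELS))

def neg_variational_mi(words, contexts):
    """Negative variational lower bound on I(Z; context), estimated on a batch."""
    p = F.softmax(word_enc(words), dim=-1)            # p(z | word), shape (B, K)
    log_q = F.log_softmax(ctx_enc(contexts), dim=-1)  # log q(z | context), shape (B, K)
    cross = (p * log_q).sum(-1).mean()                # E[ log q(z | c) ] under p(z | word)
    marginal = p.mean(0)                              # batch estimate of the marginal p(z)
    entropy = -(marginal * (marginal + 1e-8).log()).sum()
    return -(entropy + cross)                         # SGD minimizes this

opt = torch.optim.Adam(
    list(word_enc.parameters()) + list(ctx_enc.parameters()), lr=1e-3
)
words = torch.randint(0, VOCAB, (32,))       # toy batch of word ids
contexts = torch.randint(0, VOCAB, (32, 4))  # toy batch of 4-word context windows
opt.zero_grad()
loss = neg_variational_mi(words, contexts)
loss.backward()
opt.step()
```

The entropy term over the batch-level label marginal keeps the induced labels from collapsing onto a single tag, while the cross term rewards labels that the context can predict; together they estimate the mutual information lower bound.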