Approximating CKY with Transformers

Bibliographic Details
Published in: arXiv.org
Main Authors: Khalighinejad, Ghazal; Liu, Ollie; Wiseman, Sam
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 05.11.2023

Summary: We investigate the ability of transformer models to approximate the CKY algorithm, using them to directly predict a sentence's parse and thus avoid the CKY algorithm's cubic dependence on sentence length. We find that on standard constituency parsing benchmarks this approach achieves competitive or better performance than comparable parsers that make use of CKY, while being faster. We also evaluate the viability of this approach for parsing under random PCFGs. Here we find that performance declines as the grammar becomes more ambiguous, suggesting that the transformer is not fully capturing the CKY computation. However, we also find that incorporating additional inductive bias is helpful, and we propose a novel approach that makes use of gradients with respect to chart representations in predicting the parse, in analogy with the CKY algorithm being a subgradient of a partition function variant with respect to the chart.
ISSN: 2331-8422
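
To make the subgradient identity in the summary concrete, here is a minimal sketch (in PyTorch, not the authors' code) for a toy unlabeled-span setting: running the CKY/inside recursion in the max-plus semiring and backpropagating through it recovers the best parse, because the (sub)gradient of the best-tree score with respect to the chart of span scores is an indicator of the spans that tree uses. The names best_parse_spans and span_scores are illustrative assumptions, not from the paper, and a single real score per span stands in for a full PCFG.

import torch

def best_parse_spans(span_scores):
    # span_scores: (n+1, n+1) tensor; entry [i, j] scores the span of
    # words i..j-1. Only the upper triangle (j > i) is used.
    n = span_scores.size(0) - 1
    scores = span_scores.detach().clone().requires_grad_(True)
    chart = {}  # (i, j) -> best score of any binary tree over [i, j)
    for i in range(n):
        chart[(i, i + 1)] = scores[i, i + 1]  # single-word spans
    # Inside recursion with max in place of sum (max-plus semiring).
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            splits = torch.stack([chart[(i, k)] + chart[(k, j)]
                                  for k in range(i + 1, j)])
            chart[(i, j)] = scores[i, j] + splits.max()
    # Backpropagating through the maxes yields a subgradient: gradient 1
    # exactly at the spans used by a best parse (ties pick one tree).
    chart[(0, n)].backward()
    return scores.grad

torch.manual_seed(0)
s = torch.randn(6, 6)  # random score chart for a 5-word sentence
g = best_parse_spans(s)
print(sorted((i, j) for i in range(5) for j in range(i + 1, 6)
             if g[i, j] > 0.5))

Note the triple loop over span widths, start positions, and split points: that is the O(n^3) dependence on sentence length that the paper's transformer-based direct prediction aims to avoid.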