Earnings-21: A Practical Benchmark for ASR in the Wild

Bibliographic Details
Main Authors: Del Rio, Miguel; Delworth, Natalie; Westerman, Ryan; Huang, Michelle; Bhandari, Nishchal; Palakapilly, Joseph; McNamara, Quinten; Dong, Joshua; Zelasko, Piotr; Jette, Miguel
Format: Journal Article
Language: English
Published: 22.04.2021
Summary: Commonly used speech corpora inadequately challenge academic and commercial ASR systems. In particular, speech corpora lack metadata needed for detailed analysis and WER measurement. In response, we present Earnings-21, a 39-hour corpus of earnings calls containing entity-dense speech from nine different financial sectors. This corpus is intended to benchmark ASR systems in the wild with special attention towards named entity recognition. We benchmark four commercial ASR models, two internal models built with open-source tools, and an open-source LibriSpeech model and discuss their differences in performance on Earnings-21. Using our recently released fstalign tool, we provide a candid analysis of each model's recognition capabilities under different partitions. Our analysis finds that ASR accuracy for certain NER categories is poor, presenting a significant impediment to transcript comprehension and usage. Earnings-21 bridges academic and commercial ASR system evaluation and enables further research on entity modeling and WER on real world audio.
DOI: 10.48550/arxiv.2104.11348
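Illustrative note: the summary centers on WER measurement of ASR transcripts. As a minimal sketch, assuming the open-source Python library jiwer is available, a basic WER computation between a reference transcript and an ASR hypothesis could look like the following; the transcript strings are invented examples, and this does not reproduce the paper's fstalign-based analysis.

import jiwer  # third-party library: pip install jiwer

# Hypothetical reference transcript and ASR output (not taken from Earnings-21).
reference = "revenue in the third quarter grew twelve percent year over year"
hypothesis = "revenue in the third quarter grew twelve per cent year over year"

# jiwer aligns the two word sequences and returns
# (substitutions + deletions + insertions) / number of reference words.
print(f"WER: {jiwer.wer(reference, hypothesis):.3f}")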