Leveraging Text Repetitions and Denoising Autoencoders in OCR Post-correction
Main Authors | , , , ,
---|---
Format | Journal Article
Language | English
Published | 26.06.2019
Subjects |
Summary: A common approach to improving OCR quality is a post-processing step based on models that correct misdetected characters and tokens. These models are typically trained on aligned pairs of OCR-read text and their manually corrected counterparts. In this paper we show that the requirement of manually corrected training data can be alleviated by estimating the OCR errors from repeated text spans found in large OCR-read text corpora and generating synthetic training examples that follow this error distribution. We use the generated data to train a character-level neural seq2seq model and evaluate it on a manually corrected corpus of Finnish newspapers, mostly from the 19th century. The results show a clear improvement over both the underlying OCR system and previously suggested models that use uniformly generated noise.
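The synthetic-data idea the summary describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the character confusion probabilities below are invented placeholder values (the paper estimates them from repeated text spans in the corpus), and `corrupt` is a hypothetical helper name.

```python
import random

# Hypothetical error distribution P(ocr_char | true_char).
# These weights are illustrative assumptions only; the paper derives
# real estimates from repeated text spans in large OCR-read corpora.
ERROR_DISTRIBUTION = {
    "a": {"a": 0.95, "o": 0.03, "e": 0.02},
    "i": {"i": 0.94, "l": 0.04, "1": 0.02},
    "n": {"n": 0.96, "m": 0.03, "h": 0.01},
}

def corrupt(text, dist, rng=random):
    """Return a synthetic 'OCR output' for a clean input string by
    sampling each character from the estimated error distribution."""
    out = []
    for ch in text:
        choices = dist.get(ch)
        if choices is None:
            out.append(ch)  # characters without estimates pass through
        else:
            chars, weights = zip(*choices.items())
            out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

clean = "suomalainen sanomalehti"
noisy = corrupt(clean, ERROR_DISTRIBUTION)
pair = (noisy, clean)  # (source, target) training example for the seq2seq model
```

Pairs generated this way substitute for manually corrected alignments: the model learns to map noisy strings back to their clean counterparts.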
DOI: 10.48550/arxiv.1906.10907