How Well Do Automated Linking Methods Perform? Lessons from U.S. Historical Data


Bibliographic Details
Published in: Journal of Economic Literature, Vol. 58, no. 4, p. 997
Main Authors: Bailey, Martha; Cole, Connor; Henderson, Morgan; Massey, Catherine
Format: Journal Article
Language: English
Published: United States, 01.12.2020

Summary: This paper reviews the literature in historical record linkage in the U.S. and examines the performance of widely-used record linking algorithms and common variations in their assumptions. We use two high-quality, hand-linked datasets and one synthetic ground truth to examine the direct effects of linking algorithms on data quality. We find that (1) no algorithm (including hand-linking) consistently produces representative samples; (2) 15 to 37 percent of links chosen by widely-used algorithms are classified as errors by trained human reviewers; and (3) false links are systematically related to baseline sample characteristics, showing that some algorithms may induce systematic measurement error into analyses. A case study shows that the combined effects of (1)-(3) attenuate estimates of the intergenerational income elasticity by up to 20 percent, and common variations in algorithm assumptions result in greater attenuation. As current practice moves to automate linking and increase link rates, these results highlight the important potential consequences of linking errors on inferences with linked data. We conclude with constructive suggestions for reducing linking errors and directions for future research.
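The attenuation mechanism the summary describes can be illustrated with a small simulation. This is a hedged sketch, not the paper's method: it assumes a simple log-linear model in which a share of father-son links are false (the matched father is unrelated), so the linked regressor carries no signal for those observations and the OLS slope shrinks toward zero. All parameter values (true elasticity 0.5, noise scale, error rate) are illustrative assumptions.

```python
import random

random.seed(0)

def ols_slope(x, y):
    """OLS slope of y on x, with an intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

beta = 0.5   # assumed "true" intergenerational elasticity (illustrative)
n = 20000
fathers = [random.gauss(0, 1) for _ in range(n)]
sons = [beta * f + random.gauss(0, 0.8) for f in fathers]

def estimate_with_false_links(error_rate):
    """Replace a share of links with random, unrelated fathers, then re-estimate."""
    linked = [random.gauss(0, 1) if random.random() < error_rate else f
              for f in fathers]
    return ols_slope(linked, sons)

s_clean = estimate_with_false_links(0.0)    # close to the true 0.5
s_noisy = estimate_with_false_links(0.25)   # attenuated, roughly (1 - 0.25) * 0.5
print(f"clean links: {s_clean:.2f}, 25% false links: {s_noisy:.2f}")
```

In this stylized setup the expected estimate with an error rate e is (1 - e) * beta, since false links contribute noise with the same variance as true fathers' incomes; a 25 percent error rate thus attenuates the slope by about a quarter, in line with the magnitudes the summary reports.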
ISSN: 0022-0515
DOI: 10.1257/JEL.20191526