Cost-effective on-demand associative author name disambiguation
Published in: Information Processing & Management, Vol. 48, No. 4, pp. 680-697
Main Authors:
Format: Journal Article
Language: English
Published: Kidlington: Elsevier Ltd, 01.07.2012
Summary:
► Rules are extracted on demand, based only on the most suitable examples.
► Our self-training solution drastically reduces the number of examples required.
► We are able to detect novel/unseen authors in the test set.
► Gains from 12% to more than 400% were obtained against state-of-the-art methods.

Authorship disambiguation is an urgent issue that affects the quality of digital library services, and supervised solutions delivering state-of-the-art effectiveness have been proposed for it. However, particular challenges may prevent such techniques from reaching their full potential: the prohibitive cost of labeling vast amounts of examples (there are many ambiguous authors), the huge hypothesis space (there are several features and authors from which many different disambiguation functions may be derived), and the skewed author popularity distribution (few authors are very prolific, while most appear in only a few citations). In this article, we introduce an associative author name disambiguation approach that identifies authorship by extracting, from training examples, rules associating citation features (e.g., coauthor names, work title, publication venue) to specific authors. As our main contribution we propose three associative author name disambiguators: (1) EAND (Eager Associative Name Disambiguation), our basic method that explores association rules for name disambiguation; (2) LAND (Lazy Associative Name Disambiguation), which extracts rules on a demand-driven basis at disambiguation time, reducing the hypothesis space by focusing on the examples most suitable for the task; and (3) SLAND (Self-Training LAND), which extends LAND with self-training capabilities, drastically reducing the number of examples required to build effective disambiguation functions, besides being able to detect novel/unseen authors in the test set. Experiments demonstrate that all our disambiguators are effective and that, in particular, SLAND outperforms state-of-the-art supervised disambiguators, providing gains ranging from 12% to more than 400% and proving extremely effective and practical.
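The demand-driven rule extraction described for LAND can be illustrated with a minimal sketch. The toy data, the feature encoding (`coauthor:`/`venue:` tokens), and the confidence-sum scoring below are illustrative assumptions for exposition, not the authors' implementation:

```python
from collections import defaultdict

# Hypothetical toy data: each citation is a set of features
# (coauthor names, venue tokens) labeled with a disambiguated author id.
training = [
    ({"coauthor:j_smith", "venue:IPM"}, "A1"),
    ({"coauthor:j_smith", "venue:SIGIR"}, "A1"),
    ({"coauthor:k_lee", "venue:VLDB"}, "A2"),
]

def disambiguate(test_features):
    """Lazy, demand-driven scoring: only training citations sharing at
    least one feature with the test citation are projected, and simple
    {feature} -> author rules are scored by their confidence."""
    # Project the training data onto the test citation's features,
    # discarding citations that share nothing with it.
    relevant = [(f & test_features, a) for f, a in training if f & test_features]
    # Count, per shared feature, how often it co-occurs with each author.
    feat_author = defaultdict(lambda: defaultdict(int))
    feat_total = defaultdict(int)
    for feats, author in relevant:
        for f in feats:
            feat_author[f][author] += 1
            feat_total[f] += 1
    # Score each candidate author by summing rule confidences
    # conf({f} -> a) = count(f, a) / count(f).
    scores = defaultdict(float)
    for f, authors in feat_author.items():
        for a, n in authors.items():
            scores[a] += n / feat_total[f]
    # Returning None when no rule fires is a crude stand-in for the
    # novel/unseen-author detection the article describes.
    return max(scores, key=scores.get) if scores else None

print(disambiguate({"coauthor:j_smith", "venue:ICML"}))  # -> A1
```

Because rules are induced only from the projected examples at query time, the hypothesis space shrinks to whatever the test citation actually touches, which is the intuition behind the lazy variant.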
ISSN: 0306-4573, 1873-5371
DOI: 10.1016/j.ipm.2011.08.005