Learning to Retrieve Passages without Supervision
Format: Journal Article
Language: English
Published: 14.12.2021
Summary:
Dense retrievers for open-domain question answering (ODQA) have been shown to achieve impressive performance by training on large datasets of question-passage pairs. In this work we ask whether this dependence on labeled data can be reduced via unsupervised pretraining that is geared towards ODQA. We show this is in fact possible, via a novel pretraining scheme designed for retrieval. Our "recurring span retrieval" approach uses recurring spans across passages in a document to create pseudo examples for contrastive learning. Our pretraining scheme directly controls for term overlap across pseudo queries and relevant passages, thus allowing it to model both lexical and semantic relations between them. The resulting model, named Spider, performs surprisingly well without any labeled training examples on a wide range of ODQA datasets. Specifically, it significantly outperforms all other pretrained baselines in a zero-shot setting, and is competitive with BM25, a strong sparse baseline. Moreover, a hybrid retriever over Spider and BM25 improves over both, and is often competitive with DPR models, which are trained on tens of thousands of examples. Finally, notable gains are observed when using Spider as an initialization for supervised training.
DOI: 10.48550/arxiv.2112.07708