COLTR: Semi-supervised Learning to Rank with Co-training and Over-parameterization for Web Search

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, Vol. 35, No. 12, pp. 1-14
Main Authors: Li, Yuchen; Xiong, Haoyi; Wang, Qingzhong; Kong, Linghe; Liu, Hao; Li, Haifang; Bian, Jiang; Wang, Shuaiqiang; Chen, Guihai; Dou, Dejing; Yin, Dawei
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2023
Summary: While learning to rank (LTR) has been widely used in web search to prioritize the most relevant webpages among the contents retrieved for the input queries, traditional LTR models fail to deliver decent performance due to two main reasons: 1) the lack of well-annotated query-webpage pairs with ranking scores covering search queries of various popularity, and 2) ill-trained models based on a limited number of training samples, with poor generalization performance. To improve the performance of LTR models, tremendous efforts have been made on the above two aspects, such as enlarging training sets with pseudo-labels of ranking scores obtained by self-training, or refining the features used for LTR through feature extraction and dimension reduction. Though LTR performance has been marginally improved, we believe these methods could be further strengthened in the newly-fashioned "interpolating regime". Specifically, instead of lowering the number of features used for LTR models, our work proposes to transform the original data with random Fourier features, so as to over-parameterize the downstream LTR models (e.g., GBRank or LightGBM) with features of ultra-high dimensionality and achieve superb generalization performance. Furthermore, rather than self-training with pseudo-labels produced by the same LTR model in a "self-tuned" fashion, the proposed method exploits the diversity of predictions between the listwise and pointwise LTR models while co-training both models with a cyclic labeling-prediction pipeline in a "ping-pong" manner. We deploy the proposed Co-trained and Over-parameterized LTR system, COLTR, at Baidu Search and evaluate COLTR against a large number of baseline methods. The results show that COLTR achieves $\Delta NDCG_{4}=3.64\%\sim 4.92\%$ over the baselines under various ratios of labeled samples. We also conduct a 7-day A/B test on the realistic web traffic of Baidu Search, where we still observe a significant performance improvement of around $\Delta NDCG_{4}=0.17\%\sim 0.92\%$ in real-world applications. COLTR performs consistently in both online and offline experiments.
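The summary names two concrete ingredients: an over-parameterizing random Fourier feature map, and a co-training loop between a pointwise and a listwise ranker that exchange pseudo-labels. Below is a minimal Python sketch of how those two pieces could fit together, assuming scikit-learn's RBFSampler for the random Fourier features and LightGBM's LGBMRegressor/LGBMRanker as stand-ins for the pointwise and listwise models; all shapes, hyperparameters, and the pseudo-label handling are illustrative assumptions, not the paper's actual configuration.

import numpy as np
from sklearn.kernel_approximation import RBFSampler  # random Fourier features
from lightgbm import LGBMRanker, LGBMRegressor

rng = np.random.RandomState(0)

# Toy labeled and unlabeled query-webpage features (shapes are hypothetical).
X_lab = rng.randn(200, 50)       # 10 labeled queries x 20 webpages, 50 raw features
y_lab = rng.randint(0, 5, 200)   # graded relevance labels 0..4
group_lab = [20] * 10            # per-query group sizes for the listwise ranker
X_unlab = rng.randn(1000, 50)    # unlabeled query-webpage pairs

# (1) Over-parameterization: lift the 50-d raw features into a much higher
# dimensionality with a random Fourier feature map (RBFSampler approximates
# an RBF kernel expansion).
rff = RBFSampler(gamma=0.1, n_components=4096, random_state=0)
Z_lab = rff.fit_transform(X_lab)
Z_unlab = rff.transform(X_unlab)

pointwise = LGBMRegressor(n_estimators=200)  # pointwise stand-in
listwise = LGBMRanker(n_estimators=200)      # listwise (LambdaRank) stand-in

# (2) "Ping-pong" co-training: each model periodically pseudo-labels the
# unlabeled pool for its counterpart instead of for itself.
Z_pw, y_pw = Z_lab, y_lab
for _ in range(3):  # a few cycles; a real pipeline would iterate further
    pointwise.fit(Z_pw, y_pw)
    listwise.fit(Z_lab, y_lab, group=group_lab)

    # Listwise predictions become pseudo ranking scores for the pointwise learner.
    pseudo_scores = listwise.predict(Z_unlab)
    Z_pw = np.vstack([Z_lab, Z_unlab])
    y_pw = np.concatenate([y_lab, pseudo_scores])
    # The symmetric direction (pointwise pseudo-labels feeding the listwise
    # ranker, which also needs query groupings for the unlabeled pairs) and any
    # confidence-based filtering of pseudo-labels are omitted for brevity.

In the paper's deployment the downstream rankers are GBRank/LightGBM models trained at Baidu Search scale; the sketch only illustrates the data flow of the over-parameterized features and the cyclic labeling-prediction loop described in the summary.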
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/TKDE.2023.3270750