Weight prediction and recognition of latent subject terms based on the fusion of explicit & implicit information about keyword

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 126, p. 107161
Main Authors: Li, Shuqing; Jiang, Mingfeng; Jiang, Weiwei; Huang, Jingwang; Zhang, Hu; Zhang, Zhiwang
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.11.2023

Summary: Weight prediction and recognition of subject terms in documents is widely used in applications such as literature recommendation and keyword retrieval. Unlike traditional methods, which only search for subject terms among the keywords already present in a document, this paper proposes a method for recognizing latent subject terms outside the document, based on a one-class collaborative filtering algorithm that fuses explicit and implicit information. A matrix factorization model, built on an analysis of document activity and subject-term popularity, measures the correlation probability between a document and subject terms that do not appear in it. After these subject terms are divided into Latent Subject Terms (LST) and Irrelevant Subject Terms (IST), two methods for predicting their weights are introduced: Hybrid Filling with Preference Coefficients (HFPC) and Zero Filling. To verify the effectiveness of subject term recognition, several collaborative filtering recommendation algorithms are run on the filled document-keyword matrix. Without modifying these algorithms, MAE and FCP can be improved by up to 28.01% and 22.79%, while P@N and NDCG@N can be improved by up to 22.37% and 27.06%, respectively.
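The pipeline the summary describes (one-class matrix factorization over a document-keyword weight matrix, then splitting unseen terms into latent vs. irrelevant and filling their weights) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the confidence weighting of unobserved entries, the score threshold `tau`, and the preference coefficient `alpha` are all assumptions standing in for the paper's activity/popularity analysis and HFPC details.

```python
import numpy as np

def one_class_mf(R, k=4, lr=0.01, reg=0.05, epochs=200, seed=0):
    """Minimal one-class (implicit-feedback) matrix factorization sketch.

    R: document x subject-term weight matrix; zeros mark terms that do
    not appear in a document and are treated as weak negatives via a
    low confidence weight (an assumption, not the paper's scheme).
    Returns a score matrix approximating document-term correlation.
    """
    rng = np.random.default_rng(seed)
    n_docs, n_terms = R.shape
    P = 0.1 * rng.standard_normal((n_docs, k))   # document factors
    Q = 0.1 * rng.standard_normal((n_terms, k))  # term factors
    W = np.where(R > 0, 1.0, 0.1)                # confidence weights
    for _ in range(epochs):
        E = W * (R - P @ Q.T)                    # weighted residual
        P += lr * (E @ Q - reg * P)              # gradient steps with
        Q += lr * (E.T @ P - reg * Q)            # L2 regularization
    return P @ Q.T

def fill_matrix(R, scores, tau=0.5, alpha=0.7):
    """Split unseen terms by predicted score: latent (score >= tau) get
    an HFPC-style fill scaled by preference coefficient alpha; the rest
    are treated as irrelevant and stay zero (Zero Filling)."""
    filled = R.copy()
    unseen = R == 0
    latent = unseen & (scores >= tau)
    filled[latent] = alpha * scores[latent]      # hypothetical HFPC fill
    return filled
```

The filled matrix can then be handed to any off-the-shelf collaborative filtering recommender unchanged, which mirrors the evaluation setup the summary describes.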
ISSN: 0952-1976, 1873-6769
DOI: 10.1016/j.engappai.2023.107161