A Hybrid Deep Neural Network for Multimodal Personalized Hashtag Recommendation

Bibliographic Details
Published in: IEEE Transactions on Computational Social Systems, Vol. 10, No. 5, pp. 2439-2459
Main Authors: Bansal, Shubhi; Gowda, Kushaan; Kumar, Nagendra
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2023
More Information
Summary: Users share information on social media platforms by posting visual and textual content. Due to the massive influx of user-generated content, hashtags are extensively used to manage, organize, and categorize the content. Despite the usability of hashtags, many social media users refrain from assigning hashtags to their posts owing to the uncertainty in choosing appropriate hashtags. Several methods have been proposed to recommend hashtags using content-based information; however, the multimodality and personalization aspects of hashtag recommendation have rarely been addressed. In light of the above, we propose a multimoDal pErSonalIzed hashtaG recommeNdation (DESIGN) method that incorporates relevant information embedded in the textual and visual modalities of social media posts and models user interests to recommend a plausible set of hashtags. We use word-level attention (WA) on the textual modality, followed by a parallel co-attention (PCA) mechanism to model the interaction between the textual and visual modalities. Unlike existing works, we present a hybrid deep neural network that capitalizes on hashtags produced by multilabel classification (MLC) and sequence generation (SG) to recommend candidate hashtags for social media posts. We perform our experiments on social media datasets containing textual, visual, and user information. Experimental results show that the proposed method outperforms the state-of-the-art methods.
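As a rough illustration of the parallel co-attention (PCA) step mentioned in the summary, the sketch below implements a generic parallel co-attention block in PyTorch. It is not the authors' DESIGN implementation: the layer names (W_b, W_t, W_v), the hidden size k, and the tanh/softmax formulation are assumptions borrowed from standard co-attention designs. The attended text and visual vectors it returns would then feed the hybrid MLC and SG heads described in the summary.

```python
# Minimal sketch of a parallel co-attention block (illustrative assumptions,
# not the authors' exact DESIGN architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelCoAttention(nn.Module):
    def __init__(self, dim: int, k: int = 256):
        super().__init__()
        self.W_b = nn.Parameter(torch.randn(dim, dim) * 0.01)  # text-image affinity weights
        self.W_t = nn.Linear(dim, k, bias=False)                # text projection
        self.W_v = nn.Linear(dim, k, bias=False)                # visual projection
        self.w_ht = nn.Linear(k, 1, bias=False)                 # text attention scores
        self.w_hv = nn.Linear(k, 1, bias=False)                 # visual attention scores

    def forward(self, T: torch.Tensor, V: torch.Tensor):
        # T: (batch, n_words, dim) word-level text features (after word-level attention)
        # V: (batch, n_regions, dim) image-region features
        C = torch.tanh(T @ self.W_b @ V.transpose(1, 2))          # affinity: (batch, n_words, n_regions)
        H_t = torch.tanh(self.W_t(T) + C @ self.W_v(V))           # text hidden states guided by image
        H_v = torch.tanh(self.W_v(V) + C.transpose(1, 2) @ self.W_t(T))  # image hidden states guided by text
        a_t = F.softmax(self.w_ht(H_t), dim=1)                    # attention over words
        a_v = F.softmax(self.w_hv(H_v), dim=1)                    # attention over regions
        t_hat = (a_t * T).sum(dim=1)                              # attended text vector
        v_hat = (a_v * V).sum(dim=1)                              # attended visual vector
        return t_hat, v_hat

# Example: batch of 2 posts, 12 words, 49 image regions, 512-dim features
pca = ParallelCoAttention(dim=512)
t_hat, v_hat = pca(torch.randn(2, 12, 512), torch.randn(2, 49, 512))
print(t_hat.shape, v_hat.shape)  # torch.Size([2, 512]) torch.Size([2, 512])
```

In a hybrid setup of the kind the summary describes, the fused post representation (e.g., a combination of t_hat, v_hat, and a user embedding) would be passed both to a sigmoid-activated multilabel classification head over the hashtag vocabulary and to a sequence-generation decoder, with the two candidate sets merged to produce the final recommendations; the exact fusion and merging strategy here is an assumption.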
ISSN: 2329-924X
2373-7476
DOI: 10.1109/TCSS.2022.3184307