Product Description and QA Assisted Self-Supervised Opinion Summarization

Bibliographic Details
Published in: arXiv.org
Main Authors: Tejpalsingh Siledar, Rupasai Rangaraju, Sankara Sri Raghava Ravindra Muddu, Suman Banerjee, Amey Patil, Sudhanshu Shekhar Singh, Muthusamy Chelliah, Nikesh Garera, Swaprava Nath, Pushpak Bhattacharyya
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 08.04.2024
Summary: In e-commerce, opinion summarization is the process of summarizing the consensus opinions found in product reviews. However, the potential of additional sources such as product descriptions and question-answers (QA) has been considered less often. Moreover, the absence of any supervised training data makes this task challenging. To address this, we propose a novel synthetic dataset creation (SDC) strategy that leverages information from reviews as well as additional sources to select one of the reviews as a pseudo-summary, enabling supervised training. Our Multi-Encoder Decoder framework for Opinion Summarization (MEDOS) employs a separate encoder for each source, enabling effective selection of information while generating the summary. For evaluation, due to the unavailability of test sets with additional sources, we extend the Amazon, Oposum+, and Flipkart test sets and leverage ChatGPT to annotate summaries. Experiments across nine test sets demonstrate that the combination of our SDC approach and MEDOS model achieves on average a 14.5% improvement in ROUGE-1 F1 over the SOTA. Moreover, comparative analysis underlines the significance of incorporating additional sources for generating more informative summaries. Human evaluations further indicate that MEDOS scores relatively higher in coherence and fluency, with 0.41 and 0.5 (on a -1 to 1 scale) respectively, compared to existing models. To the best of our knowledge, we are the first to generate opinion summaries leveraging additional sources in a self-supervised setting.
ISSN: 2331-8422
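
The summary describes MEDOS only at a high level. As a rough illustration, the following is a minimal PyTorch sketch of a multi-encoder decoder in that spirit: one encoder per source (reviews, product description, QA), with the decoder cross-attending to the combined encoder outputs. This is not the authors' implementation; the model sizes, the fusion-by-concatenation choice, and all names are assumptions made for illustration only.

import torch
import torch.nn as nn

class MultiEncoderDecoder(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, nhead=8, num_layers=4, num_sources=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # One encoder per source (e.g. reviews, product description, QA);
        # nn.TransformerEncoder deep-copies the layer, so parameters are not shared.
        self.encoders = nn.ModuleList(
            [nn.TransformerEncoder(enc_layer, num_layers) for _ in range(num_sources)]
        )
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, source_ids, target_ids):
        # source_ids: list of (batch, src_len) token-id tensors, one per source.
        memories = [enc(self.embed(ids)) for enc, ids in zip(self.encoders, source_ids)]
        memory = torch.cat(memories, dim=1)  # fuse sources by concatenating along time
        tgt_len = target_ids.size(1)
        causal = torch.triu(torch.full((tgt_len, tgt_len), float("-inf")), diagonal=1)
        out = self.decoder(self.embed(target_ids), memory, tgt_mask=causal)
        return self.lm_head(out)  # (batch, tgt_len, vocab_size)

if __name__ == "__main__":
    model = MultiEncoderDecoder()
    sources = [torch.randint(0, 32000, (2, 40)) for _ in range(3)]  # reviews, description, QA
    target = torch.randint(0, 32000, (2, 20))  # pseudo-summary token ids
    print(model(sources, target).shape)  # torch.Size([2, 20, 32000])

In this sketch the per-source encoder outputs are simply concatenated along the sequence dimension before decoding; the paper's actual model may fuse the sources differently.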