Boosting Large Language Models with Continual Learning for Aspect-based Sentiment Analysis
Format | Journal Article |
Language | English |
Published | 08.05.2024 |
Summary: Aspect-based sentiment analysis (ABSA) is an important subtask of sentiment analysis that aims to extract aspects and predict their sentiments. Most existing studies focus on improving performance on the target domain by fine-tuning domain-specific models (trained on source domains) on the target domain dataset. Few works propose continual learning tasks for ABSA, which aim to acquire the ability for the target domain while preserving the abilities learned on previous domains. In this paper, we propose a Large Language Model-based Continual Learning (LLM-CL) model for ABSA. First, we design a domain knowledge decoupling module to learn a domain-invariant adapter and separate domain-variant adapters under an orthogonal constraint. Then, we introduce a domain knowledge warmup strategy to align the representations of domain-invariant and domain-variant knowledge. In the test phase, we index the corresponding domain-variant knowledge via domain positioning, so each sample's domain ID is not required. Extensive experiments over 19 datasets indicate that our LLM-CL model achieves new state-of-the-art performance.
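The decoupling and test-time positioning described in the summary can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration inferred from the abstract, not the paper's implementation: the bottleneck adapter shape, the squared-Frobenius orthogonality penalty, and the nearest-centroid rule for domain positioning (`DecoupledAdapters`, `orthogonal_loss`, `position_domain`) are all assumptions.

```python
import torch
import torch.nn as nn


class DecoupledAdapters(nn.Module):
    """Sketch of domain knowledge decoupling: one shared (domain-invariant)
    adapter plus one adapter per domain (domain-variant). Hypothetical
    structure inferred from the abstract, not the paper's released code."""

    def __init__(self, hidden_size: int, bottleneck: int, num_domains: int):
        super().__init__()

        def make_adapter() -> nn.Module:
            return nn.Sequential(
                nn.Linear(hidden_size, bottleneck),
                nn.ReLU(),
                nn.Linear(bottleneck, hidden_size),
            )

        self.invariant = make_adapter()
        self.variants = nn.ModuleList(make_adapter() for _ in range(num_domains))

    def forward(self, h: torch.Tensor, domain_id: int):
        # h: (batch, hidden) representations from the frozen LLM backbone.
        return self.invariant(h), self.variants[domain_id](h)


def orthogonal_loss(h_inv: torch.Tensor, h_var: torch.Tensor) -> torch.Tensor:
    # Squared Frobenius norm of the cross-correlation between the two
    # representation spaces; zero when they are orthogonal. One common way
    # to realize the abstract's "orthogonal constraint" (an assumption here).
    return torch.linalg.matrix_norm(h_inv.transpose(0, 1) @ h_var, ord="fro") ** 2


def position_domain(h: torch.Tensor, centroids: torch.Tensor) -> int:
    # Test-time "domain positioning", assumed here to be nearest-centroid:
    # pick the domain-variant adapter whose stored representation centroid
    # is closest to the sample, so no ground-truth domain ID is needed.
    dists = torch.cdist(h.mean(dim=0, keepdim=True), centroids)  # (1, D)
    return int(dists.argmin())
```

Under these assumptions, training on domain d would combine the task loss with a weighted `orthogonal_loss(h_inv, h_var)`, and at test time `position_domain` would select the adapter index in place of a ground-truth domain ID.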
DOI: 10.48550/arXiv.2405.05496