Characterizing Context Influence and Hallucination in Summarization
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 03.10.2024 |
Subjects | |
Summary: | Although Large Language Models (LLMs) have achieved remarkable performance in
numerous downstream tasks, their ubiquity has raised two significant concerns.
One is that LLMs can hallucinate by generating content that contradicts
relevant contextual information; the other is that LLMs can inadvertently leak
private information due to input regurgitation. Many prior works have
extensively studied each concern independently, but none have investigated them
simultaneously. Furthermore, auditing the influence of provided context during
open-ended generation with a privacy emphasis is understudied. To this end, we
comprehensively characterize the influence and hallucination of contextual
information during summarization. We introduce a definition for context
influence and Context-Influence Decoding (CID), and then we show that
amplifying the context (by factoring out prior knowledge) and a context that
is out of distribution with respect to prior knowledge both increase the
context's influence on an LLM. Moreover, we show that context influence
provides a lower bound on the private information leakage of CID. We
corroborate our analytical findings with experimental evaluations showing
that improving the ROUGE-L F1 score on CNN-DM for LLaMA 3 by $\textbf{10}$%
over regular decoding also leads to $\textbf{1.5x}$ more influence from the
context. We further empirically
evaluate how context influence and hallucination are affected by (1) model
capacity, (2) context size, (3) the length of the current response, and (4)
different token $n$-grams of the context. Our code can be accessed here:
https://github.com/james-flemings/context_influence. |
DOI | 10.48550/arxiv.2410.03026 |
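
The abstract does not spell out the decoding rule, but the phrase "amplifying the context (by factoring out prior knowledge)" matches the shape of contrastive, context-aware decoding. The sketch below illustrates that reading; the amplification weight `lam`, the function names, and the log-ratio influence measure are illustrative assumptions, not the paper's definitions (see the linked repository for the actual implementation).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def cid_distribution(logits_with_ctx, logits_no_ctx, lam=0.5):
    """Sketch of a contrastive next-token distribution that amplifies the
    context by factoring out the model's prior (no-context) knowledge:

        p_CID(y) ∝ p(y | context, x)^(1 + lam) / p(y | x)^lam

    `lam` is a hypothetical amplification weight, not the paper's notation.
    """
    contrast = (1.0 + lam) * logits_with_ctx - lam * logits_no_ctx
    return softmax(contrast)

def context_influence(p_cid, p_no_ctx, token_id):
    """One plausible per-token influence measure: the log-ratio between the
    CID probability and the no-context probability of the sampled token.
    This is an illustrative choice, not the paper's definition."""
    return np.log(p_cid[token_id]) - np.log(p_no_ctx[token_id])
```

Under this reading, summing the per-token log-ratios over a generated response yields a quantity with the flavor of the abstract's privacy claim: the more the context shifts the output distribution away from the model's prior, the more information about the context the output can leak.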