Think Thrice Before You Act: Progressive Thought Refinement in Large Language Models
Main Authors | , , , , , , , , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 17.10.2024 |
Summary: | Recent advancements in large language models (LLMs) have demonstrated that
progressive refinement, rather than providing a single answer, results in more
accurate and thoughtful outputs. However, existing methods often rely heavily
on supervision signals to evaluate previous responses, making it difficult to
assess output quality in more open-ended scenarios effectively. Additionally,
these methods are typically designed for specific tasks, which limits their
generalization to new domains. To address these limitations, we propose
Progressive Thought Refinement (PTR), a framework that enables LLMs to refine
their responses progressively. PTR operates in two phases: (1) Thought data
construction phase: we propose a weak-and-strong model collaborative selection
strategy to build a high-quality progressive refinement dataset, ensuring
logical consistency from thoughts to answers and that the answers are gradually
refined in each round. (2) Thought-mask fine-tuning phase: we design a training
structure that masks the "thought" and adjusts loss weights to encourage LLMs
to refine their prior thoughts, teaching them to implicitly understand "how to
improve" rather than "what is correct." Experimental results show that PTR significantly
enhances LLM performance across ten diverse tasks (avg. from 49.6% to 53.5%)
without task-specific fine-tuning. Notably, in more open-ended tasks, LLMs also
demonstrate substantial improvements in the quality of responses beyond mere
accuracy, suggesting that PTR truly teaches LLMs to self-improve over time. |
---|---|
DOI: | 10.48550/arxiv.2410.13413 |
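
To make the thought-mask fine-tuning idea from the abstract concrete, here is a minimal sketch of a masked, round-weighted language-modeling loss. It is not the authors' implementation: the segment labeling scheme, the `round_weight` factor, and the `thought_mask_loss` name are all illustrative assumptions; only the general idea of excluding "thought" tokens from the loss while weighting successive answer rounds follows the abstract.

```python
# A minimal sketch, assuming PyTorch and a hypothetical per-token segment
# labeling of each training sequence; this is NOT the paper's released code.
import torch
import torch.nn.functional as F

def thought_mask_loss(logits, labels, segments, round_weight=1.25):
    """Next-token cross-entropy with "thought" tokens masked out.

    logits:   (batch, seq, vocab) causal LM outputs
    labels:   (batch, seq) target token ids
    segments: (batch, seq) assumed segment ids:
              0 = prompt, 1 = thought tokens, 2..K+1 = answer tokens of round 1..K
    round_weight: assumed factor up-weighting later refinement rounds
    """
    # Standard causal shift: position t predicts token t+1.
    logits = logits[:, :-1, :].float()
    labels = labels[:, 1:]
    segments = segments[:, 1:]

    # Per-token weights: 0 on prompt and thought tokens (masked from the loss),
    # round_weight**(r-1) on answer tokens of refinement round r.
    rounds = (segments - 1).float()           # > 0 only for answer tokens
    weights = torch.where(
        rounds > 0,
        round_weight ** (rounds - 1.0),
        torch.zeros_like(rounds),
    )

    token_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        reduction="none",
    ).view_as(labels)

    # Weighted mean over unmasked tokens; clamp guards an all-masked batch.
    return (token_loss * weights).sum() / weights.sum().clamp(min=1.0)
```

In a training loop this would stand in for the standard language-modeling loss; how PTR actually weights rounds, and whether thoughts are fully masked or merely down-weighted, is specified in the paper rather than here.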