Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing
Format | Journal Article |
---|---|
Language | English |
Published | 18.04.2024 |
Summary: | Despite the impressive capabilities of Large Language Models (LLMs) on
various tasks, they still struggle with scenarios that involve complex
reasoning and planning. Recent work has proposed advanced prompting
techniques and argued for the necessity of fine-tuning with high-quality
data to augment LLMs' reasoning abilities. However, these approaches are
inherently constrained by data availability and quality. In light of this,
self-correction and self-learning emerge as viable solutions, employing
strategies that allow LLMs to refine their outputs and learn from
self-assessed rewards. Yet the efficacy of LLMs in refining their own
responses, particularly on complex reasoning and planning tasks, remains
dubious. In this paper, we introduce AlphaLLM for the self-improvement of
LLMs, which integrates Monte Carlo Tree Search (MCTS) with LLMs to
establish a self-improving loop, thereby enhancing the capabilities of
LLMs without additional annotations. Drawing inspiration from the success
of AlphaGo, AlphaLLM addresses the unique challenges of combining MCTS
with LLMs for self-improvement, including data scarcity, the vast search
spaces of language tasks, and the subjective nature of feedback in
language tasks. AlphaLLM comprises a prompt synthesis component, an
efficient MCTS approach tailored for language tasks, and a trio of critic
models that provide precise feedback. Our experimental results on
mathematical reasoning tasks demonstrate that AlphaLLM significantly
enhances the performance of LLMs without additional annotations, showing
the potential for self-improvement in LLMs. |
DOI: | 10.48550/arxiv.2404.12253 |
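
The abstract outlines the core mechanism: MCTS searches over candidate responses, critic models score the search, and the resulting trajectories improve the model without new annotations. As a rough illustration of how such a search-and-criticize loop can be wired together, below is a minimal, self-contained Python sketch of UCT-style tree search over step-by-step responses. The `generate_continuations` and `critic_score` stubs are hypothetical stand-ins (this record does not specify AlphaLLM's actual interfaces); in the paper's setting they would correspond to the LLM policy and the trio of critics.

```python
import math
import random

# Hypothetical stand-ins: in AlphaLLM these would be the LLM proposing next
# reasoning steps and the critic models scoring (partial) responses.
def generate_continuations(state, k=3):
    """Sample k candidate next reasoning steps for a partial response."""
    return [state + [f"step-{random.randint(0, 9)}"] for _ in range(k)]

def critic_score(state):
    """Return a scalar reward in [0, 1] for a (partial) response."""
    return random.random()

class Node:
    def __init__(self, state, parent=None):
        self.state = state      # partial response: a list of reasoning steps
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0        # sum of critic rewards backed up through here

    def ucb(self, c=1.4):
        # Standard UCT: mean value (exploitation) plus a visit-count bonus
        # (exploration); unvisited children are tried first.
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iterations=100, max_depth=4):
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend the tree greedily by UCB until a leaf.
        node = root
        while node.children:
            node = max(node.children, key=lambda n: n.ucb())
        # 2. Expansion: grow the leaf with policy-proposed continuations.
        if len(node.state) < max_depth:
            node.children = [Node(s, parent=node)
                             for s in generate_continuations(node.state)]
            node = random.choice(node.children)
        # 3. Evaluation: score the response with the critic.
        reward = critic_score(node.state)
        # 4. Backpropagation: propagate the reward up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Extract the most-visited trajectory; a self-improvement loop could
    # then fine-tune the model on such high-reward trajectories.
    best = max(root.children, key=lambda n: n.visits)
    return best.state

if __name__ == "__main__":
    random.seed(0)
    print(mcts(["question: 2 + 2 = ?"]))
```

Returning the most-visited child rather than the highest-value one mirrors common MCTS practice, since visit counts are a more robust signal than raw value estimates; the paper's actual search is, per the abstract, adapted for the much larger search spaces of language tasks.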