Solving math word problems with process- and outcome-based feedback
Format: Journal Article
Language: English
Published: 25.11.2022
Online Access: Get full text
Summary: Recent work has shown that asking language models to generate reasoning steps improves performance on many reasoning tasks. When moving beyond prompting, this raises the question of how we should supervise such models: outcome-based approaches which supervise the final result, or process-based approaches which supervise the reasoning process itself? Differences between these approaches might naturally be expected not just in final-answer errors but also in reasoning errors, which can be difficult to detect and are problematic in many real-world domains such as education. We run the first comprehensive comparison between process- and outcome-based approaches trained on a natural language task, GSM8K. We find that pure outcome-based supervision produces similar final-answer error rates with less label supervision. However, for correct reasoning steps we find it necessary to use process-based supervision or supervision from learned reward models that emulate process-based feedback. In total, we improve the previous best results from 16.8% → 12.7% final-answer error and 14.0% → 3.4% reasoning error among final-answer-correct solutions.
DOI: 10.48550/arxiv.2211.14275
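
The distinction at the heart of the abstract, a single label derived from the final answer versus a label per reasoning step, can be made concrete with a small sketch. The Python below is a hypothetical illustration only, not the paper's training setup, which fine-tunes models and learned reward models on these signals; the `Step` structure, both labeling functions, and the pencil example are assumptions made for exposition. It shows the failure mode behind the abstract's "reasoning error among final-answer-correct solutions": canceling mistakes earn a positive outcome label while process labels still flag the faulty steps.

```python
from dataclasses import dataclass

@dataclass
class Step:
    text: str       # one line of model-written reasoning
    is_valid: bool  # per-step annotation: is this step correct?

def outcome_label(final_answer: int, reference_answer: int) -> int:
    """Outcome-based supervision: a single label for the whole
    solution, derived only from final-answer correctness."""
    return int(final_answer == reference_answer)

def process_labels(steps: list[Step]) -> list[int]:
    """Process-based supervision: one label per reasoning step."""
    return [int(step.is_valid) for step in steps]

# A solution whose two arithmetic slips cancel out: the final answer
# is correct, but two intermediate steps are not.
solution = [
    Step("Tom has 3 boxes of 12 pencils, so 3 * 12 = 36 pencils.", True),
    Step("He gives away 10, so 36 - 10 = 24 pencils.", False),  # should be 26
    Step("He finds 2 more, so 24 + 2 = 28 pencils.", False),    # should be 26
]

print(outcome_label(final_answer=28, reference_answer=28))  # 1: looks perfect
print(process_labels(solution))                             # [1, 0, 0]: it isn't
```

Under this reading, a purely outcome-supervised learner would treat the whole solution above as positive training signal, which is consistent with the abstract's finding that matching final-answer error rates is not enough to guarantee correct intermediate reasoning.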