Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 11.07.2024 |
Summary: | Exceptional mathematical reasoning ability is one of the key features that
demonstrate the power of large language models (LLMs). How to comprehensively
define and evaluate the mathematical abilities of LLMs, and even reflect the
user experience in real-world scenarios, has emerged as a critical issue.
Current benchmarks predominantly concentrate on problem-solving capabilities,
presenting a substantial risk of model overfitting and failing to accurately
measure genuine mathematical reasoning abilities. In this paper, we argue
that if a model really understands a problem, it should be able to apply that
understanding robustly across a diverse array of tasks. To this end, we
introduce MathCheck, a well-designed checklist for testing task generalization
and reasoning robustness, together with an automatic tool for generating
checklists efficiently. MathCheck includes multiple mathematical reasoning
tasks and robustness tests to facilitate comprehensive evaluation of both
mathematical reasoning ability and model behavior. Using MathCheck, we develop
MathCheck-GSM and MathCheck-GEO to assess textual and multi-modal mathematical
reasoning abilities, respectively, serving as upgraded versions of benchmarks
including GSM8k, GeoQA, UniGeo, and Geometry3K. We adopt MathCheck-GSM and
MathCheck-GEO to evaluate 26 LLMs and 17 MLLMs. Our results show that while
frontier LLMs such as GPT-4o continue to excel across the abilities on the
checklist, many other model families exhibit a significant decline. Further
experiments indicate that, compared with traditional math benchmarks, MathCheck
better reflects true mathematical abilities and represents mathematical
intelligence more linearly, thereby supporting our design. Using MathCheck, we
can efficiently conduct informative behavior analysis to investigate models in
depth. Finally, we show that our checklist paradigm easily extends to other
reasoning tasks. |
---|---|
DOI: | 10.48550/arxiv.2407.08733 |
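To make the checklist paradigm described in the summary more concrete, the sketch below shows one way a MathCheck-style evaluation matrix could be organized: each seed problem is expanded across a grid of reasoning tasks and robustness variants, and a model is scored per cell. This is a minimal illustration, not the paper's implementation; the task names, robustness categories, and the `rewrite`/`model` callables are assumptions standing in for MathCheck's actual checklist-generation tool and evaluated models.

```python
# Hypothetical sketch of a MathCheck-style checklist evaluation.
# Dimension names are illustrative assumptions based on the abstract's
# description of "multiple mathematical reasoning tasks and robustness tests".
from dataclasses import dataclass
from typing import Callable, Dict, List

# Two checklist axes: what the model is asked to do (task generalization)
# and how the seed problem is perturbed (reasoning robustness).
TASKS = ["problem_solving", "answerable_judging", "outcome_judging", "process_judging"]
ROBUSTNESS = ["original", "problem_understanding", "irrelevant_disturbance", "scenario_understanding"]

@dataclass
class ChecklistUnit:
    task: str         # which reasoning task this cell tests
    robustness: str   # which rewritten variant of the seed problem is used
    prompt: str       # fully rendered prompt shown to the model
    reference: str    # gold answer / judgment used for automatic scoring

def build_checklist(seed_problem: str, seed_answer: str,
                    rewrite: Callable[[str, str], str]) -> List[ChecklistUnit]:
    """Expand one seed problem into a task x robustness grid.

    `rewrite(problem, robustness)` stands in for an automatic checklist
    generator (e.g. an LLM rewriter); here it is just a user-supplied callable.
    """
    units: List[ChecklistUnit] = []
    for robust in ROBUSTNESS:
        variant = seed_problem if robust == "original" else rewrite(seed_problem, robust)
        for task in TASKS:
            units.append(ChecklistUnit(
                task=task,
                robustness=robust,
                prompt=f"[{task}] {variant}",
                reference=seed_answer,
            ))
    return units

def score_model(model: Callable[[str], str], units: List[ChecklistUnit]) -> Dict[str, float]:
    """Accuracy per checklist cell; a robust reasoner should stay high everywhere."""
    cells: Dict[str, List[bool]] = {}
    for u in units:
        key = f"{u.task}/{u.robustness}"
        cells.setdefault(key, []).append(model(u.prompt).strip() == u.reference)
    return {key: sum(hits) / len(hits) for key, hits in cells.items()}

# Example with trivial stand-ins for the rewriter and the model under test:
units = build_checklist("What is 2 + 3?", "5", rewrite=lambda p, r: f"({r}) {p}")
print(score_model(lambda prompt: "5", units))  # perfect score in every cell
```

The point of the grid is the behavior the abstract describes: a model that genuinely understands a problem should score uniformly across cells, whereas an overfitted one drops sharply on the rewritten variants and the non-solving tasks, which is the kind of informative behavior analysis MathCheck is designed to surface.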