MathChat: Converse to Tackle Challenging Math Problems with LLM Agents

Bibliographic Details
Published in: arXiv.org
Main Authors: Wu, Yiran; Jia, Feiran; Zhang, Shaokun; Li, Hangyu; Zhu, Erkang; Wang, Yue; Lee, Yin Tat; Peng, Richard; Wu, Qingyun; Wang, Chi
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 28.06.2024

Summary: Employing Large Language Models (LLMs) to address mathematical problems is an intriguing research endeavor, given the abundance of math problems expressed in natural language across numerous science and engineering fields. With their generalized abilities, LLMs are used as foundation models to build AI agents for different tasks. In this paper, we study the effectiveness of utilizing LLM agents to solve math problems through conversations. We propose MathChat, a conversational problem-solving framework designed for math problems. MathChat consists of an LLM agent and a user proxy agent, which is responsible for tool execution and additional guidance. This synergy facilitates a collaborative problem-solving process in which the agents engage in a dialogue to solve the problem. We evaluate MathChat on difficult high-school competition problems from the MATH dataset. Utilizing Python, we show that MathChat improves on previous tool-using prompting methods by 6%.
ISSN: 2331-8422
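
The summary describes a two-agent loop: an LLM agent proposes reasoning steps and Python code, while a user proxy agent executes that code and feeds the results back into the conversation. Below is a minimal sketch of that pattern using the open-source pyautogen library; the model name, API key placeholder, config values, and the sample problem are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of the conversational two-agent pattern described in the summary.
# Assumes `pip install pyautogen` and a valid OpenAI-compatible API key.
import autogen

# LLM agent: proposes reasoning steps and Python code for the math problem.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]},  # assumed model/key
)

# User proxy agent: runs the generated code and returns outputs (or errors) to the
# conversation, playing the "tool execution and additional guidance" role.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",            # fully automated dialogue, no human turns
    max_consecutive_auto_reply=10,       # cap on the back-and-forth
    code_execution_config={"work_dir": "math_work", "use_docker": False},
)

# The two agents converse until the problem is solved or the reply limit is reached.
user_proxy.initiate_chat(
    assistant,
    message="Solve for x: x^2 - 5x + 6 = 0. Use Python if helpful.",
)
```

The MathChat framework layers math-specific prompting and additional guidance from the user proxy on top of this basic execute-and-reply loop; this sketch only illustrates the underlying conversation structure.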