Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation

Bibliographic Details
Main Authors: Liang, Chen; Feng, Zhifan; Liu, Zihe; Jiang, Wenbin; Xu, Jinan; Chen, Yufeng; Wang, Yong
Format: Journal Article
Language: English
Published: 18.09.2024

Summary: Chain-of-thought prompting significantly boosts the reasoning ability of large language models but still faces three issues: hallucination, restricted interpretability, and uncontrollable generation. To address these challenges, we present AgentCOT, an LLM-based autonomous agent framework that solves complex problems in an agent-style manner through multiple rounds of LLM generation. At each step, AgentCOT selects an action and executes it to yield an intermediate result with supporting evidence. In addition, we integrate each step's index into the reasoning process to form a graph structure for complex inference logic. We introduce two new strategies to enhance the performance of AgentCOT. We conduct extensive experiments to verify the effectiveness of our method on six common benchmarks. Results show that our method brings substantial improvements over current competitive approaches.
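The step-wise loop the summary describes (select an action, execute it for a result plus evidence, and record the step's index so later steps can reference earlier ones as a graph) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `select_action` and `execute` callables, the `Step` record, and the `depends_on` edge list are all hypothetical names standing in for an LLM-driven action selector and tool executor.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One reasoning round: an indexed action with its result and evidence."""
    index: int
    action: str
    result: str
    evidence: str
    depends_on: list = field(default_factory=list)  # indices of earlier steps, forming a graph

def run_agent(question, select_action, execute, max_steps=8):
    """Hypothetical AgentCOT-style loop: each round, an action is chosen
    (in the paper, by an LLM conditioned on prior steps), executed to get
    an intermediate result with supporting evidence, and appended with its
    index so later rounds can point back at it."""
    steps = []
    for i in range(max_steps):
        action, deps = select_action(question, steps)
        if action == "finish":
            break
        result, evidence = execute(action)
        steps.append(Step(i, action, result, evidence, deps))
    return steps
```

With stub selector/executor functions in place of an LLM, the loop produces an ordered list of `Step` records whose `depends_on` fields encode the inference graph.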
DOI:10.48550/arxiv.2409.12411