Data Interpreter: An LLM Agent For Data Science

Bibliographic Details
Main Authors: Hong, Sirui, Lin, Yizhang, Liu, Bang, Liu, Bangbang, Wu, Binhao, Li, Danyang, Chen, Jiaqi, Zhang, Jiayi, Wang, Jinlin, Zhang, Li, Zhang, Lingyao, Yang, Min, Zhuge, Mingchen, Guo, Taicheng, Zhou, Tuo, Tao, Wei, Wang, Wenyi, Tang, Xiangru, Lu, Xiangtao, Zheng, Xiawu, Liang, Xinbing, Fei, Yaying, Cheng, Yuheng, Xu, Zongze, Wu, Chenglin
Format: Journal Article
Language: English
Published: 28.02.2024

Summary: Large Language Model (LLM)-based agents have demonstrated remarkable effectiveness. However, their performance can be compromised in data science scenarios that require real-time data adjustment, optimization expertise due to complex dependencies among tasks, and the ability to identify logical errors for precise reasoning. In this study, we introduce the Data Interpreter, a solution that solves problems with code and emphasizes three pivotal techniques to augment problem-solving in data science: 1) dynamic planning with hierarchical graph structures for real-time data adaptability; 2) dynamic tool integration to enhance code proficiency during execution, enriching the requisite expertise; 3) identification of logical inconsistencies in feedback, and efficiency enhancement through experience recording. We evaluate the Data Interpreter on various data science and real-world tasks. Compared to open-source baselines, it demonstrates superior performance: machine-learning task scores increase from 0.86 to 0.95, with a 26% improvement on the MATH dataset and a remarkable 112% improvement on open-ended tasks. The solution will be released at https://github.com/geekan/MetaGPT.
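The first technique named in the summary, dynamic planning with hierarchical graph structures, can be pictured as a task graph executed in dependency order and re-planned when a step fails. The sketch below is illustrative only, not the paper's implementation: the dict-based graph, the `execute`/`replan` callbacks, and the restart-after-replanning rule are all assumptions made for this example.

```python
from collections import deque

def plan_order(graph):
    """Topologically order a task graph given as {task: [dependencies]}."""
    indeg = {t: len(deps) for t, deps in graph.items()}
    ready = deque(sorted(t for t, d in indeg.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for u, deps in graph.items():
            if t in deps:
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    return order

def run_with_replanning(graph, execute, replan):
    """Run tasks in dependency order; when one fails, ask `replan` for a
    revised graph (e.g. with repair tasks inserted) and resume, skipping
    tasks that already completed. Both callbacks are hypothetical hooks."""
    done = set()
    order = plan_order(graph)
    i = 0
    while i < len(order):
        task = order[i]
        if task in done:
            i += 1
            continue
        if execute(task):
            done.add(task)
            i += 1
        else:
            graph = replan(graph, task)  # revise the graph around the failure
            order = plan_order(graph)
            i = 0  # restart; completed tasks are skipped above
    return done
```

As a usage example, a `clean` step that fails once could trigger a re-plan that inserts a `fix_schema` task ahead of it; already-finished tasks such as `load` are not re-executed.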
DOI: 10.48550/arxiv.2402.18679