DISC-FinLLM: A Chinese Financial Large Language Model based on Multiple Experts Fine-tuning


Bibliographic Details
Published in: arXiv.org
Main Authors: Chen, Wei; Wang, Qiushi; Long, Zefei; Zhang, Xianyin; Lu, Zhongtian; Li, Bingxuan; Wang, Siyuan; Xu, Jiarong; Bai, Xiang; Huang, Xuanjing; Wei, Zhongyu
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 25.10.2023

More Information
Summary: We propose a Multiple Experts Fine-tuning Framework to build DISC-FinLLM, a financial large language model (LLM). Our methodology improves general LLMs by endowing them with multi-turn question-answering abilities, domain text processing capabilities, mathematical computation skills, and retrieval-enhanced generation capabilities. We build a financial instruction-tuning dataset named DISC-FIN-SFT, comprising instruction samples in four categories (consulting, NLP tasks, computing, and retrieval-augmented generation). Evaluations conducted on multiple benchmarks demonstrate that our model outperforms baseline models in various financial scenarios. Further resources are available at https://github.com/FudanDISC/DISC-FinLLM.
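The summary describes an instruction-tuning dataset assembled from four categories of samples. As a minimal sketch of how such a mixture could be assembled, the snippet below flattens per-category instruction samples into one shuffled SFT set; the category names follow the abstract, but the sample schema, function name, and data are illustrative assumptions, not the paper's actual format.

```python
import random

# Four instruction categories named in the abstract; the dict-based
# sample schema below is hypothetical, for illustration only.
CATEGORIES = ["consulting", "nlp_tasks", "computing", "retrieval_augmented"]

def build_sft_mixture(samples_by_category, seed=0):
    """Flatten per-category instruction samples into one shuffled SFT set."""
    mixture = []
    for category in CATEGORIES:
        for sample in samples_by_category.get(category, []):
            # Tag each sample with its source category for later analysis.
            mixture.append({"category": category, **sample})
    random.Random(seed).shuffle(mixture)  # deterministic shuffle
    return mixture

# Toy usage with two of the four categories populated.
samples = {
    "consulting": [
        {"instruction": "What is a bond coupon?", "output": "..."},
    ],
    "computing": [
        {"instruction": "Compute 5% annual interest on 1000 CNY.", "output": "..."},
    ],
}
mixture = build_sft_mixture(samples)
```

In a multi-expert setup of the kind the abstract outlines, such a per-category split would also let each expert be fine-tuned on its own slice before composition.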
ISSN: 2331-8422