ExpertPrompting: Instructing Large Language Models to be Distinguished Experts

Bibliographic Details
Published in: arXiv.org
Main Authors: Xu, Benfeng; Yang, An; Lin, Junyang; Wang, Quan; Zhou, Chang; Zhang, Yongdong; Mao, Zhendong
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 24.05.2023

More Information
Summary: The answering quality of an aligned large language model (LLM) can be drastically improved with proper crafting of prompts. In this paper, we propose ExpertPrompting to elicit the potential of LLMs to answer as distinguished experts. We first utilize In-Context Learning to automatically synthesize detailed and customized descriptions of the expert identity for each specific instruction, and then ask LLMs to provide answers conditioned on this agent background. Based on this augmented prompting strategy, we produce a new set of instruction-following data using GPT-3.5, and train a competitive open-source chat assistant called ExpertLLaMA. We employ GPT-4-based evaluation to show that 1) the expert data is of significantly higher quality than vanilla answers, and 2) ExpertLLaMA outperforms existing open-source counterparts and achieves 96% of the original ChatGPT's capability. All data and the ExpertLLaMA model will be made publicly available at https://github.com/OFA-Sys/ExpertLLaMA.
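
The summary describes a two-step prompting strategy: first synthesize a customized expert identity for each instruction via In-Context Learning, then ask the model to answer conditioned on that identity. The following minimal Python sketch illustrates that idea only; the exemplar text and the call_llm helper are illustrative assumptions, not the paper's actual prompts or API (those are available in the linked repository).

# Minimal sketch of the ExpertPrompting idea described in the summary.
# The exemplar text and the call_llm helper below are illustrative
# assumptions, not the paper's released prompts or code.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat LLM backend (e.g. GPT-3.5); an assumption here."""
    raise NotImplementedError("plug in an LLM API of your choice")

# Step 1: In-Context Learning. A few (instruction -> expert identity) exemplars
# guide the model to write a detailed, customized expert description
# for a new instruction. These exemplars are made up for illustration.
IDENTITY_EXEMPLARS = """\
Instruction: Explain how vaccines train the immune system.
Expert identity: You are an immunologist with 20 years of clinical research experience...

Instruction: Review this Python function for bugs.
Expert identity: You are a senior software engineer who specializes in code review...
"""

def synthesize_expert_identity(instruction: str) -> str:
    prompt = (
        "Following the examples, write a detailed description of the expert "
        "best suited to answer the final instruction.\n\n"
        f"{IDENTITY_EXEMPLARS}\n"
        f"Instruction: {instruction}\nExpert identity:"
    )
    return call_llm(prompt)

# Step 2: ask the model to answer the instruction conditioned on that
# expert background, rather than answering it directly.
def expert_prompt_answer(instruction: str) -> str:
    identity = synthesize_expert_identity(instruction)
    prompt = (
        f"{identity}\n\n"
        f"Now, acting as this expert, answer the following instruction:\n{instruction}"
    )
    return call_llm(prompt)
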
ISSN: 2331-8422
DOI: 10.48550/arxiv.2305.14688