Efficient Prompting Methods for Large Language Models: A Survey

Bibliographic Details
Published in: arXiv.org
Main Authors: Chang, Kaiyan; Xu, Songcheng; Wang, Chenglong; Luo, Yingfeng; Tong, Xiao; Zhu, Jingbo
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 01.04.2024
Summary: Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks. While this approach opens the door to in-context learning of LLMs, it brings the additional computational burden of model inference and the human effort of manually designed prompts, particularly when using lengthy and complex prompts to guide and control the behavior of LLMs. As a result, the LLM field has seen a remarkable surge in efficient prompting methods. In this paper, we present a comprehensive overview of these methods. At a high level, efficient prompting methods can broadly be categorized into two approaches: prompting with efficient computation and prompting with efficient design. The former involves various ways of compressing prompts, and the latter employs techniques for automatic prompt optimization. We present the basic concepts of prompting, review the advances in efficient prompting, and highlight future research directions.
ISSN: 2331-8422