Plug and Play with Prompts: A Prompt Tuning Approach for Controlling Text Generation
Field | Value
---|---
Main Authors |
Format | Journal Article
Language | English
Published | 07.04.2024
Subjects |
Online Access | Get full text
Summary: Presented at the Deployable AI Workshop at AAAI-2024. Transformer-based Large Language Models (LLMs) have shown exceptional language generation capabilities in response to text-based prompts. However, controlling the direction of generation via textual prompts has been challenging, especially with smaller models. In this work, we explore the use of Prompt Tuning to achieve controlled language generation. Generated text is steered using prompt embeddings, which are trained with a small language model that serves as a discriminator. Moreover, we demonstrate that these prompt embeddings can be trained on a very small dataset, with as few as a few hundred training examples. Our method thus offers a data- and parameter-efficient solution for controlling language model outputs. We carry out extensive evaluation on four datasets: SST-5 and Yelp (sentiment analysis), GYAFC (formality), and JIGSAW (toxic language). Finally, we demonstrate the efficacy of our method in mitigating harmful, toxic, and biased text generated by language models.
DOI: 10.48550/arxiv.2404.05143
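
The summary describes the core mechanism: a small set of trainable prompt embeddings is prepended to a frozen language model's input, and only those embeddings are optimized, guided by a discriminator that scores the target attribute. Below is a minimal sketch of one plausible way to wire this up, assuming a GPT-2 generator, a soft-mixture relaxation to keep the discriminator pass differentiable, and a hypothetical `MeanPoolClassifier` stand-in for the discriminator; the paper's exact objective, relaxation, and hyperparameters may differ.

```python
# Sketch of discriminator-guided prompt tuning. Model names, the soft-mixture
# relaxation, and the MeanPoolClassifier stand-in are illustrative
# assumptions, not the paper's exact setup.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()
for p in lm.parameters():              # the language model stays frozen
    p.requires_grad_(False)

emb = lm.get_input_embeddings()        # (vocab, d_model) embedding table
vocab, d_model = emb.weight.shape
n_prompt = 20                          # number of trainable soft-prompt vectors

# Trainable prompt embeddings, initialised from random vocabulary rows.
init_ids = torch.randint(0, vocab, (n_prompt,))
soft_prompt = torch.nn.Parameter(emb.weight[init_ids].detach().clone())
opt = torch.optim.Adam([soft_prompt], lr=1e-3)


class MeanPoolClassifier(torch.nn.Module):
    """Stand-in discriminator: mean-pool token embeddings, linear head.
    The paper uses a small language model in this role."""

    def __init__(self, d_model, num_labels=2):
        super().__init__()
        self.head = torch.nn.Linear(d_model, num_labels)

    def forward(self, x):              # x: (B, T, d_model)
        return self.head(x.mean(dim=1))


discriminator = MeanPoolClassifier(d_model).to(device)
for p in discriminator.parameters():   # discriminator is frozen as well
    p.requires_grad_(False)


def lm_logits_with_prompt(input_ids):
    """Prepend the soft prompt to token embeddings and run the frozen LM."""
    tok_emb = emb(input_ids)                               # (B, T, d)
    prefix = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    inputs = torch.cat([prefix, tok_emb], dim=1)           # (B, n_prompt+T, d)
    return lm(inputs_embeds=inputs).logits[:, n_prompt:]   # drop prompt slots


def train_step(input_ids, target_label, alpha=0.5):
    logits = lm_logits_with_prompt(input_ids)
    # Fluency term: standard next-token cross-entropy under the steered model.
    ce = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        input_ids[:, 1:].reshape(-1),
    )
    # Attribute term: the expected token embedding under the model's own
    # distribution keeps the discriminator pass differentiable end to end.
    soft_emb = logits.softmax(-1) @ emb.weight             # (B, T, d)
    disc = F.cross_entropy(discriminator(soft_emb), target_label)
    loss = alpha * disc + (1 - alpha) * ce
    opt.zero_grad()
    loss.backward()                    # gradients reach only soft_prompt
    opt.step()
    return loss.item()


# Toy step: steer toward attribute label 1 on a short fluency example.
batch = tok("the movie was", return_tensors="pt").input_ids.to(device)
print(train_step(batch, torch.tensor([1], device=device)))
```

Only the `n_prompt × d_model` prompt matrix ever receives gradient updates, which is where the parameter efficiency comes from. The soft-mixture pass (`logits.softmax(-1) @ emb.weight`) is one common differentiable relaxation and merely stands in for whatever mechanism the paper actually uses to backpropagate the discriminator signal.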