AirGapAgent: Protecting Privacy-Conscious Conversational Agents

The growing use of large language model (LLM)-based conversational agents to manage sensitive user data raises significant privacy concerns. While these agents excel at understanding and acting on context, this capability can be exploited by malicious actors. We introduce a novel threat model where...

Full description

Saved in:
Bibliographic Details
Main Authors Bagdasarian, Eugene, Yi, Ren, Ghalebikesabi, Sahra, Kairouz, Peter, Gruteser, Marco, Oh, Sewoong, Balle, Borja, Ramage, Daniel
Format Journal Article
LanguageEnglish
Published 08.05.2024
Subjects
Online AccessGet full text

Cover

Loading…
More Information
Summary:The growing use of large language model (LLM)-based conversational agents to manage sensitive user data raises significant privacy concerns. While these agents excel at understanding and acting on context, this capability can be exploited by malicious actors. We introduce a novel threat model where adversarial third-party apps manipulate the context of interaction to trick LLM-based agents into revealing private information not relevant to the task at hand. Grounded in the framework of contextual integrity, we introduce AirGapAgent, a privacy-conscious agent designed to prevent unintended data leakage by restricting the agent's access to only the data necessary for a specific task. Extensive experiments using Gemini, GPT, and Mistral models as agents validate our approach's effectiveness in mitigating this form of context hijacking while maintaining core agent functionality. For example, we show that a single-query context hijacking attack on a Gemini Ultra agent reduces its ability to protect user data from 94% to 45%, while an AirGapAgent achieves 97% protection, rendering the same attack ineffective.
DOI:10.48550/arxiv.2405.05175