Imperceptible Content Poisoning in LLM-Powered Applications

Bibliographic Details
Published in: IEEE/ACM International Conference on Automated Software Engineering: [proceedings], pp. 242-254
Main Authors: Zhang, Quan; Zhou, Chijin; Go, Gwihwan; Zeng, Binqi; Shi, Heyuan; Xu, Zichen; Jiang, Yu
Format: Conference Proceeding
Language: English
Published: ACM, 27.10.2024
ISSN: 2643-1572
DOI: 10.1145/3691620.3695001

Summary: Large Language Models (LLMs) have shown superior capability in natural language processing, promoting extensive LLM-powered applications to become the new portals through which people access content on the Internet. However, LLM-powered applications lack sufficient security considerations for untrusted content, leading to potential threats. In this paper, we reveal content poisoning, in which attackers tailor attack content that appears benign to humans but causes LLM-powered applications to generate malicious responses. To highlight the impact of content poisoning and inspire the development of effective defenses, we systematically analyze the attack, focusing on the attack modes across various kinds of content, the exploitable design features of LLM application frameworks, and the generation of attack content. We carry out a comprehensive evaluation on five LLMs, where content poisoning achieves an average attack success rate of 89.60%. Additionally, we assess content poisoning on four popular LLM-powered applications, succeeding on 72.00% of the tested content. Our experimental results also show that existing defenses are ineffective against content poisoning. Finally, we discuss potential mitigations that LLM application frameworks can adopt to counter content poisoning.
CCS Concepts: Computing methodologies → Machine learning; Security and privacy
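
As a minimal illustration of the class of weakness the abstract points to (untrusted content being handed to the model without sufficient checks), the sketch below shows a hypothetical LLM-powered summarizer that splices fetched web text straight into its prompt template. The function names, the template, and the defensive notes are assumptions for illustration only; they are not drawn from the paper or from any particular LLM application framework.

    # Hypothetical sketch of the vulnerable design pattern the abstract describes:
    # an LLM-powered application fetches third-party content and concatenates it
    # into its prompt without sanitization, so attacker-crafted text that looks
    # benign to a human reader reaches the model as-is. All names here are
    # illustrative assumptions, not APIs from the paper or a real framework.

    import urllib.request

    PROMPT_TEMPLATE = (
        "You are a helpful assistant. Summarize the following web page "
        "for the user.\n\n---\n{content}\n---\n\nSummary:"
    )

    def fetch_page(url: str) -> str:
        # Untrusted content enters the application here.
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def build_prompt(url: str) -> str:
        # The untrusted text is placed directly into the prompt; nothing
        # separates "data" from "instructions" for the model, which is the
        # kind of design feature a content-poisoning payload can exploit.
        content = fetch_page(url)
        return PROMPT_TEMPLATE.format(content=content)

    # A defensive variant would, at minimum, treat the fetched text as data:
    # e.g. filter or escape model-directed imperatives, enforce length limits,
    # or pass the content through a separate moderation step before prompting.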