Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models

Bibliographic Details
Main Authors: Xu, Zihao; Liu, Yi; Deng, Gelei; Wang, Kailong; Li, Yuekang; Shi, Ling; Picek, Stjepan
Format: Journal Article
Language: English
Published: 16.07.2024
Summary: Security concerns for large language models (LLMs) have recently escalated, focusing on thwarting jailbreaking attempts in discrete prompts. However, the exploration of jailbreak vulnerabilities arising from continuous embeddings has been limited, as prior approaches primarily involved appending discrete or continuous suffixes to inputs. Our study presents a novel channel for conducting direct attacks on LLM inputs, eliminating the need for suffix addition or specific questions, provided that the desired output is predefined. We additionally observe that extensive iterations often lead to overfitting, characterized by repetition in the output. To counteract this, we propose a simple yet effective strategy named CLIP. Our experiments show that, for an input length of 40 at iteration 1000, applying CLIP improves the attack success rate (ASR) from 62% to 83%.
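
For illustration, here is a minimal sketch of the attack loop the summary describes, assuming a PyTorch causal LM loaded via Hugging Face Transformers. The abstract does not specify the loss, clipping bounds, or hyperparameters, so the model name, clip range `c`, learning rate, and iteration count below are placeholders, and the clamping step is one plausible reading of the CLIP strategy rather than the authors' exact method.

```python
# Hypothetical sketch: optimize continuous input embeddings directly toward a
# predefined target output, clipping the embedding values each iteration.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; the paper targets larger open LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

target = "Sure, here is the answer ..."                  # predefined desired output
target_ids = tok(target, return_tensors="pt").input_ids  # shape (1, T)
tgt_emb = model.get_input_embeddings()(target_ids).detach()

seq_len, c, lr, iters = 40, 1.0, 0.01, 1000  # placeholder hyperparameters
emb_dim = model.get_input_embeddings().embedding_dim

# The attacked input is a free continuous embedding matrix: no suffix, no question.
x = torch.randn(1, seq_len, emb_dim, requires_grad=True)
opt = torch.optim.Adam([x], lr=lr)

for step in range(iters):
    inputs = torch.cat([x, tgt_emb], dim=1)  # (1, seq_len + T, emb_dim)
    logits = model(inputs_embeds=inputs).logits
    # Cross-entropy over the target span: the logits at each position predict
    # the next target token.
    pred = logits[:, seq_len - 1:-1, :]
    loss = F.cross_entropy(pred.reshape(-1, pred.size(-1)), target_ids.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(-c, c)  # CLIP step: bound embedding values to curb overfitting
```

In the reported setting (input length 40, 1000 iterations), clipping of this kind is what lifts the ASR from 62% to 83%; whether the authors clamp to a fixed range or to the range of real token embeddings is not stated in this record, so `clamp_(-c, c)` here is an assumption.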
DOI:10.48550/arxiv.2407.13796