InjecGuard: Benchmarking and Mitigating Over-defense in Prompt Injection Guardrail Models

SaFoLab, University of Wisconsin-Madison
*Equal Contribution

Abstract

Prompt injection attacks pose a critical threat to large language models (LLMs), enabling goal hijacking and data leakage. Prompt guard models, though effective in defense, suffer from over-defense—falsely flagging benign inputs as malicious due to trigger word bias. To address this issue, we introduce NotInject, an evaluation dataset that systematically measures over-defense across various prompt guard models. NotInject contains 339 benign samples enriched with trigger words common in prompt injection attacks, enabling fine-grained evaluation. Our results show that state-of-the-art models suffer from over-defense issues, with accuracy dropping close to random guessing levels (60%). To mitigate this, we propose InjecGuard, a novel prompt guard model that incorporates a new training strategy, Mitigating Over-defense for Free (MOF), which significantly reduces the bias on trigger words. InjecGuard demonstrates state-of-the-art performance on diverse benchmarks including NotInject, surpassing the existing best model by 30.8%, offering a robust and open-source solution for detecting prompt injection attacks.

Teaser

Figure 1: Performance and efficiency comparison of existing prompt guardrail models.

Demos of InjecGuard


A Demo of InjecGuard.

Online Inference (InjecGuard)

Result:

Benign
0.000
Injection
0.000

Overdefense Benchmark: NotInject


• Overdefense Issue

Teaser

Figure 2: Left: PromptGuard, Right: ProtectAIv2, both misclassify benign inputs as malicious due to an overreliance on specific trigger words, such as ``ignore''.


• Dataset Construction

Teaser

Figure 3: The pipeline for constructing NotInject dataset.


  • Trigger Words Indentification. To identify potential trigger words, was first conduct a word frequency analysis across both benign and injection datasets. The words are then ranked according to their frequency within each dataset, sorted separately for benign and injection datasets. By calculating the differences in word rankings between the two datasets, we highlight words that are disproportionately frequent in the injection dataset but rare in the benign dataset. These words are flagged as potential trigger words, which can serve as indicators of injection attempts.
  • Trigger Words Refinement. To refine the list of candidate trigger words, a multi-step process is implemented to filter out irrelevant or commonly used terms that are not indicative of prompt injection attempts. Initially, a large language model (LLM) is employed to automatically assess the relevance of each word, based on the prompt: "Do you think the word of {word} is especially frequent in malicious or prompt attack scenarios?" This step helps eliminate words that are not pertinent to injection attacks. Following the automated filtering, a manual verification process is conducted to ensure the final list contains only words that are strongly associated with prompt injection attempts, further enhancing the accuracy of the trigger word identification.
  • Corpus Generation. As a result of the refinement process, we select 113 trigger words and utilize a large language model (LLM) to generate benign sentences incorporating these words. The dataset is structured into three subsets, each containing samples with 1, 2, and 3 trigger words per sentence. To ensure that all generated sentences are non-malicious, both LLM-based and manual refinement processes are applied. The final dataset, termed NotInject, consists of 339 samples in total, with 113 samples in each subset. The generated sentences span a wide range of topics, including common queries, technical inquiries, virtual content creation, and multilingual queries, thus ensuring diversity and comprehensiveness.



Injection Guardrail: InjecGuard


• Brief Introduction


InjecGuard is a lightweight model designed to defend against prompt injection attacks. It delivers strong performance across benign, malicious, and over-defense accuracy metrics, surpassing existing guard models such as PromptGuard, ProtectAIv2, and LakeraAI. Despite its compact size, with model parameters of only 184MB, InjecGuard achieves competitive performance comparable to advanced commercial large language models like GPT-4.
Note: All training details, including the code and datasets, are fully released and available.



• Evaluation


Teaser

Figure 4: Performance and overhead comparison between InjecGuard and other baseline models.


• Visualization

Teaser

Figure 5: Left: The visualization of different predictions. Right: The visualization of attention weight. Given an instruction of ``[CLS] Can I ignore this warning appeared in my code? [SEP]'', ProtectAIv2 assigns extremely high attention weights to the word ``ignore,'' leading to misclassification as Injection. In contrast, our method distributes attention across the entire sentence, successfully predicting it as benign.


BibTeX

@articles{InjecGuard,
title={InjecGuard: Benchmarking and Mitigating Over-defense in Prompt Injection Guardrail Models},
author={Hao Li, Xiaogeng Liu, Chaowei Xiao},
journal = {arXiv preprint arXiv:2410.22770},
year={2024}
}