arxiv:2601.04566

BackdoorAgent: A Unified Framework for Backdoor Attacks on LLM-based Agents

Published on Jan 11
Abstract

BackdoorAgent framework analyzes cross-stage backdoor propagation in LLM agents across planning, memory, and tool-use stages, revealing widespread trigger persistence across agent workflows.

AI-generated summary

Large language model (LLM) agents execute tasks through multi-step workflows that combine planning, memory, and tool use. While this design enables autonomy, it also expands the attack surface for backdoor threats. Backdoor triggers injected into specific stages of an agent workflow can persist through multiple intermediate states and adversely influence downstream outputs. However, existing studies remain fragmented and typically analyze individual attack vectors in isolation, leaving the cross-stage interaction and propagation of backdoor triggers poorly understood from an agent-centric perspective. To fill this gap, we propose BackdoorAgent, a modular and stage-aware framework that provides a unified, agent-centric view of backdoor threats in LLM agents. BackdoorAgent structures the attack surface into three functional stages of agentic workflows, including planning attacks, memory attacks, and tool-use attacks, and instruments agent execution to enable systematic analysis of trigger activation and propagation across different stages. Building on this framework, we construct a standardized benchmark spanning four representative agent applications: Agent QA, Agent Code, Agent Web, and Agent Drive, covering both language-only and multimodal settings. Our empirical analysis shows that triggers implanted at a single stage can persist across multiple steps and propagate through intermediate states. For instance, when using a GPT-based backbone, we observe trigger persistence in 43.58% of planning attacks, 77.97% of memory attacks, and 60.28% of tool-stage attacks, highlighting the vulnerabilities of the agentic workflow itself to backdoor threats. To facilitate reproducibility and future research, our code and benchmark are publicly available at GitHub.
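To make the stage-aware instrumentation idea concrete, here is a minimal, hypothetical sketch (not the authors' code): a toy three-stage agent pipeline whose intermediate states are recorded so that trigger persistence can be checked at each stage. The trigger token, stage functions, and their behavior are all illustrative assumptions.

```python
# Hypothetical sketch: instrumenting a toy agent pipeline to measure
# whether a backdoor trigger persists across planning, memory, and tool stages.

TRIGGER = "@@trig@@"  # assumed trigger token, for illustration only


def plan(task: str) -> str:
    # Toy planner: carries the full task text into the plan.
    return f"plan for: {task}"


def memory_lookup(plan_text: str) -> str:
    # Toy memory stage: appends a retrieved note, forwarding the plan verbatim.
    return plan_text + " | note: reuse cached result"


def tool_call(mem_text: str) -> str:
    # Toy tool stage: only the last clause reaches the tool, so a trigger
    # embedded early in the task may be dropped at this stage.
    return "tool output for: " + mem_text.split(" | ")[-1]


def run_instrumented(task: str):
    """Run the pipeline, recording each stage's output and trigger presence."""
    states = {"planning": plan(task)}
    states["memory"] = memory_lookup(states["planning"])
    states["tool"] = tool_call(states["memory"])
    persistence = {stage: TRIGGER in out for stage, out in states.items()}
    return states, persistence
```

In a real framework, the per-stage persistence flags would be aggregated over a benchmark of tasks to produce stage-level persistence rates like those reported in the abstract.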
