Insecure Plugin Design

A potential attacker can construct a malicious request to an LLM plugin, which could result in a wide range of undesired behaviors, up to and including remote code execution.

Definition

LLM plugins are extensions that, when enabled, are called automatically by the model during user interactions. They are driven by the model, and the application has no control over their execution. Furthermore, to deal with context-size limitations, plugins often accept free-text inputs from the model with no validation or type checking. This allows a potential attacker to construct a malicious request to the plugin, which could result in a wide range of undesired behaviors, up to and including remote code execution.
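
To make the failure mode concrete, here is a minimal Python sketch contrasting a plugin handler that feeds the model's free-text argument straight to a shell with one that validates the argument before use. The plugin itself, the `lookup_report` functions, and the `/srv/reports` location are all hypothetical, invented only for illustration.

```python
import re
import subprocess
from pathlib import Path

REPORTS_DIR = Path("/srv/reports")                    # hypothetical storage location
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,64}$")   # strict allowlist for report names

def lookup_report_insecure(model_input: str) -> str:
    # VULNERABLE: the model's free-text output is interpolated into a shell
    # command, so an argument like "x; rm -rf /" becomes remote code execution.
    return subprocess.run(
        f"cat /srv/reports/{model_input}.txt",
        shell=True, capture_output=True, text=True
    ).stdout

def lookup_report_validated(model_input: str) -> str:
    # SAFER: validate the model-supplied value before it touches the filesystem.
    if not NAME_PATTERN.fullmatch(model_input):
        raise ValueError("rejected plugin argument: invalid report name")
    path = (REPORTS_DIR / f"{model_input}.txt").resolve()
    if REPORTS_DIR.resolve() not in path.parents:
        raise ValueError("rejected plugin argument: path escapes reports directory")
    return path.read_text()
```

The safer variant treats the model exactly like an untrusted end user: a strict allowlist on the value and a containment check on the resulting path, rather than trust in whatever free text the model produced.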

Key Concerns:

  1. Malicious Code Execution: Preventing unauthorized execution of harmful code.
  2. SQL Injection: Protecting against unauthorized database access or manipulation (see the sketch after this list).
  3. Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF): Defending against web-based attacks that can compromise user data and interactions.
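
As a concrete example of the second concern, the hedged sketch below contrasts building a SQL statement by string concatenation from a plugin argument with using a parameterized query; the `orders` table and its columns are invented for the example.

```python
import sqlite3

def find_orders_insecure(conn: sqlite3.Connection, customer: str):
    # VULNERABLE: a plugin argument like "' OR '1'='1" dumps the whole table.
    return conn.execute(
        f"SELECT id, total FROM orders WHERE customer = '{customer}'"
    ).fetchall()

def find_orders_parameterized(conn: sqlite3.Connection, customer: str):
    # SAFER: the driver binds the value, so it can never alter the SQL structure.
    return conn.execute(
        "SELECT id, total FROM orders WHERE customer = ?", (customer,)
    ).fetchall()
```

The same principle extends to the third concern: plugin output that ends up in a web page should be encoded for its destination context, and state-changing plugin actions should require explicit confirmation rather than being triggerable by a forged cross-site request.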

How Prompt Security Helps

Recognizing that no architecture is flawless and that misconfigurations or overly permissive roles do occur, our platform monitors every prompt directed at these integrated tools and verifies that each prompt leading to a tool call is legitimate and benign. When a prompt is identified as potentially harmful, it is promptly blocked and an alert is issued. This proactive approach is key to maintaining the security and integrity of your systems, safeguarding against emerging cybersecurity threats in a dynamic technological landscape.
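
One way to picture this kind of control, without reference to any particular product's internals, is a gate that sits between the model and its tools: every proposed tool call is checked against a policy first, and anything flagged is blocked and reported instead of executed. The sketch below is a generic illustration of that pattern; `policy_flags` is a stand-in for real detection logic, not Prompt Security's implementation.

```python
import logging
from typing import Any, Callable

logger = logging.getLogger("tool_gate")

def policy_flags(tool_name: str, arguments: dict[str, Any]) -> list[str]:
    # Placeholder detection logic: a real deployment would use classifiers,
    # signatures, and conversation context; here we only flag obvious patterns.
    text = " ".join(str(value) for value in arguments.values())
    flags = []
    if any(marker in text for marker in (";", "&&", "DROP TABLE", "<script")):
        flags.append("suspicious tool argument")
    return flags

def guarded_call(tool: Callable[..., Any], tool_name: str, arguments: dict[str, Any]) -> Any:
    # Inspect the model-proposed call before letting it reach the tool.
    flags = policy_flags(tool_name, arguments)
    if flags:
        logger.warning("blocked call to %s: %s", tool_name, flags)  # raise an alert
        return {"error": "tool call blocked by policy"}
    return tool(**arguments)
```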
