The Open Worldwide Application Security Project (OWASP) provides guidance on governance, risk management, and compliance for LLM deployment. Led by more than five hundred experts in cybersecurity, AI and IT, the project serves thousands of members – from developers and data scientists to compliance officers and security practitioners – who seek knowledge concerning risks and security solutions for LLM apps and GenAI.
One of OWASP’s most prominent resources for security best practices is its Top 10 for LLM Applications & Generative AI, which lays out the most critical vulnerabilities found in applications that use LLMs. Prompt Security CEO & Co-founder Itamar Golan, an expert in GenAI Security, played a significant role in the list’s compilation and continues to contribute to the intermittent release of new OWASP resources on security guidance.
OWASP Top 10 for LLM Applications and GenAI in 2025
1. Prompt Injection
When an attacker manipulates a large language model (LLM) through carefully crafted inputs.
Prevention and mitigation:
- Measures that can mitigate the impact of prompt injections include enforcing privilege control on LLM access to backend systems.
- Adding a human in the loop for extended functionality .
- Segregating external content from user prompts.
- Instructing models to ignore attempts to modify core instructions.
2. Sensitive Information Disclosure
When sensitive data is undesirably revealed as a consequence of either LLM integration (i.e., an LLM application revealing sensitive data via its outputs) or LLM usage (i.e., a user feeding sensitive data into an external LLM app.)
Prevention and mitigation:
- Enforce strict access control methods to external data sources.
- Use data sanitization and cleansing to prevent user data from entering training model data.
3. Supply Chain
When third-party datasets, pre-trained models and plugins render LLM applications susceptible to security attacks.
Prevention and mitigation:
- Vet suppliers and their policies.
- Use third-party model integrity checks with signing and file hashes, as well as code signing for externally supplied code.
- Conduct regular updates of component inventory.
4. Data and Model Poisoning
When pre-training data, the fine-tuning process, or embedding data is manipulated so as to enable vulnerabilities that compromise a model’s security, effectiveness or ethical behavior.
Prevention and mitigation:
- Track data origins and transformations, and verify data legitimacy during all model development stages.
- Validate model outputs against trusted sources to detect signs of poisoning.
- Ensure sufficient infrastructure controls to prevent undesired access to data sources.
5. Improper Output Handling
When backend systems are exposed due to an LLM output being accepted and passed downstream without sufficient validation, sanitization and handling. Potential consequences of exploitation include XSS and CSRF in web browsers and SSRF, privilege escalation, and remote code execution on backend systems.
Prevention and mitigation:
- Apply proper input validation on responses that head from the model to backend functions.
- Encode model output back to users.
6. Excessive Agency
When LLMs take action without sufficient human oversight.
Prevention and mitigation:
- Set clear guidelines and constraints on LLM autonomy, ensuring that LLM tools only have access to required functions and, when possible, that such functions are closed-ended in nature.
- Where feasible, require human approval.
7. System Prompt Leakage
When the information that guides a model’s output and/or the instructions guiding its behavior contain sensitive data, the unintended exposure of which could make the model vulnerable to attacks.
Prevention and mitigation:
- Externalize sensitive information to systems that the model does not directly access.
- Rely on systems outside of the LLM to control model behavior.
- Implement an independent system (that is, outside the LLM) that can inspect the output to determine if the model is in compliance with expectations.
8. Vector and Embedding Weaknesses
When systems using retrieval-augmented generation with LLMs generate, store, or retrieve vectors and embeddings in such a way that bad actors can inject harmful content, manipulate model outputs, or access sensitive information.
Prevention and mitigation:
- Establish detailed access controls and permission-aware vector and embedding stores.
- Implement robust data validation pipelines for knowledge sources.
- Maintain detailed immutable logs of retrieval activities to identify suspicious behavior.
9. Misinformation
When LLMs produce false or misleading information that appears credible, often as the result of AI hallucinations.
Prevention and mitigation:
- Use techniques like parameter-efficient tuning (PET) and chain-of-thought prompting.
- Implement rigorous fact-checking for sensitive information.
- Establish secure coding practices to prevent the integration of vulnerabilities that may stem from incorrect code suggestions.
10. Unbounded Consumption
When LLMs are manipulated to process excessive amounts of information, opening them up to unauthorized usage and denial of service (DoS) attacks.
Prevention and mitigation:
- Employ measures that prevent inputs from exceeding predetermined size limits.
- Monitor resource allocation closely so that no single user or request can consume excessive computational resources.
- Design the system to maintain partial functionality, even as it degrades due to intolerable computational demand.
Prompt Security’s Vital Role in the Top Ten for LLMs
To achieve a list that is both concise and dependable, OWASP brought the most relevant and forward-thinking voices into the decision-making process. Together with his fellow contributors, Itamar assessed and refined language on various vulnerabilities before determining which language would advance for further consideration.
“The OWASP Top 10 for LLM Apps and GenAI empowers organizations to meet first-rate security standards while keeping pace with Generative AI’s rapid adoption and evolution. I am proud to have supported this project from the beginning and remain committed as it deepens and expands its essential and actionable guidance for navigating the complexities of AI security.” Itamar Golan, CEO & Co-founder of Prompt Security
How Prompt Security Helps
Prompt Security safeguards systems against all of these vulnerabilities and threats, helping make interactions with GenAI applications safe and legitimate. We block prompt injections with minimal latency overhead, prevent chatbot-induced leaks of sensitive data, counter model denial of service attacks by monitoring for abnormal usage, and more. Prompt Security is at the forefront of robust GenAI protection, ensuring your GenAI applications are safe and secure with real-time protection.