
OWASP Dramatically Expands GenAI Security Guidance with Guides for Handling Deepfakes, Building an AI Security Center of Excellence, and a GenAI Security Solutions Landscape

October 31, 2024

Comprehensive resources from the respected OWASP Top 10 for LLM Application Security Project open source community empower organizations to adopt and manage generative AI and LLM apps safely

WILMINGTON, Del., Oct. 31, 2024 -- The Open Worldwide Application Security Project (OWASP) Top 10 for LLM Application Security Project (https://genai.owasp.org), a global, community-led, open source project made up of cybersecurity and AI experts, today announced the release of new security guidance materials as part of its expanded project focus. These initiatives aim to provide actionable guidance and resources that help organizations identify and manage risks so they can securely adopt, deploy, and manage LLM and generative AI (GenAI) applications.

Since its inception in May 2023, the OWASP Top 10 for LLM Risks and Mitigations has become a key industry resource for LLM security best practices. The project now involves over 500 experts from 110+ companies and a global community of 5,500 members, collaborating with standards bodies like NIST and MITRE. Initially focused on LLM vulnerabilities and mitigations, the group's mission has expanded with the rapid growth of LLM and Generative AI, producing additional resources for a broader audience.

In early 2024, the OWASP Top 10 for LLM Application Security Project expanded its focus to include not only developers, data scientists, and security practitioners, but also strategic stakeholders such as CISOs and compliance officers. This introduced valuable resources for CISOs, including the LLM Cybersecurity and Governance Checklist. The broader scope now covers governance, risk management, and compliance for LLM deployment, and is supported by project initiatives and working groups dedicated to Risk and Exploit Data Mapping, LLM AI Cyber Threat Intelligence, Secure AI Adoption, and AI Red Teaming & Evaluation.

Today we are announcing the availability of an initial set of new research, guidance, and resources to further support organizations in tackling the challenges of securely adopting, developing, and deploying LLM and generative AI systems and applications, with a comprehensive strategy encompassing governance, collaboration, and practical tools.

New Resources:

  • The Guide for Preparing and Responding to Deepfake Events addresses the growing threat of "hyper realistic digital forgeries." Stemming from the AI Cyber Threat Intelligence initiative, which focuses on exploit detectability, differences in model outputs, and ethical AI usage, this new resource highlights practical defense strategies to keep organizations secure as deepfake technology continues to improve. Read the blog from the research team to learn more.

  • The Center of Excellence Guide provides a business framework and set of best practices designed to help organizations establish an AI security center of excellence, or strengthen existing efforts, by creating collaborative environments for managing generative AI adoption and risk that emphasize cross-departmental cooperation among security, legal, data science, and operations teams. As part of the Secure AI Adoption initiative, this guide enables organizations to develop and enforce security policies, educate staff on AI use, and ensure that generative AI technologies are deployed securely and responsibly.

  • The AI Security Solution Landscape Guide serves as a comprehensive reference, offering insights into both open source and commercial solutions for securing LLMs and generative AI applications. By categorizing existing and emerging security solutions, it gives organizations guidance for effectively addressing the risks identified in the Top 10 list.

The project is continually expanding its scope with dedicated working groups focused on addressing key research gaps and producing additional resources to help users and organizations onboard new AI tools and technologies safely.

Steve Wilson, project lead for the OWASP Top 10 for LLM Project, said: "We're two years into the generative AI boom, and attackers are using AI to get smarter and faster. Security leaders and software developers need to do the same. Our new resources arm organizations with the tools they need to stay ahead of these increasingly sophisticated threats."

Scott Clinton, co-project lead for the OWASP Top 10 for LLM Project, said: "As generative AI reshapes industries, its security challenges grow equally complex, leaving security teams behind and threat actors empowered. The strength of the project is its open source, community-led collaboration, uniting diverse cybersecurity and AI expertise to deliver expert insights to benefit the industry. These insights have allowed us to quickly uncover and fill gaps in security research and guidance, translating complex principles into practical, actionable resources that will evolve with the fast-changing Gen AI landscape to help security leaders, practitioners, and developers."

Itamar Golan, CEO & Co-founder of Prompt Security, said: "The OWASP Top 10 for LLM Guide provides essential, actionable guidance, empowering organizations to meet AI security standards while keeping pace with Generative AI’s rapid adoption and evolution. We’re proud to have supported this project from the beginning and remain committed as it expands to help organizations navigate the complexities of AI security."

About The OWASP Top 10 for LLM Project
The OWASP Top 10 for LLMs and Generative AI security project is a community-led, industry-neutral, open source project comprising over 500 global cybersecurity experts, data scientists, developers, innovators, and IT and security leadership professionals collaborating with organizations like NIST, MITRE, CSA, and ISO. It aims to provide cyber threat research guidance, document top risks and mitigations, and offer actionable solutions for securing the rapidly evolving landscape of LLMs and generative AI. Supported by over 110 companies, the community has grown to more than 5,500 members. More information is available at https://genai.owasp.org/.

About OWASP
The Open Worldwide Application Security Project (OWASP) is a nonprofit foundation that works to improve software security. We are an open community dedicated to enabling organizations to conceive, develop, acquire, operate, and maintain applications that can be trusted. Our projects, tools, documents, forums, and chapters are free and open to anyone interested in improving application security. Programming includes community-led open source projects spanning code, documentation, and standards; more than 250 local chapters worldwide; tens of thousands of members and participants; and industry-leading educational and training conferences. Additional information is available at https://owasp.org/.

This press release was originally published here: https://genai.owasp.org/2024/10/28/owasp-dramatically-expands-genai-security-guidance-with-guides-for-handling-deepfakes-building-an-ai-security-center-of-excellence-and-a-gen-ai-security-solutions-guide/
