Prompt Security Named a Representative Vendor in the Gartner® Innovation Guide for Generative AI TRiSM

May 6, 2024
On this Page

Generative AI brings a wide array of new risks, which Gartner maps into three main categories: content anomalies, data protection and AI application security risks.

In Gartner's recently updated Innovation Guide for Generative AI in Trust, Risk and Security Management (TRiSM) we find a market overview and its expected dynamics given the massive adoption of GenAI, a thorough explanation of the outlined categories of risk, and how different vendors tackle some or all of these risks.

In this short blog post we share some highlights of the report, which is available with a Gartner license here

Gartner defines the TRiSM market as comprising multiple software and services segments that support security, data protection and risk mitigation for adopters of GenAI applications and model interactions. Emphasizing that legacy controls are not enough to mitigate risks associated with using hosted GenAI models, the report explains that GenAI TRiSM tools include solutions for content anomaly detection, data protection and AI application security. These tools complement associated TRiSM measures implemented by builders or owners of GenAI models, applications and agents.

Prompt Security was included in the TRiSM Guide as a representative TRiSM vendor

At Prompt Security we are proud to be included in the Guide as a representative TRiSM vendor, ticking the boxes for delivering the key technological capabilities to deal with the three outlined categories of risk that Gartner defines: content anomaly detection, privacy and data protection and AI application security.

Unique GenAI risks divided into three primary categories, and Gartner’s recommendations for each of them

Content anomaly detection

  • Enterprise content transmitted through prompts or other methods that results in compromise of confidential data inputs
  • Unacceptable or malicious use
  • Hallucinations or inaccurate, illegal, copyright-infringing and otherwise unwanted outputs that compromise enterprise decision making

Gartner’s recommendation: Use Content Anomaly Detection to Manage Input and Output Risks

Input risks: Information and data submitted to GenAI models can result in data compromise if sent to environments that are not adequately secured and protected (e.g., if they are not encrypted in transit and, if stored, at rest). Inputs to GenAI models must also be screened to ensure they meet the enterprise’s acceptable use policies. Today, most of these inputs take the form of an interactive prompt. Security leaders must address input and output risks for applications leveraging more automated forms of inputs, such as API calls from other applications, and outputs directly transmitted to software agents.

Output risks: Outputs from GenAI models are unreliable, given an unpredictable rate of factual errors and hallucinations. Outputs can also be biased and potentially include copyright material or other unwanted, malicious, illegitimate or illegal information. This puts the enterprise at risk of being sued for illegally using proprietary materials output from the GenAI models’ applications, or making misinformed decisions.

Data protection

  • Data leakage and compromised confidentiality in hosted vendor environment or internal, self-managed environments
  • Inability to govern privacy and data protection policies in externally hosted environments
  • Difficulty conducting privacy impact assessments and complying with various regional regulations due to the “black box” nature of the third-party models

Gartner’s recommendation: Manage Data Protection Risks, Especially Inherent to External Environments

Using private, proprietary, sensitive or confidential information as inputs into hosted GenAI models comes with risks of data leakage and potential violations of existing regulations. Organizations must monitor and enforce privacy, data confidentiality and governance in the environments where these vendors’ models or applications are hosted.

AI application security

  • Adversarial prompting attacks
  • Vector database attacks
  • Hacker access to model states and parameters

Gartner’s recommendation: Manage AI Application Security Risks by Safeguarding New GenAI Attack Vectors

AI applications include new components to orchestrate the use of the models. This introduces security threats that conventional application security controls do not yet address, such as:

  • Adversarial prompting, which includes prompt guardrail “hijacking” and prompt injection attacks
  • Vector database breaches
  • Malicious hacker access to model states and parameters
  • Vulnerable and exploitable integration points with third-party models of applications (for example, API calls or plug-ins and other connectors)
  • Software supply chain risks

If you want to learn more about TRiSM, and how Prompt Security can help your organization mitigate the risks across the three main categories outlined, book time with us.

Gartner, Innovation Guide for Generative AI in Trust, Risk and Security Management, By Avivah Litan, Jeremy D'Hoinne,Gabriele Rigon, 12 April 2024

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Share this post