A customer flagged an alleged false positive… Turns out it was an AI component in Google Workspace

Prompt Team
July 25, 2024

TL;DR: Recently, our dynamic detection identified an AI component in Google Workspace. One of our customers initially flagged it as a potential false positive, but a deeper inspection revealed it was Google's 'Help me Write' feature, an AI-based assistant designed to help users draft and edit documents.

A few weeks ago, one of our customers in the financial services industry reached out to report what they believed was a ‘false positive’: the Prompt Security dashboard showed an alert regarding access to Google Docs by one of their employees.

In this particular case, the organization has adopted a relatively restrictive policy for new GenAI tools, ensuring that employees use only a handful of approved GenAI applications. For those approved tools, Prompt Security also ensures that no sensitive data is leaked.

Google Workspace, with the many applications it contains, wasn’t considered a GenAI tool, so this customer hadn’t blocked its use or defined any specific policies for it. It therefore made sense that their administrators reported a ‘false positive’: from their perspective, we had flagged Google Drive as AI while one of their users was simply writing a document.

We promptly inspected the alert and realized that, as part of the Google Workspace Labs program, the customer had access to new AI features, including the ‘Help me Write’ assistant in Google Docs. This feature helps users draft brand-new text or rewrite existing text within a document.

Google Docs' 'Help me Write' feature. Source: Google

For each new AI application or tool that emerges, the Prompt Security platform can dynamically detect the AI use case within it and assess the associated risk to the organization. Based on this assessment and the organization’s own policy, the platform can allow or block the interaction, or sanitize data before it’s shared with the LLM behind the GenAI tool. It can also educate users on the risk of their actions by surfacing an educational popup.
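To make the policy step concrete, here is a minimal sketch of what such a decision could look like. It is illustrative only: the types, field names, and thresholds are assumptions made for this example, not Prompt Security’s actual API or policy model.

```typescript
// Illustrative sketch of a per-tool policy decision (not Prompt Security's real API).

type PolicyAction = "allow" | "block" | "sanitize";

interface DetectedTool {
  name: string;        // e.g. "Google Docs 'Help me Write'"
  approved: boolean;   // is the tool on the organization's approved list?
  riskScore: number;   // 0 (benign) .. 1 (high risk), from the risk assessment
}

interface OrgPolicy {
  blockUnapproved: boolean;   // block GenAI tools that are not explicitly approved
  sanitizeThreshold: number;  // sanitize prompts when risk exceeds this score
  educateUser: boolean;       // surface an educational popup on risky usage
}

interface Decision {
  action: PolicyAction;
  showEducationalPopup: boolean;
}

function decide(tool: DetectedTool, policy: OrgPolicy): Decision {
  // Unapproved tools fall under the organization's policy for new GenAI tools.
  if (!tool.approved && policy.blockUnapproved) {
    return { action: "block", showEducationalPopup: policy.educateUser };
  }
  // Approved tools: strip sensitive data from prompts when the risk is high.
  if (tool.riskScore >= policy.sanitizeThreshold) {
    return { action: "sanitize", showEducationalPopup: policy.educateUser };
  }
  return { action: "allow", showEducationalPopup: false };
}

// Example: a 'Help me Write'-style feature, newly detected and not yet approved.
const decision = decide(
  { name: "Google Docs 'Help me Write'", approved: false, riskScore: 0.6 },
  { blockUnapproved: true, sanitizeThreshold: 0.7, educateUser: true },
);
console.log(decision); // { action: 'block', showEducationalPopup: true }
```

In this customer’s case, the ‘Help me Write’ feature was not on the approved list, so the restrictive policy for new GenAI tools kicked in: block the interaction and educate the user.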

Our dynamic detection of emerging GenAI tools, including GenAI components embedded in non-AI sites (e.g. GenAI chatbots, copilots), had automatically detected the feature and applied the organization’s policy for new AI tools. That policy blocked access to the feature and surfaced an educational popup to the employee. The end result? The employee was warned in real time that they were using a GenAI application, the appropriate security controls were enforced, and the organization was not exposed to data privacy issues resulting from unsafe usage of GenAI.

Activity Monitor on the Prompt Security platform

How does it work?

The technology behind Prompt Security’s protection of employee GenAI usage listens to browser traffic and uses advanced heuristics to identify behaviors indicative of GenAI applications. This allows us to detect even previously unseen GenAI components, ensuring comprehensive security visibility and management of GenAI tools across users and user groups in the organization. We also leverage the “network effect” to continually update our detection models, learning from our customers’ usage patterns so detection becomes more effective over time. By analyzing web page content, network traffic, and user interactions in detail, we can accurately determine whether an action is user-generated, product-based, or automated. This provides our customers with high-fidelity, low-false-positive, comprehensive detection of employee GenAI usage.
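As a rough illustration of the heuristic approach, the sketch below combines a few weighted behavioral signals into a GenAI-likelihood score. The specific signals, weights, and threshold are assumptions chosen for readability, not the production detection model.

```typescript
// Illustrative sketch of heuristic GenAI detection (signals and weights are assumed).

interface TrafficSample {
  url: string;
  requestBody: string;          // serialized request payload
  responseContentType: string;  // e.g. "text/event-stream" for streamed LLM replies
  domText: string;              // visible page text
  userTyped: boolean;           // was the submitted text typed by the user?
}

// Each heuristic contributes its weight when its indicator is present.
const heuristics: Array<{ weight: number; test: (s: TrafficSample) => boolean }> = [
  // Prompt-like request payloads (free-form text under prompt/message keys).
  { weight: 0.4, test: (s) => /"(prompt|messages|instruction)"\s*:/.test(s.requestBody) },
  // Token-by-token streaming responses, typical of LLM backends.
  { weight: 0.3, test: (s) => s.responseContentType.includes("text/event-stream") },
  // Assistant-style copy rendered in the page UI.
  { weight: 0.2, test: (s) => /help me write|ask ai|generate draft/i.test(s.domText) },
  // The payload was actually typed by the user (user-generated, not automated).
  { weight: 0.1, test: (s) => s.userTyped },
];

function genAiScore(sample: TrafficSample): number {
  return heuristics.reduce((sum, h) => sum + (h.test(sample) ? h.weight : 0), 0);
}

function isGenAiUsage(sample: TrafficSample, threshold = 0.5): boolean {
  return genAiScore(sample) >= threshold;
}

// Example: a user-typed prompt sent from a Google Docs page.
const sample: TrafficSample = {
  url: "https://docs.google.com/document/d/example",
  requestBody: '{"prompt":"Write a short intro paragraph about Q3 results"}',
  responseContentType: "text/event-stream",
  domText: "Help me write",
  userTyped: true,
};
console.log(genAiScore(sample).toFixed(2), isGenAiUsage(sample)); // 1.00 true
```

A real detector would of course weigh far more signals and update them continuously (the “network effect” mentioned above); the point here is that weighted behavioral indicators, rather than a static URL list, can flag a GenAI component on a site that isn’t itself an AI product.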

We recently wrote in more depth about our Dynamic Detection.

Legacy security measures, such as URL filtering or DLP, are not equipped to deal with the constantly growing use of AI within organizations. Just as you wouldn’t rely solely on a firewall for your network security strategy, it’s ineffective to depend on static lists for AI detection. Our dynamic detection mechanism addresses this gap by constantly adapting to new AI tools and updates.

Want to learn more about how you can safely manage emerging AI tools in your organization? Book time with us. 
