8 Real World Incidents Related to AI

August 31, 2024

Since the public release of ChatGPT in late 2022, there have already been several incidents that have made it to the headlines, incidents that involved the use of AI by employees or the deployment of AI capabilities on homegrown applications. We’ve put together a short recap of 8 prominent examples that involved AI.

Some of the questions we get asked with most frequency from many of our prospects and the ecosystem at large when embarking on their GenAI Security journey, revolve around "How real are the risks related to AI, have there been actual incidents involving AI?" The answer is yes. Numerous organizations have already faced security breaches of varying severity which involved either the unchecked use of AI by employees or the deployment of AI as part of their offering or services, such as customer service AI chatbots.

Truth is, many organizations aren’t even aware of the sprawl of AI given that they don’t have the necessary tools in place to gain the visibility and policy management to monitor it, let alone prevent data leaks or other types of risks associated with AI tools. And while that alone will not suffice to prevent the next ‘WannaCry’ of GenAI, it’s best to start setting the foundations to manage the risk of AI while still embracing the productivity gains it brings.

8 examples of real world incidents related to the use of AI 

1. Samsung Data Leak via ChatGPT:

May 2023

Samsung employees accidentally leaked confidential information by using ChatGPT to review internal code and documents. As a result, Samsung decided to ban the use of generative AI tools across the company to prevent future breaches.

Read more: https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak 

2. Chevrolet AI Chatbot Offers Car for $1:

December 2023

A Chevrolet dealership’s AI chatbot was tricked into offering a $76,000 Tahoe for just $1. A user easily manipulated the AI chatbot’s responses, proving that these customer-facing tools frequently present on websites can be exploited through simple prompts.

Read more: https://www.upworthy.com/prankster-tricks-a-gm-dealership-chatbot-to-sell-him-a-76000-chevy-tahoe-for-1-rp2  

3. Air Canada Refund Incident:

February 2024

An Air Canada customer reportedly manipulated the company’s AI chatbot to obtain a refund larger than expected. The chatbot misinterpreted the request, leading to an overpayment. This showcases that beyond brand reputation damages of different caliber, the unmonitored and insecure deployment of AI-powered chatbots can lead to financial losses.

Read more: https://www.washingtonpost.com/travel/2024/02/18/air-canada-airline-chatbot-ruling/ 

4. Google Bard’s Misinformation Incident:

February 2023

Shortly after launching its Bard AI, Google encountered credibility issues when the chatbot provided incorrect information during a demonstration about the James Webb Space Telescope. The error caused an immediate dive of the Alphabet’s stock price, wiping $100bn of the company’s value. 

Read more: https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/ 

Image source: https://www.debugbar.com/google-bard-a-mistake-makes-googles-stock-price-plunge/

5. DPD Chatbot Incident:

January 2024

Delivery firm DPD temporarily disabled a portion of its AI-powered chatbot after a customer tested its capabilities by asking it to perform unconventional tasks, such as telling jokes and criticizing the company. This incident highlighted the potential risks of deploying LLMs in customer-facing applications where unexpected inputs can lead to unusual or inappropriate responses.

Read more: https://www.bbc.com/news/technology-68025677 

6. Snapchat’s “My AI” Incident:

August 2023

Snapchat’s AI chatbot, powered by OpenAI’s GPT model, faced backlash when users reported it giving concerning responses, including potentially harmful advice. Although designed to engage in conversations and provide recommendations, it sometimes produced alarming responses, raising questions about its safety and reliability in a social media environment.

Read more: https://techcrunch.com/2023/08/16/snapchats-my-ai-goes-rogue-posts-to-stories-but-snap-confirms-it-was-just-a-glitch/

7. Amazon data used for training

January 2023

In one of the earliest incidents related to the use of Generative AI tools by employees, Amazon warned its employees not to share confidential information with ChatGPT after noticing different instances in which responses from the LLM closely resembled sensitive company information, which was probably used as training data. A research by Walter Haydock estimated the losses of this incident at over $1M.

Read more: https://www.businessinsider.com/amazon-chatgpt-openai-warns-employees-not-share-confidential-information-microsoft-2023-1

8. Data Exfiltration via Slack AI:

August 2024

The AI service of Slack provides generative tools within the application for tasks like summarizing long conversations, finding answers to questions, and summarizing rarely visited channels. Researchers demonstrated how Slack’s AI can be tricked into leaking data from private channels via prompt injection.

Read more: https://www.theregister.com/2024/08/21/slack_ai_prompt_injection/

Learn more about AI risks and vulnerabilities: https://www.prompt.security/resources/genai-risks-and-vulnerabilities

Share this post