The EU Artificial Intelligence Act
What enterprises need to know as of February 2025
Introduction to the EU AI Act
In March 2024, the European Parliament approved the Artificial Intelligence Act, the first comprehensive legislation of its kind. The Act was published in the Official Journal of the European Union on July 12, 2024, and entered into force a few weeks later, on August 1.
The EU AI Act aims to address risks, promote ethical use, and establish standards for AI applications. It sets out to ensure AI safety and compliance with laws and human rights, all the while encouraging innovation and adoption of AI systems.
The Act defines software under its purview as being capable of producing outputs that affect digital and real-world systems.

Enterprises that use GenAI cannot afford a mere surface-level understanding of the Act. Most GenAI systems are not innately high-risk; however, their adoption and repurposing can make their use cases high-risk. How GenAI systems are categorized in terms of risk (and, correspondingly, in terms of what is permitted) depends on how each system is implemented across the value chain.
To deploy GenAI without facing penalties, companies need to become more familiar with the EU AI Act.
PART 1
How do GenAI systems fit into the EU AI Act’s risk-based approach?
The EU AI Act lays out four risk tiers for AI systems placed on the market or put into service. Categorization within these tiers is based on an AI system’s potential to cause harm to EU residents. Each tier has a different set of possible requirements, which go into effect on a phased timeline.
Here are the four tiers:
1 - Minimal Risk
Minimal-risk systems are not subject to requirements.
For all intents and purposes, they are unregulated.
2 - Limited Risk
Limited-risk systems are subject to transparency requirements that begin to take effect on a phased timeline starting in August 2025:
- Disclosing to end-users that content was generated by AI
For example, a GenAI chatbot on a customer service website must include a visible disclaimer that communicates to end-users that their interactions are with an AI system rather than a human. The disclaimer could read “An AI model generated this response.” or “You are chatting with an AI assistant.”
- Designing models in ways that prevent them from generating illegal content
For example, a GenAI text generator for content creation could be trained with specific safeguards that filter out or flag harmful content before it is generated or shared with the end-user. A brief code sketch illustrating both the disclosure and this kind of filtering follows this list.
- Publishing summaries of copyrighted data used for training
For example, an AI art tool built on GenAI must list the types of copyrighted works (e.g., books, articles, and images) it has been trained on. The provider might also include a disclaimer, stating: “This model was trained on publicly available works up until [DATE], including a variety of open-source texts and artworks.”
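To make the disclosure and content-filtering requirements concrete, here is a minimal Python sketch of how a deployer might wire both into a chatbot. Every name in it (generate_reply, BLOCKED_TERMS, respond) is hypothetical, and a keyword denylist is only a stand-in for real safeguards such as trained moderation classifiers and human review.

```python
# Minimal sketch of two transparency measures: an AI disclosure shown to
# end-users and a pre-delivery content filter. All names are hypothetical.

AI_DISCLOSURE = "You are chatting with an AI assistant."

def generate_reply(prompt: str) -> str:
    # Stand-in for whatever GenAI backend actually produces the reply.
    return f"Echoing your question about: {prompt}"

# Toy denylist; real safeguards would use moderation models, not keywords.
BLOCKED_TERMS = {"counterfeit", "malware"}

def respond(prompt: str) -> str:
    reply = generate_reply(prompt)
    # Filter or flag harmful content before it reaches the end-user.
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        return f"{AI_DISCLOSURE}\n[Response withheld: flagged by content filter.]"
    # Always disclose that the content is AI-generated.
    return f"{AI_DISCLOSURE}\n{reply}"

if __name__ == "__main__":
    print(respond("How do I reset my router?"))
```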
3 - High Risk
High-risk systems are subject to extensive requirements beyond the transparency requirements mentioned above. Some high-risk systems are deemed such in and of themselves (i.e., for any and all use cases), whereas others are general-purpose AI systems used in high-risk contexts (e.g., clinical diagnostics).
4 - Unacceptable Risk
AI systems whose risk is deemed unacceptable are prohibited. Such AI systems are those used for, among other things:
- emotion recognition outside of ensuring safety or health
- scraping facial images to build a database
- predicting crime based on personal traits
- gathering biometric data for categorization
- using people’s vulnerabilities to manipulate their behavior
- tracking social behavior to produce a score that will impact real-world treatment
- introducing subliminal messaging in order to alter decision-making
The prohibition of such AI applications enters into force now, on February 2, 2025.
General-purpose GenAI (GPAI)
Most GenAI systems are general-purpose AI (GPAI), meaning that they have a wide range of intended and unintended possible uses. Under the EU AI Act, all GPAI models must provide technical documentation and instructions for use, comply with EU copyright law, and publish a summary of the content used for training. If a GPAI model presents a systemic risk, its provider must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.

With respect to risk, most general-purpose GenAI systems fall under the Act’s limited-risk tier, meaning they must comply with the transparency requirements detailed above. Other GenAI models are high-risk in practice and will be subject to stricter requirements.
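As an illustration of the training-content summary obligation, the sketch below shows what a machine-readable summary might contain. The field names and structure are assumptions for illustration, not the official template; the copyright reference follows EU Directive 2019/790, cited in the sources.

```python
# Hypothetical sketch of a machine-readable "summary of training content"
# of the kind a GPAI provider must publish. Field names are illustrative.
import json

training_summary = {
    "model_name": "example-gpai-model",   # hypothetical model
    "data_cutoff": "2024-06",             # training data cutoff
    "content_categories": [
        {"type": "web text", "examples": ["open-source articles", "forums"]},
        {"type": "books", "examples": ["public-domain literature"]},
        {"type": "images", "examples": ["openly licensed artworks"]},
    ],
    "copyright_policy": "Opt-out requests honored per EU Directive 2019/790.",
}

print(json.dumps(training_summary, indent=2))
```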
PART 2
Identifying high-risk systems and understanding their applicable compliance requirements
A healthy portion of the EU AI Act focuses on defining high-risk AI systems and delineating the requirements to which such systems are subject.
Article 6 of the Act lays out the criteria that categorize AI as high-risk. Article 6(1) details conditions which, when met together, classify systems as high-risk without exception, focusing on safety components and stand-alone safety products. Article 6(2) specifies categories of high-risk systems, including areas with high potential for impact on humans, and grants the European Commission the authority to amend this list as necessary.
High-risk AI applications include critical infrastructure, education and vocational training, employment, essential private and public services, certain systems in law enforcement, migration and border management, and justice and democratic processes.
Beyond demonstrating sufficient transparency, systems used in these applications must do the following:
- Monitor and assess performance continuously so as to identify emerging risks.
- Document design and usage, including logging systems’ technical design and training data as well as details on how the systems function and interact with end-users (a sketch of what such a record might capture appears after this list).
- Register in a dedicated database managed by the European Commission, so that authorities can stay aware of deployment and use.
- Allow for human oversight by applying human-in-the-loop (HITL) principles where necessary and activating mechanisms for reviewing and correcting outputs.
- Implement pre-deployment risk assessments to mitigate risks throughout systems’ lifecycles.
- Employ a data governance framework, including procedures for collecting, handling, and managing data.
- Undergo conformity assessments by accredited bodies to evaluate compliance, both before and after initial deployment.
- Report to the European Commission incidents whose consequences could negatively impact EU residents.
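As one illustration of the documentation, oversight, and incident-related duties above, here is a hedged Python sketch of a structured record a high-risk system might log per interaction. The field names are assumptions chosen for this sketch, not fields mandated by the Act.

```python
# Illustrative record a high-risk system might log to support documentation,
# human oversight, and incident reporting. Field names are assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class InteractionRecord:
    system_id: str                 # identifier used in the EU database registration
    model_version: str
    user_input: str
    model_output: str
    risk_flags: list[str] = field(default_factory=list)  # emerging risks observed
    human_reviewed: bool = False   # set True once a human-in-the-loop signs off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = InteractionRecord(
    system_id="eu-db-0001",        # hypothetical registration ID
    model_version="v2.3",
    user_input="Assess this loan application.",
    model_output="Recommend manual review.",
    risk_flags=["low-confidence output"],
)
print(asdict(record))
```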
The rules concerning some high-risk systems come into effect 24 months after the Act’s entry into force (in August 2026), and the rules concerning others after 36 months (in August 2027). By February 2026, the European Commission will issue guidelines that “specify the practical implementation of classification rules for high-risk AI systems, including the conditions for exceptions.”
PART 3
Conditions for gaining exemption from high-risk status
In addition to specifying categories of high-risk systems, Article 6 (in paragraph 3) includes conditions under which systems that might otherwise be considered high-risk are exempt from that categorization. The primary requirement for exemption is that the system does not pose a significant risk to individuals’ health, safety, or fundamental rights, including by not materially influencing decision-making.
To be eligible, a system must not profile humans (doing so automatically disqualifies systems from exemption) and must be intended to do one of the following:
- Perform a narrow procedural task.
- Improve the result of a previously completed human activity.
- Detect decision-making patterns or deviations from prior decision-making patterns, without replacing or influencing a previously completed human assessment absent proper human review.
- Perform a task preparatory to an assessment relevant to the high-risk use cases identified under Article 6(2).
A provider that considers its system exempt must document that assessment and must register the system in the dedicated database established by the European Commission.
PART 4
Data Governance and Security
Understanding who is responsible for meeting the EU AI Act's data governance requirements can be challenging as many factors are involved. From the array of misconceptions surrounding these requirements, we have chosen to address three of the most common:
Misconception #1
“Such requirements apply without regard for a system’s delivery model or infrastructure.”
The EU AI Act’s requirements are indeed meant to apply to AI systems regardless of delivery model or infrastructure. In terms of compliance and responsibility, however, there is a key difference between SaaS GenAI, which is hosted in the cloud and provided as a subscription service, and on-premise GenAI, which is deployed within an organization’s own data center or infrastructure, giving that organization full control over the AI system, data privacy, and security. To understand the Act’s data management and cybersecurity requirements, one must understand how the burden of responsibility differs between these two models.
In the case of SaaS GenAI, users (that is, “deployers”: parties that deploy an AI system in a professional capacity, as distinct from affected end-users) access tools via the internet, with third-party providers (developers) supplying the necessary software and hardware. As a result, responsibility for adhering to the EU AI Act is shared between the user, who must ensure proper usage and due diligence, and the provider, who is responsible for data security and compliance.
Conversely, with on-premise GenAI, organizations manage all aspects of the system, including hardware, maintenance, and IT resources. While this model places more responsibility on the organization in terms of compliance with the EU AI Act, it also offers greater control and customization. With full oversight and access, the organization can directly manage adherence to regulatory requirements, without relying on third parties.
Misconception #2
“Non-EU organizations are not subject to the Act’s data governance requirements.”
The EU AI Act applies to AI systems being marketed in the EU or whose high-risk outputs are used in the EU – basically, any AI system that intersects with the EU supply chain. This is without regard for where companies are based and without regard for whether they developed or purchased the AI capabilities they use.
Misconception #3
“These requirements may soon become outdated, as the EU AI Act will not retain global credibility.”
The EU’s strong regulatory influence, coupled with its comprehensive approach to managing emerging technologies, positions the EU AI Act as a potential cornerstone for AI regulation worldwide. Much as the General Data Protection Regulation (GDPR) set a global standard for data protection and compelled companies worldwide to align with European norms, the EU AI Act could serve as a key reference point for AI governance across jurisdictions and drive greater regulatory convergence.
PART 5
Penalties and enforcement mechanisms
Violations of the EU AI Act’s provisions result in the following penalties:
- Companies that deploy prohibited AI systems face fines of up to €35 million or up to 7% of their total worldwide annual turnover for the preceding financial year.
- Failure to comply with other requirements set by the Act will result in fines of up to €15 million or up to 3% of total worldwide annual turnover for the preceding financial year.
- Misleading the Act’s enforcement mechanisms by providing them with “incorrect, incomplete or misleading information” will result in fines of up to €7.5 million or up to 1% of total worldwide annual turnover for the preceding financial year.
For SMEs and startups, each fine is capped at whichever of the two amounts (the maximum percentage or the maximum fixed figure) is lower. For all other entities, the higher of the two applies, as the worked example below illustrates.
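The interplay between the fixed ceilings and the turnover percentages is easiest to see with numbers. The Python sketch below encodes the three tiers described above; the function name and structure are illustrative only, and actual fines are set by regulators, not formulas.

```python
# Worked example of the fine ceilings described above:
# EUR 35M / 7% (prohibited practices), EUR 15M / 3% (other obligations),
# EUR 7.5M / 1% (misleading information).

TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_noncompliance": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, worldwide_turnover_eur: float, is_sme: bool) -> float:
    fixed_cap, pct = TIERS[violation]
    pct_cap = worldwide_turnover_eur * pct
    # SMEs and startups: the lower of the two ceilings; otherwise the higher.
    return min(fixed_cap, pct_cap) if is_sme else max(fixed_cap, pct_cap)

# A large enterprise with EUR 1B turnover deploying a prohibited system:
print(max_fine("prohibited_practice", 1_000_000_000, is_sme=False))  # 70,000,000.0
# An SME with EUR 20M turnover for the same violation:
print(max_fine("prohibited_practice", 20_000_000, is_sme=True))      # 1,400,000.0
```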
The EU AI Act includes several mechanisms for enforcing compliance with its requirements:
1 - National competent authorities (NCAs)
Each EU member state designates an NCA (or multiple NCAs) to oversee enforcement of the EU AI Act. NCAs monitor AI systems that affect EU residents in their jurisdiction and have the authority to conduct inspections, audits and investigations as needed.
2 - The European Artificial Intelligence Board (EAIB)
The EAIB, created by the European Commission, coordinates NCA enforcement and works to ensure consistent application of the Act’s provisions across all EU jurisdictions, with an emphasis on cross-border cases.
3 - Market Surveillance
The EU AI Act empowers national authorities to conduct inspections, checks, and audits of AI systems. It also allows approved third parties to assess whether AI systems comply with the Act.
PART 6
Preparing GenAI systems for compliance
To align with the EU AI Act’s requirements, organizations’ security and risk management leaders should catalog and analyze AI-enabled use cases so as to better understand their defining characteristics. This will help organizations assess the risk of their own AI-enabled use cases and assign each an appropriate risk tier (a minimal sketch of such a catalog appears below).
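Here is a minimal Python sketch of such a use-case catalog. The triage rules are deliberately crude assumptions for illustration; real risk-tier assignment requires legal analysis, not a lookup function.

```python
# Illustrative internal catalog of AI-enabled use cases with risk-tier triage.
# The tiers mirror the Act; the triage logic is an assumption, not legal advice.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

def assign_tier(use_case: dict) -> RiskTier:
    # Crude triage illustrating the logic, not a legal determination.
    if use_case["prohibited_practice"]:
        return RiskTier.UNACCEPTABLE
    if use_case["annex_iii_area"] and not use_case["narrow_procedural_task"]:
        return RiskTier.HIGH
    if use_case["user_facing_genai"]:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

catalog = [
    {"name": "support chatbot", "prohibited_practice": False,
     "annex_iii_area": False, "narrow_procedural_task": False,
     "user_facing_genai": True},
    {"name": "CV screening assistant", "prohibited_practice": False,
     "annex_iii_area": True, "narrow_procedural_task": False,
     "user_facing_genai": True},
]
for uc in catalog:
    print(uc["name"], "->", assign_tier(uc).value)
```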
The EU AI Act holds enterprises responsible when they deploy AI capabilities, regardless of when such capabilities were first embedded in their systems, so it is especially crucial that organizations understand third-party AI risks. Sourcing, procurement, and vendor management representatives should engage supply chain partners to assess third-party product designs. Multidisciplinary teams should be formed to address third-party AI risks beyond vendor products, including residual risks inherent in outsourced services and hidden in contracts. Attestation can help minimize time spent on low-risk use cases, which account for most enterprise AI use cases.
PART 7
Prompt Security supports EU AI Act compliance
Prompt Security provides comprehensive capabilities to help organizations meet EU AI Act requirements and maintain compliance:
Continuous monitoring and risk assessment: The platform enables continuous monitoring of AI systems to identify emerging risks, aligning with the Act's requirements for high-risk systems.
Data privacy and governance: Prompt Security helps ensure compliance with regulatory frameworks through robust data privacy controls and governance features. The platform includes capabilities for redacting and sanitizing sensitive data, which helps prevent unauthorized content from entering or leaving GenAI applications.
Content moderation: Built-in content moderation features ensure that AI-generated content aligns with intended use and prevents harmful or inappropriate outputs, helping meet the Act's requirements for preventing illegal content generation.
Comprehensive visibility: The platform provides full logging and monitoring of AI interactions, supporting the Act's requirements for documentation and transparency.
Adaptation to evolving requirements: Prompt Security is built to be flexible, enabling organizations to adjust their compliance strategies as regulatory policies and interpretations change.
Risk management: The platform enables organizations to establish and enforce granular department- and user-specific rules and policies, supporting risk-based approaches that the Act requires.
Through these capabilities, Prompt Security helps enterprises maintain compliance while safely leveraging the benefits of generative AI.
If you want to learn more about how Prompt Security can help you navigate the EU AI Act in your organization, book time with us.
Sources
- https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240717-what-are-highrisk-ai-systems-within-the-meaning-of-the-eus-ai-act-and-what-requirements-apply-to-them
- https://www.prompt.security/blog/quick-overview-of-the-eu-ai-act-the-first-regulation-on-artificial-intelligence
- https://artificialintelligenceact.eu/implementation-timeline/
- https://artificialintelligenceact.eu/high-level-summary/
- https://www.gartner.com/en/documents/5327463
- https://www.gartner.com/en/documents/5488995
- https://www.gartner.com/en/documents/5736383
- https://artificialintelligenceact.eu/article/6/
- https://eur-lex.europa.eu/eli/dir/2019/790/oj/eng