The European Union's Artificial Intelligence Act is the world's first comprehensive legal framework for AI. Adopted in 2024, it establishes clear rules for how AI systems can be developed, deployed, and used within the EU. Whether you are an AI developer, a company deploying AI tools, or a business leader evaluating your technology strategy, understanding the EU AI Act is now essential.
Understanding the Act: a risk-based approach
The EU AI Act does not regulate all AI systems equally. Instead, it takes a risk-based approach, categorizing AI systems into four tiers based on the level of risk they pose to health, safety, and fundamental rights:
- Unacceptable risk: AI systems that pose a clear threat to people's safety, livelihoods, or rights are banned outright. This includes social scoring by governments, real-time remote biometric identification in public spaces (with narrow exceptions for law enforcement), and AI that exploits vulnerabilities of specific groups.
- High risk: AI systems used in sensitive areas such as recruitment, credit scoring, education, law enforcement, migration, and critical infrastructure. These systems face the most stringent requirements.
- Limited risk: AI systems with specific transparency obligations, such as chatbots and deepfake generators, where users must be informed they are interacting with AI or viewing AI-generated content.
- Minimal risk: The vast majority of AI systems, such as spam filters or AI-enabled video games, which can be used freely with no additional requirements.
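To make the tiering concrete, here is a toy triage helper of the kind an organization might use as a first pass over its AI inventory. The category names and example use cases below are simplified illustrations, not the Act's legal definitions; a real classification requires analysis of the Act's prohibited-practices article and its high-risk annex.

```python
# Illustrative only: a simplified first-pass triage over an AI inventory.
# These use-case labels are examples, not the Act's legal criteria.
RISK_TIERS = {
    "unacceptable": {"social_scoring", "realtime_public_biometric_id"},
    "high": {"recruitment", "credit_scoring", "education",
             "law_enforcement", "migration", "critical_infrastructure"},
    "limited": {"chatbot", "deepfake_generator"},
}

def classify_use_case(use_case: str) -> str:
    """Return the (simplified) risk tier for a given AI use case."""
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    # Anything not caught above defaults to the minimal-risk tier,
    # which carries no additional obligations under the Act.
    return "minimal"
```

A triage like this is only a starting point for prioritization; borderline systems still need case-by-case legal review.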
High-risk AI systems
The requirements for high-risk AI systems form the backbone of the regulation. If your AI system falls into this category, you must comply with a comprehensive set of obligations:
- Risk management system: Establish and maintain a continuous, iterative process for identifying, analyzing, and mitigating risks throughout the AI system's lifecycle.
- Data governance: Ensure training, validation, and testing datasets are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. Data practices must comply with applicable data protection law.
- Technical documentation: Prepare detailed documentation that demonstrates compliance before the system is placed on the market. This documentation must be kept up to date.
- Record-keeping: Implement automatic logging capabilities that enable traceability of the system's functioning and allow for post-market monitoring.
- Transparency and information: Provide clear, adequate information to deployers, including the system's capabilities, limitations, and intended purpose.
- Human oversight: Design the system so that it can be effectively overseen by natural persons during its use, including the ability to override or reverse outputs.
- Accuracy, robustness, and cybersecurity: Ensure appropriate levels of accuracy, robustness against errors and attacks, and cybersecurity throughout the system's lifecycle.
Providers of high-risk AI systems must also undergo a conformity assessment before placing their systems on the EU market, and must register their systems in the EU database for high-risk AI.
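As one illustration of the record-keeping obligation above, traceability in practice usually means structured, automatic event logging around each system output. The sketch below assumes a hypothetical decision pipeline; the field names are our own choices for illustration, not fields prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch: structured audit logging to support traceability
# of a high-risk system's outputs. Field names are illustrative only.
logger = logging.getLogger("ai_audit")

def log_decision(model_version: str, input_id: str, output, operator: str) -> dict:
    """Record one system output as a structured, timestamped audit event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # ties the output to a specific model
        "input_id": input_id,             # links back to the input record
        "output": output,
        "operator": operator,             # supports human-oversight review
    }
    logger.info(json.dumps(event))
    return event
```

Logging the model version and a reference to the input alongside each output is what makes post-market monitoring and incident investigation possible later.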
Other AI systems: transparency obligations
Even if your AI system is not classified as high-risk, you may still have obligations under the Act. AI systems that interact directly with people must disclose that fact to users, so that individuals know they are communicating with a machine rather than a human. AI systems that generate or manipulate images, audio, or video content (deepfakes) must clearly label that content as artificially generated or manipulated.
These transparency requirements apply broadly and are designed to protect individuals from deception and manipulation, even where the underlying AI system is otherwise low-risk.
General-purpose AI models
The EU AI Act introduces specific rules for general-purpose AI (GPAI) models, including large language models and foundation models. All GPAI providers must:
- Prepare and maintain technical documentation, including training and testing processes and evaluation results.
- Provide information and documentation to downstream providers who integrate the GPAI model into their own AI systems.
- Establish a policy to comply with EU copyright law.
- Publish a sufficiently detailed summary of the content used for training.
GPAI models that pose systemic risk (generally, models trained with more than 10^25 FLOPs of compute) face additional obligations, including model evaluations, adversarial testing, incident tracking and reporting, and adequate cybersecurity protections.
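The 10^25 FLOP figure is a concrete, checkable threshold. As a back-of-envelope sketch, training compute is often estimated with the common 6·N·D rule of thumb (roughly six floating-point operations per parameter per training token); that approximation is a community convention, not part of the regulation.

```python
# Back-of-envelope sketch: compare estimated training compute against the
# Act's 10^25 FLOP presumption for systemic-risk GPAI models. The 6*N*D
# approximation is a common rule of thumb, not part of the regulation.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute via the 6 * params * tokens heuristic."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS
```

Under this heuristic, a hypothetical 70-billion-parameter model trained on 15 trillion tokens lands around 6.3 x 10^24 FLOPs, just under the threshold, while a 400-billion-parameter model on the same data would exceed it.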
Implementation timeline
The EU AI Act is being implemented in phases, with different provisions taking effect at different times:
- February 2025: Prohibitions on unacceptable-risk AI systems take effect. AI literacy obligations also begin to apply.
- August 2025: Rules for GPAI models and the governance framework (including the European AI Office) become applicable.
- August 2026: The majority of the Act's provisions take effect, including all requirements for high-risk AI systems.
- August 2027: Remaining obligations for specific categories of high-risk AI systems (those covered by existing EU product safety legislation) become applicable.
Each deadline represents a hard compliance date. Organizations need to work backwards from these dates to ensure they are ready.
What this means for you
The EU AI Act affects any organization that develops, deploys, or distributes AI systems in the European market, regardless of where that organization is based. Here is what you should be doing now:
- Map your AI systems and classify them under the Act's risk categories.
- Conduct a gap analysis to identify where your current practices fall short of the requirements.
- Prioritize compliance efforts based on the phased timeline, starting with any systems that might fall under the unacceptable-risk or GPAI categories.
- Build governance structures that support ongoing compliance, including documentation practices, monitoring processes, and clear accountability.
- Engage qualified advisors who understand both the legal requirements and the technical realities of AI systems.
The EU AI Act is not going away, and enforcement carries real teeth: fines for the most serious violations can reach EUR 35 million or 7% of global annual turnover, whichever is higher. The organizations that prepare early will be best positioned to operate confidently in the European market and beyond.
Need help understanding how the EU AI Act applies to your business? Get in touch and let us help you navigate the path to compliance.