EU AI Act · April 2, 2026 · 10 min read

How to Classify AI Systems Under the EU AI Act

Practical guide to risk classification of your AI systems under the EU AI Act - with decision tree, Annex III categories, and concrete examples.

The EU AI Act has been in force since August 2024. From August 2, 2026, the obligations for high-risk AI systems under Annex III take effect. If you haven't classified your AI systems yet, you have a problem. This article shows step-by-step how the classification works and what you need to do.

Why Classification Is Now a Priority

On August 2, 2026, the obligations for high-risk AI systems under Annex III of the AI Act (Regulation (EU) 2024/1689) will become applicable. That's four months away. Four months to know which of your AI systems are affected, what obligations follow, and how to implement them.

If you think this is a distant-future concern, look at the penalty framework: up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices. Up to €15 million or 3% for most other violations, including breaches of the high-risk obligations. Up to €7.5 million or 1% for supplying false information to authorities (Art. 99 AI Act).

This is not theoretical risk. The prohibited AI practices under Art. 5 and the AI literacy obligation under Art. 4 have been enforceable since February 2, 2025. That means part of the regulation is already live.

Classification is the first step. Without it, you don't know if you're affected, which obligations apply, or where to deploy resources.

The Four Risk Levels at a Glance

The AI Act divides AI systems into four risk categories. The higher the risk, the stricter the requirements.

Prohibited AI Practices (Art. 5)

Certain AI applications are simply banned. These include:

  • Social scoring by public authorities or on their behalf
  • Subliminal manipulation causing harm
  • Exploitation of vulnerabilities (age, disability)
  • Real-time remote biometric identification in public spaces by law enforcement (with narrowly defined exceptions)
  • Emotion recognition in the workplace and educational institutions (with exceptions for safety purposes)

These prohibitions have been in force since February 2, 2025. No transition period remains.

High-Risk AI Systems (Art. 6, Annex I and III)

The most extensive category. High-risk systems are subject to detailed requirements on risk management, data quality, documentation, transparency, and human oversight. More on this in detail shortly.

Limited Risk Systems (Art. 50)

Certain AI systems must meet transparency obligations without being classified as high-risk. Specifically:

  • AI systems that interact directly with people (chatbots): users must know they're communicating with AI
  • Systems that generate synthetic content (deepfakes, AI-generated text, images, audio): content must be marked as AI-generated
  • Emotion recognition systems and biometric categorization: affected persons must be informed

Minimal Risk

All other AI systems. No specific obligations under the AI Act, but the general requirements for AI literacy under Art. 4 still apply. This includes spam filters, AI-powered product recommendations, or automatic translations.
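If you track your inventory in code, the four tiers map onto a small enum. The following is a minimal sketch of one possible internal representation in Python; the names are ours, not the Act's:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers of the AI Act, as labeled for an internal inventory."""
    PROHIBITED = "prohibited"      # Art. 5: banned outright
    HIGH_RISK = "high_risk"        # Art. 6 with Annex I / Annex III
    LIMITED_RISK = "limited_risk"  # Art. 50: transparency obligations
    MINIMAL_RISK = "minimal_risk"  # nothing specific beyond Art. 4 AI literacy
```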

Step-by-Step: Is Your System High-Risk?

Classification as a high-risk system follows a clear assessment schema under Art. 6 of the AI Act. Two paths lead to high-risk classification.

Path 1: AI as Safety Component in Regulated Products (Art. 6(1), Annex I)

If your AI system is a safety component of a product falling under the EU harmonization legislation listed in Annex I, and that product requires third-party conformity assessment, then the AI system is high-risk.

Examples: AI control in medical devices, AI in elevators, AI components in machinery. For these systems, obligations apply from August 2, 2027.

For most companies in the compliance and legal ops space, Path 2 is more relevant.

Path 2: Standalone High-Risk Systems Under Annex III (Art. 6(2))

Annex III lists eight areas where AI systems are classified as high-risk. Obligations for these take effect from August 2, 2026. Here are the categories with examples you'll encounter in practice:

1. Biometrics (Annex III No. 1)
Remote biometric identification (not real-time), biometric categorization based on sensitive attributes, emotion recognition. *Example: Your access control system uses facial recognition to identify employees.*

2. Critical Infrastructure (Annex III No. 2)
AI as a safety component in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity. *Example: An AI system that optimizes power grid operations and controls load distribution.*

3. Education and Vocational Training (Annex III No. 3)
AI systems that determine access to education, evaluate learning outcomes, or assess educational levels. *Example: An examination system that automatically grades tests and assigns marks.*

4. Employment and Workers Management (Annex III No. 4)
This is where it gets real for many companies. Affected are AI systems for:

  • Recruitment and pre-selection of candidates
  • Decisions about promotion, termination, and task allocation
  • Performance monitoring and evaluation of employees

*Example: Your recruiting tool that pre-filters applicants and creates rankings. Or the performance management system that aggregates employee evaluations and makes recommendations for promotions.*

5. Access to Essential Services (Annex III No. 5)
AI systems that assess the creditworthiness of natural persons, perform risk assessments for life and health insurance, or are used in the allocation of public social benefits. *Example: Your scoring model that automatically evaluates consumer loan applications.*

6. Law Enforcement (Annex III No. 6)
Risk assessment for victims, lie detectors, evaluation of evidence reliability, recidivism predictions.

7. Migration, Asylum, and Border Control (Annex III No. 7)
Risk assessments, support in application processing, identification.

8. Administration of Justice and Democratic Processes (Annex III No. 8)
AI systems that assist in the interpretation of facts and application of law, or that could influence elections.

The Exception: Art. 6(3) - When High-Risk Isn't Really High-Risk

This point is often overlooked in practice. Art. 6(3) provides that an AI system is not classified as high-risk despite belonging to an Annex III area if it does not pose a significant risk to the health, safety, or fundamental rights of natural persons.

This is the case when the AI system meets one of the following conditions:

  • It performs a narrow procedural task
  • It improves the result of a previously completed human activity
  • It detects decision patterns or deviations without replacing human assessment
  • It performs a preparatory task for an assessment that falls under Annex III use cases

Important: The provider must document why the exception applies and provide this documentation to competent authorities upon request (Art. 6(4)). It's not enough to rely on the exception internally. The justification must be robust.

*Example: An AI system that merely sorts applications in HR according to formal criteria (does the applicant have the required qualification yes/no), without ranking or evaluating, could fall under the exception. A tool that evaluates applicants and ranks them does not.*
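Since that justification must be producible on request, it is worth keeping it in structured form rather than buried in an email thread. Here is a minimal sketch, assuming a Python-based internal tooling stack; all class and field names are our own invention:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Art63Condition(Enum):
    """The four Art. 6(3) conditions; labels are our paraphrase."""
    NARROW_PROCEDURAL_TASK = 1
    IMPROVES_COMPLETED_HUMAN_ACTIVITY = 2
    DETECTS_PATTERNS_NO_REPLACEMENT = 3
    PREPARATORY_TASK = 4

@dataclass
class ExceptionJustification:
    """Documentation producible on request by competent authorities (Art. 6(4))."""
    system_name: str
    annex_iii_area: str        # e.g. "No. 4 - Employment"
    condition: Art63Condition  # which Art. 6(3) condition is met
    rationale: str             # why no significant risk to health, safety,
                               # or fundamental rights
    assessed_by: str
    assessment_date: date
```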

Provider or Deployer - What's Your Role?

The AI Act distinguishes between providers and deployers. Your obligations depend on which category you fall into.

Provider (Art. 3(3))

You are a provider if you develop an AI system (or have one developed) and place it on the market or put it into service under your own name or trademark. The obligations are extensive:

  • Conduct conformity assessment (Art. 43)
  • Establish and maintain risk management system (Art. 9)
  • Ensure data governance (Art. 10)
  • Create technical documentation (Art. 11)
  • Implement automatic logging (Art. 12)
  • Meet transparency obligations (Art. 13)
  • Enable human oversight (Art. 14)
  • Ensure accuracy, robustness, and cybersecurity (Art. 15)

Deployer (Art. 3(4))

You are a deployer if you use an AI system under your own authority. The obligations are leaner, but by no means trivial:

  • Use according to the provider's instructions for use (Art. 26(1))
  • Ensure human oversight by competent personnel (Art. 26(2))
  • Input data must correspond to the intended purpose (Art. 26(4))
  • Monitoring obligation: observe functionality, inform provider of risks (Art. 26(5))
  • Conduct data protection impact assessment where relevant (Art. 26(9))
  • Retain automatically generated logs for at least six months (Art. 26(6))

Warning: If you substantially modify an AI system or place it on the market under your own name, you become the provider yourself - with all associated obligations (Art. 25).
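The role logic of this section can be sketched as a first-pass helper. This is our own simplification of Art. 3(3), Art. 3(4), and Art. 25, not legal advice in code form:

```python
def determine_role(you_developed_it: bool,
                   under_your_own_name: bool,
                   substantially_modified: bool) -> str:
    """Rough first pass at the provider/deployer distinction.

    A deliberate simplification: cases like internal use of
    self-developed systems or a changed intended purpose (also
    covered by Art. 25) need proper legal review.
    """
    if you_developed_it and under_your_own_name:
        return "provider (Art. 3(3))"
    if substantially_modified or under_your_own_name:
        # Art. 25: substantially modifying or rebranding someone
        # else's system makes you the provider
        return "provider by virtue of Art. 25"
    return "deployer (Art. 3(4))"
```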

Documentation for High-Risk Systems

If your system is classified as high-risk, you must as a provider create technical documentation according to Art. 11 and Annex IV. This includes:

  • General description of the AI system and its intended purpose
  • Detailed description of the elements and development process
  • Information on monitoring, functioning, and control
  • Description of the risk management system (Art. 9)
  • Description of data governance and training, validation, and test data (Art. 10)
  • Instructions for use for deployers (Art. 13)
  • Description of human oversight measures (Art. 14)
  • Information on accuracy, robustness, and cybersecurity (Art. 15)
  • Description of the quality management system (Art. 17)

As a deployer, you must at minimum retain the logs, follow the instructions for use, and, in certain cases (for example, as a body governed by public law or when deploying credit-scoring systems), conduct a fundamental rights impact assessment (Art. 27).
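One way to keep the Annex IV structure actionable is a simple completeness check over the required sections. The section list mirrors the bullets above; the function and its assumed `doc` mapping are our own sketch:

```python
# Sections of the technical documentation per Art. 11 / Annex IV,
# mirroring the bullet list above.
ANNEX_IV_SECTIONS = [
    "general description and intended purpose",
    "elements and development process",
    "monitoring, functioning, and control",
    "risk management system (Art. 9)",
    "data governance: training, validation, test data (Art. 10)",
    "instructions for use (Art. 13)",
    "human oversight measures (Art. 14)",
    "accuracy, robustness, cybersecurity (Art. 15)",
    "quality management system (Art. 17)",
]

def missing_sections(doc):
    """Return the Annex IV sections that are still empty or absent.

    `doc` maps section names to their drafted text.
    """
    return [s for s in ANNEX_IV_SECTIONS if not doc.get(s, "").strip()]
```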

Practical First Steps

Enough theory. What should you do now, concretely?

1. Create an AI Inventory

Capture all AI systems in your organization. And I mean all of them - not just the obvious ones. Think about:

  • Recruiting tools and HR analytics
  • Customer service chatbots
  • Scoring models in credit decisions
  • AI features in existing enterprise software (CRM, ERP)
  • Automated decision systems in case processing
  • AI-based monitoring and security systems

For each system, document: What does it do? Who is the provider? Who uses it internally? What data does it process? What decisions does it influence?
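Those five questions translate directly into a record type. A minimal sketch of one inventory entry; the field names are our choice, nothing here is prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the organization-wide AI inventory."""
    name: str
    purpose: str               # what does it do?
    provider: str              # who built or supplies it?
    internal_users: str        # who uses it internally?
    data_processed: str        # what data does it process?
    decisions_influenced: str  # what decisions does it influence?
    risk_level: str = "unclassified"  # filled in during step 2
```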

2. Perform Classification

Check each system against the schema (a code sketch of the full cascade follows this list):

  1. Does it fall under a prohibition in Art. 5? If yes: shut it down immediately.
  2. Does it fall under Annex III (or Annex I)? If yes: high-risk assessment.
  3. Does the exception under Art. 6(3) apply? If yes: document why.
  4. Do transparency obligations under Art. 50 apply? If yes: implement labeling.
  5. None of the above: minimal risk, just ensure AI literacy per Art. 4.
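Expressed as code, the five checks form a short decision cascade. This sketch encodes only the order of the checks; answering each predicate correctly remains legal work, and every method name here is assumed, not a real API:

```python
def classify(system) -> str:
    """Walk the schema above. `system` is assumed to expose boolean
    checks whose answers come from legal review."""
    if system.is_prohibited_practice():           # 1. Art. 5
        return "PROHIBITED - shut it down immediately"
    if system.falls_under_annex_iii():            # 2. standalone high-risk
        if system.art_6_3_exception_applies():    # 3. document why!
            return "not high-risk per Art. 6(3) - keep justification on file"
        return "HIGH-RISK (Annex III) - obligations from Aug 2026"
    if system.is_annex_i_safety_component():      # 2. product path
        return "HIGH-RISK (Annex I) - obligations from Aug 2027"
    if system.has_art_50_transparency_duty():     # 4. chatbots, deepfakes, etc.
        return "LIMITED RISK - implement labeling"
    return "MINIMAL RISK - ensure AI literacy (Art. 4)"  # 5. everything else
```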

3. Prioritize

Not everything at once. Prioritize by the following criteria (a sorting sketch follows the list):

  • Regulatory risk: prohibited practices first (already in force), then high-risk systems
  • Business risk: systems affecting many people or making fundamental-rights-relevant decisions
  • Time urgency: Annex III systems before Annex I systems (2026 before 2027)
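If your inventory lives in code, the three criteria translate into a sort key: prohibited findings first, then high-risk systems by deadline, then breadth of impact. A sketch, assuming the inventory record was extended with fields we invent here (`deadline_year`, `people_affected`):

```python
def priority_key(rec):
    """Sort key: lower sorts first. Assumes each record carries
    `risk_level` (str), `deadline_year` (int, e.g. 2026 or 2027),
    and `people_affected` (int) - fields invented for this sketch."""
    tier = {"prohibited": 0, "high_risk": 1}.get(rec.risk_level, 2)
    return (tier, rec.deadline_year, -rec.people_affected)

# inventory.sort(key=priority_key)  # prohibited first, then by deadline
```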

4. Clarify Responsibilities

Assign internal owners. The AI Act is not purely a compliance matter. You need IT, Legal, HR, Data Protection, and the business units at the table.

What the Digital Omnibus Could Change

In December 2025, the EU Commission presented the draft "Digital Omnibus" regulation. This proposal - and this must be said clearly: it is a proposal, not law in force - includes simplifying requirements for certain high-risk systems and adjusting individual deadlines.

The proposal is currently going through the legislative process. It's unclear whether and in what form it will be adopted and when it will enter into force.

My recommendation: plan based on current law. If the Digital Omnibus comes and brings relief, great. But don't wait for it. Those who start classification and implementation today will be better positioned in any scenario than those speculating on possible relief.

Conclusion

Classifying your AI systems under the AI Act is not an academic exercise. It's the foundation for everything that follows: risk management, documentation, conformity.

The good news: the assessment schema is clearly structured. Check Art. 5, apply Art. 6, go through Annex III, document exceptions, determine your role. This is doable - even without an army of external consultants.

The less good news: August 2026 is four months away. Start now.


Author

Werner Plutat

Legal Engineer x AI

