
EU AI Act Risk Classification Guide

How to determine the risk category for your AI system under the EU AI Act (Regulation 2024/1689).

Quick Decision Tree

(Diagram: check Article 5 prohibitions first, then the Annex III high-risk categories, then Article 50 transparency obligations; anything else is minimal risk.)
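The same flow can be sketched as a small Python helper (a hypothetical function, not part of Attestix; it checks the four tiers covered in this guide in order of severity and returns the first match):

```python
def classify_risk(prohibited_practice: bool,
                  annex_iii_high_risk: bool,
                  transparency_obligation: bool) -> str:
    """Walk the decision tree in order of severity, returning the first match."""
    if prohibited_practice:
        return "unacceptable"   # banned outright (Article 5)
    if annex_iii_high_risk:
        return "high"           # Articles 8-15 obligations apply
    if transparency_obligation:
        return "limited"        # Article 50 disclosure duties
    return "minimal"            # voluntary codes of conduct only

print(classify_risk(False, True, False))  # prints "high"
```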

Unacceptable Risk (PROHIBITED - Article 5)

These AI systems are banned entirely. Attestix will block creation of compliance profiles for these:

  • Social scoring by governments or on their behalf
  • Manipulation through subliminal, deceptive, or exploitative techniques
  • Exploitation of vulnerabilities (age, disability, social/economic situation)
  • Untargeted scraping of facial images from internet or CCTV
  • Emotion inference in workplace or education (with limited exceptions)
  • Biometric categorization using sensitive characteristics (race, political opinions, etc.)
  • Individual predictive policing based solely on profiling
  • Real-time remote biometric identification in public spaces for law enforcement (with limited exceptions)

High Risk (Annex III Categories)

If your AI system falls under any of these categories, it is high-risk:

| Category | Examples |
|----------|----------|
| 1. Biometrics | Facial recognition, fingerprint matching, voice identification, remote biometric ID |
| 2. Critical infrastructure | Safety components in electricity, gas, water, transport, digital infrastructure management |
| 3. Education & training | Student assessment, exam scoring, admission decisions, learning path determination |
| 4. Employment | CV screening, interview assessment, recruitment ranking, termination decisions, performance monitoring |
| 5. Essential services | Credit scoring, insurance pricing, social benefit eligibility, emergency dispatch prioritization |
| 6. Law enforcement | Evidence reliability assessment, recidivism prediction, profiling during investigation, lie detection |
| 7. Migration & border | Visa application assessment, border crossing risk assessment, irregular migration detection |
| 8. Justice & democracy | Judicial decision support, legal research interpretation, election/referendum outcome influence |
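As an illustration only, a keyword screen can flag likely Annex III matches in a free-text system description. The keywords below are hypothetical and far from exhaustive; real classification requires the Act's full Annex III wording, not substring matches:

```python
# Illustrative keyword screen for a few Annex III categories (assumed
# keywords, not legal criteria).
ANNEX_III_KEYWORDS = {
    "Biometrics": ("facial recognition", "fingerprint", "voice identification"),
    "Employment": ("cv screening", "recruitment", "performance monitoring"),
    "Essential services": ("credit scoring", "insurance pricing"),
}

def matching_categories(description: str) -> list[str]:
    """Return Annex III category names whose keywords appear in the text."""
    text = description.lower()
    return [category for category, keywords in ANNEX_III_KEYWORDS.items()
            if any(keyword in text for keyword in keywords)]

print(matching_categories("credit scoring for loan applicants"))  # prints "['Essential services']"
```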

High-Risk Requirements

High-risk AI systems must comply with Articles 8-15, together with related conformity, registration, and monitoring obligations:

  • Risk management system (Article 9)
  • Data governance and management (Article 10)
  • Technical documentation (Article 11)
  • Record keeping and automatic logging (Article 12)
  • Transparency and user information (Article 13)
  • Human oversight capability (Article 14)
  • Accuracy, robustness, and cybersecurity (Article 15)
  • Conformity assessment before deployment (Article 43)
  • Registration in EU database (Article 49)
  • Post-market monitoring (Article 72)
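The list above can be tracked as a simple checklist structure. This is a sketch with a hypothetical helper name (`outstanding`); the article numbers and requirement names are taken directly from the list:

```python
# Article number -> requirement, as listed above (a convenience mapping,
# not a substitute for reading the Act).
HIGH_RISK_REQUIREMENTS = {
    9: "Risk management system",
    10: "Data governance and management",
    11: "Technical documentation",
    12: "Record keeping and automatic logging",
    13: "Transparency and user information",
    14: "Human oversight capability",
    15: "Accuracy, robustness, and cybersecurity",
    43: "Conformity assessment before deployment",
    49: "Registration in EU database",
    72: "Post-market monitoring",
}

def outstanding(completed_articles: set[int]) -> list[str]:
    """Requirements not yet evidenced, in ascending article order."""
    return [name for article, name in sorted(HIGH_RISK_REQUIREMENTS.items())
            if article not in completed_articles]
```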

Conformity Assessment for High-Risk

Most Annex III high-risk systems can use internal control (self-assessment under Annex VI). However, biometric systems (Category 1) require third-party assessment by a notified body under Annex VII.

Attestix currently requires third-party assessment for all high-risk systems (more conservative than the Act requires). This will be refined in a future release.
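The routing rule under the Act can be expressed directly (a sketch with a hypothetical function name; note that Attestix itself currently applies the stricter notified-body route to every high-risk system):

```python
def conformity_route(annex_iii_category: int) -> str:
    """Assessment route for an Annex III high-risk system under the Act.

    Category 1 (biometrics) needs a notified body (Annex VII); the other
    Annex III categories may self-assess under internal control (Annex VI).
    """
    if annex_iii_category == 1:
        return "third-party assessment (Annex VII, notified body)"
    return "internal control (Annex VI, self-assessment)"
```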

Limited Risk (Article 50 Transparency)

These systems must meet transparency obligations:

| System Type | Required Disclosure |
|-------------|---------------------|
| AI chatbots | Must inform users they are interacting with AI |
| Emotion recognition | Must inform subjects their emotions are being inferred |
| Biometric categorization | Must inform subjects they are being categorized |
| Deepfake generators | Must label AI-generated content as artificial |
| AI-generated text (published to inform on public interest matters) | Must label as AI-generated |

Minimal Risk

AI systems that do not fall into any of the above categories are minimal risk, for example:

  • Spam filters
  • AI in video games
  • Inventory management systems
  • AI-powered search (internal)
  • Code completion tools
  • Content recommendation (non-manipulative)

No specific regulatory obligations, though voluntary codes of conduct are encouraged.

Common Examples

| AI System | Risk Category | Why |
|-----------|---------------|-----|
| Medical diagnosis AI | High | Safety component of a regulated medical device (Annex I, Article 6(1)), not an Annex III category |
| Credit scoring model | High | Category 5 (essential services) |
| Resume screening tool | High | Category 4 (employment) |
| Customer service chatbot | Limited | Must disclose that users are interacting with AI (Article 50) |
| AI-generated marketing images | Limited | Synthetic content must be labeled as artificial (Article 50) |
| Code completion tool (Copilot) | Minimal | Not in Annex III |
| Email spam filter | Minimal | Not in Annex III |
| AI-powered recommendation engine | Minimal | Unless used manipulatively (then prohibited under Article 5) |
| Facial recognition access control | High | Category 1 (biometrics) |
| Autonomous vehicle perception | High | Safety component under vehicle type-approval legislation (Annex I, Article 6(1)) |
| Student grading AI | High | Category 3 (education) |
| AI lie detector | High (or Unacceptable) | Category 6 (law enforcement); some uses are prohibited |

Choosing Your Risk Category in Attestix

When creating a compliance profile, use one of these values:

create_compliance_profile(
  agent_id="attestix:...",
  risk_category="high",     # "minimal", "limited", or "high"
  provider_name="...",
  ...
)
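It can help to guard the value client-side before calling. The sketch below assumes only the three accepted strings shown above; `validate_risk_category` is a hypothetical helper, not part of the Attestix API:

```python
VALID_RISK_CATEGORIES = {"minimal", "limited", "high"}
# "unacceptable" is intentionally absent: Attestix blocks compliance
# profiles for prohibited (Article 5) systems altogether.

def validate_risk_category(value: str) -> str:
    """Reject anything outside the three accepted strings."""
    if value not in VALID_RISK_CATEGORIES:
        raise ValueError(
            f"risk_category must be one of {sorted(VALID_RISK_CATEGORIES)}, "
            f"got {value!r}")
    return value
```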

If you are unsure about your risk category, consult with a legal professional specializing in EU AI regulation. Incorrect classification can result in non-compliance.

Further Reading