
EU AI Act risk tier classifier.

A three-question classifier we use inside our audits. Built on the AI Act's risk-tier categories and Annex III. Educational — not legal advice.

Interactive Tool

What risk tier does your AI feature fall under?


Question 1 of 3

What does this feature do?

Pick the closest match. If multiple apply, pick the most consequential.


Educational tool — not legal advice. We validate these classifications with your counsel during the audit.
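
For the engineers: the tool's ordering logic boils down to checking the tiers from most to least restrictive, which is where the "most consequential" rule in the first question comes from. The sketch below is a simplification in TypeScript, not the tool's real branching; it collapses the three questions into one yes/no check per tier boundary, and the field names are ours for illustration.

  // A simplification, not the tool's real branching: one yes/no check per
  // tier boundary, evaluated from most to least restrictive. First match wins.
  type Tier = "prohibited" | "high" | "limited" | "minimal";

  interface Answers {
    // Assumed Q1: does the feature match an Article 5 prohibited practice?
    matchesArticle5Practice: boolean;
    // Assumed Q2: Annex III use case, or safety component of a regulated product?
    matchesAnnexIIIUseCase: boolean;
    // Assumed Q3: does it interact with users or show them AI-generated content?
    triggersArticle50Transparency: boolean;
  }

  function classifyTier(a: Answers): Tier {
    if (a.matchesArticle5Practice) return "prohibited";
    if (a.matchesAnnexIIIUseCase) return "high";
    if (a.triggersArticle50Transparency) return "limited";
    return "minimal";
  }

The ordering is the point: a hiring copilot is both a chatbot and an Annex III use case, and it lands in high risk, not limited.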


Key dates

Enforcement timeline.

The AI Act entered into force on 1 August 2024 and applies progressively. These are the dates that matter for most EU SaaS products.

  1. 2 Feb 2025

    Prohibited practices enter force

    Article 5 prohibitions apply. Systems deploying manipulative techniques, social scoring, and untargeted biometric scraping are banned in the EU market.

  2. 2 Aug 2025

    GPAI and governance rules apply

    General-purpose AI model obligations begin. Member states must designate competent authorities. Transparency and documentation obligations start biting.

  3. 2 Aug 2026

    High-risk obligations and Article 50 transparency

    Most remaining provisions apply. Annex III high-risk systems must have conformity assessments, technical documentation, EU-database registration, and CE marking. Article 50 transparency disclosures become mandatory for user-facing AI.

  4. 2 Aug 2027

    Embedded high-risk systems

    Extended transition closes for high-risk AI systems embedded in regulated products (e.g. medical devices, machinery). Digital Omnibus may shift dates — track closely.
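
If you want to track these dates programmatically, the timeline above reduces to a small data table. A minimal sketch in TypeScript; the dates are taken from this page, the names are ours, and the Digital Omnibus caveat still applies.

  // The four milestones above as data. Re-check the dates if the Digital
  // Omnibus moves anything.
  interface Milestone {
    applies: string;   // ISO date on which the obligations start to apply
    summary: string;
  }

  const AI_ACT_MILESTONES: Milestone[] = [
    { applies: "2025-02-02", summary: "Article 5 prohibited practices" },
    { applies: "2025-08-02", summary: "GPAI and governance rules" },
    { applies: "2026-08-02", summary: "Annex III high-risk obligations, Article 50 transparency" },
    { applies: "2027-08-02", summary: "High-risk systems embedded in regulated products" },
  ];

  // Which milestones already apply on a given date?
  function milestonesInForce(asOf: Date): Milestone[] {
    return AI_ACT_MILESTONES.filter((m) => new Date(m.applies) <= asOf);
  }

For example, milestonesInForce(new Date("2026-01-01")) returns only the first two entries.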


The four tiers

What each tier actually means.

Prohibited

Examples
  • Social scoring by public authorities
  • Real-time remote biometric identification in public spaces (narrow exceptions)
  • Exploitative manipulation of vulnerable groups
  • Untargeted scraping of facial images for biometric databases
What this means

Cannot be placed on the EU market. Redesign or remove.

High risk

Examples
  • AI used in administration of justice or democratic processes
  • AI used in worker hiring, promotion, or termination
  • AI used in creditworthiness evaluation or essential public-service access
  • AI used in migration, asylum, or border control
  • Safety components of regulated products under EU harmonisation legislation
What this means

Risk-management system, data governance, technical documentation, event logging, human oversight, robustness, CE marking, EU-database registration, conformity assessment.

Limited risk

Examples
  • Generative AI chatbots or copilots
  • AI-generated content shown to end users
  • Emotion-recognition systems (outside safety-critical contexts)
  • Biometric categorisation systems
What this means

Transparency obligations under Article 50 — inform users that they are interacting with AI, label AI-generated or modified content, document prompt and output flows.

Minimal risk

Examples
  • AI-enabled spam filtering
  • Recommendation systems over non-personal data
  • Purely internal analytics and search
  • Video-game AI
What this means

No mandatory AI-Act obligations beyond existing law. Voluntary codes of conduct are encouraged and can serve as evidence of diligence if enforcement ever comes up.
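
Condensed into a lookup, the four tiers map to roughly the following obligation sets. This is a shorthand of the descriptions above, not the Act's wording; the keys and phrasing are ours.

  // Shorthand of the tier descriptions above; keys and phrasing are ours.
  type Tier = "prohibited" | "high" | "limited" | "minimal";

  const TIER_OBLIGATIONS: Record<Tier, string[]> = {
    prohibited: ["Cannot be placed on the EU market. Redesign or remove."],
    high: [
      "Risk-management system and data governance",
      "Technical documentation and event logging",
      "Human oversight and robustness",
      "Conformity assessment, CE marking, EU-database registration",
    ],
    limited: [
      "Inform users they are interacting with AI (Article 50)",
      "Label AI-generated or modified content",
      "Document prompt and output flows",
    ],
    minimal: [
      "No mandatory AI-Act obligations beyond existing law",
      "Voluntary codes of conduct encouraged",
    ],
  };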


References: Regulation (EU) 2024/1689 (AI Act), Annex III, Article 5, Article 50. Commission guidance: European Commission — AI Act Service Desk. This page is educational. Final classification requires review of your specific implementation, data flows, and deployment context. We validate classifications with your counsel during an audit.