Risk-Based AI Regulation
Risk-based AI regulation is the EU AI Act's method of matching legal obligations to the intensity and scope of risks created by AI systems and general-purpose AI models.
Key points
- The Act states that a clearly defined risk-based approach should tailor binding rules to the intensity and scope of the risks AI systems can generate [src-085].
- The strongest layer is prohibition: certain practices, including manipulative or exploitative techniques, social scoring, untargeted scraping of facial images, emotion recognition in workplaces and schools, biometric categorisation inferring sensitive attributes, and profiling-based predictive policing, are treated as unacceptable risk [src-085].
- The next layer is High-Risk AI Systems, where systems in product-safety contexts or Annex III domains must meet lifecycle requirements before and after market entry [src-085].
- A separate transparency layer applies to AI systems that interact with people, generate synthetic content, operate emotion-recognition or biometric-categorisation systems, or produce deepfakes or AI-generated text published to inform the public on matters of public interest [src-085].
- General-purpose AI models receive model-provider obligations even before they are integrated into downstream systems, with additional duties for systemic-risk models [src-085].
- The Act also includes innovation-support measures, notably regulatory sandboxes, and codes of conduct through which lower-risk systems can voluntarily adopt high-risk-style controls [src-085].
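The layered scheme above can be pictured as a strongest-first triage: prohibition is checked before high-risk classification, which is checked before transparency duties. The sketch below is a simplified illustration, not the Act's legal test; the flags on `SystemProfile` and the function `classify` are hypothetical, and in the Act itself the transparency layer can apply in addition to high-risk obligations rather than as a mutually exclusive tier.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "minimal risk"


# Hypothetical boolean flags standing in for the Act's legal criteria.
@dataclass
class SystemProfile:
    prohibited_practice: bool = False   # e.g. social scoring (Art. 5 layer)
    annex_iii_domain: bool = False      # e.g. employment, credit (high-risk layer)
    interacts_with_people: bool = False # e.g. chatbot (transparency layer)


def classify(profile: SystemProfile) -> RiskTier:
    # Tiers are evaluated strongest-first, mirroring the Act's layering;
    # a real assessment would also handle product-safety contexts, GPAI
    # duties, and the stacking of transparency with high-risk rules.
    if profile.prohibited_practice:
        return RiskTier.PROHIBITED
    if profile.annex_iii_domain:
        return RiskTier.HIGH
    if profile.interacts_with_people:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL
```

For example, a CV-screening tool flagged as an Annex III domain would land in the high-risk tier even if it also chats with candidates, because the stronger layer is checked first.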
Related concepts
- Prohibited AI Practices
- High-Risk AI Systems
- General-Purpose AI Model Governance
- AI Act Compliance Roles
- Enterprise Agent Governance
- AI Resilience Policy
Source references
- [src-085] European Parliament and Council of the European Union – "Regulation (EU) 2024/1689 … (Artificial Intelligence Act)" (2024-07-12)