High-Risk AI Systems

High-risk AI systems are the EU AI Act category for AI systems whose product context or domain of use can materially affect health, safety, or fundamental rights, triggering lifecycle compliance duties.

Key points

  • Article 6 treats an AI system as high-risk when it is a safety component of a product, or is itself a product, covered by listed Union harmonisation law and subject to third-party conformity assessment [src-085].
  • Annex III also makes systems high-risk in areas such as biometrics, critical infrastructure, education, employment, essential services and benefits, creditworthiness, life and health insurance, emergency dispatch, law enforcement, migration/asylum/border control, justice, and democratic processes [src-085].
  • An Annex III system may be exempted where it poses no significant risk of harm and only performs a narrow procedural task, improves the result of a prior human activity, detects decision-making patterns, or carries out preparatory tasks; however, any Annex III system that profiles natural persons is always high-risk [src-085].
  • High-risk systems require risk management, data governance, technical documentation, logging, deployer-facing transparency, human oversight, accuracy, robustness, cybersecurity, quality management, conformity assessment, CE marking, registration, post-market monitoring, and incident handling [src-085].
  • Deployers carry their own obligations: follow the provider's instructions for use, assign competent human oversight, ensure that input data under their control is relevant and sufficiently representative, monitor operation, retain logs when these are under their control, inform workers in workplace contexts, and conduct fundamental-rights impact assessments in specified deployments [src-085].
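The Article 6 classification logic described above can be sketched as a decision function. This is an illustrative simplification only, not a legal test: the dataclass fields and their names are hypothetical, and a real determination depends on the full text of Article 6, Annex I, and Annex III.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemProfile:
    """Illustrative facts about an AI system (field names are hypothetical)."""
    safety_component_of_regulated_product: bool   # covered by listed Union harmonisation law
    third_party_assessment_required: bool         # subject to third-party conformity assessment
    annex_iii_area: Optional[str]                 # e.g. "employment", or None if not listed
    profiles_natural_persons: bool                # profiling within an Annex III area
    narrow_task_no_significant_risk: bool         # narrow procedural/preparatory task, no significant harm risk

def is_high_risk(p: AISystemProfile) -> bool:
    # Article 6(1) route: safety component of (or itself) a product under
    # listed Union harmonisation law and subject to third-party assessment.
    if p.safety_component_of_regulated_product and p.third_party_assessment_required:
        return True
    # Article 6(2) route: use case listed in an Annex III area.
    if p.annex_iii_area is not None:
        # Profiling of natural persons is always high-risk in this category.
        if p.profiles_natural_persons:
            return True
        # Article 6(3) carve-out: narrow tasks posing no significant harm risk.
        if p.narrow_task_no_significant_risk:
            return False
        return True
    return False
```

For example, a CV-screening system in the "employment" area would classify as high-risk, while the same system restricted to a narrow preparatory task without profiling could fall under the carve-out.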

Source references

  • [src-085] European Parliament and Council of the European Union – "Regulation (EU) 2024/1689 … (Artificial Intelligence Act)" (2024-07-12)