General-Purpose AI Model Governance

General-purpose AI model governance is the EU AI Act's provider-level regime for foundation-style models that can serve many downstream purposes, with additional duties for models classified as posing systemic risk.

Key points

  • The Act defines general-purpose AI model obligations separately from high-risk AI system obligations, because model providers can shape downstream risks before a model is embedded in a specific application [src-085].
  • Article 53 requires GPAI providers to maintain technical documentation, provide downstream AI-system providers with information on capabilities and limitations, comply with EU copyright law, and publish a sufficiently detailed training-content summary using an AI Office template [src-085].
  • Some open-source GPAI models are exempt from parts of the documentation/information duties when weights, architecture, and usage information are publicly available, but that exception does not apply to systemic-risk models [src-085].
  • Article 51 presumes systemic risk when training compute exceeds 10^25 floating-point operations, while also allowing Commission designation based on equivalent capability or impact criteria [src-085].
  • Article 55 adds systemic-risk duties: model evaluation, adversarial testing, systemic-risk assessment and mitigation, serious-incident reporting, corrective measures, and cybersecurity for the model and its physical infrastructure [src-085].
  • The governance loop relies on codes of practice, harmonised standards, AI Office oversight, Commission enforcement, and fines for GPAI providers that intentionally or negligently breach obligations or fail to cooperate [src-085].
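The Article 51 compute presumption is the one purely quantitative trigger in this regime, so it can be expressed as a simple check. The sketch below is illustrative only: the threshold value (10^25 FLOPs) comes from the Act, but the function name and structure are hypothetical, and it deliberately omits the Commission's separate power to designate models below the threshold on capability or impact grounds.

```python
# Illustrative sketch of the Article 51 systemic-risk presumption.
# The 10^25 FLOP threshold is stated in the Act; everything else here
# (names, structure) is a hypothetical convenience, not legal text.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True when cumulative training compute exceeds the
    Article 51 threshold, triggering the systemic-risk presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
```

Note that this captures only the presumption: a model under the threshold can still be designated as systemic-risk by the Commission, and the presumption itself is rebuttable under the Act's procedures.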

Source references

  • [src-085] European Parliament and Council of the European Union – "Regulation (EU) 2024/1689 … (Artificial Intelligence Act)" (2024-07-12)