The Future of Life Institute discusses the core elements of the EU AI Act, including restrictions on prohibited AI systems, high-risk AI systems, and general-purpose AI. Topics cover banned AI applications, deployment regulations for high-risk AI systems, integration of AI in emergency services, and obligations for providers of general-purpose AI models with systemic risks.
The AI Act prohibits manipulative AI and outlines criteria for high-risk AI systems, focusing on risk management and oversight.
Providers of general-purpose AI (GPAI) models must meet documentation and training-data transparency requirements, with additional obligations for models assessed as posing systemic risk.
Deep dives
AI Act Classifies AI According to Risk
The AI Act classifies AI based on risk levels, with unacceptable risks such as social scoring and manipulative AI being prohibited outright. The majority of obligations in the Act fall on providers and developers of high-risk AI systems, imposing strict conditions on their deployment. Users — those deploying AI in a professional capacity — also have obligations, though fewer than providers and developers, particularly around ensuring that people are aware they are interacting with AI. A dedicated section of the Act addresses general-purpose AI (GPAI) and sets out specific requirements for providers of GPAI models.
Obligations on Providers of High-Risk AI Systems
Providers and developers intending to market high-risk AI systems in the EU, even from third countries, bear the brunt of the obligations outlined in the AI Act. Users deploying such systems professionally also have responsibilities, albeit not as extensive as providers'. These obligations apply to users within the EU, and to users in third countries where the AI system's output is used in the EU.
Prohibited AI Systems and High-Risk Designations
The AI Act prohibits certain AI systems, such as those using manipulative techniques or exploiting vulnerabilities related to age or disability. It also sets out criteria for high-risk AI systems, defining providers' requirements for risk management, data governance, technical documentation, instructions for use, and human oversight. Additionally, Annex III lists the specific use cases that qualify as high-risk, ensuring comprehensive regulation across various sectors.
Governing GPAI Models and Compliance
General-purpose AI (GPAI) models must comply with specific documentation requirements, the EU copyright directive, and the publication of training-data summaries. GPAI models released under free and open licences are subject to additional compliance requirements only if they are deemed to pose systemic risk. Providers of systemic-risk GPAI models must conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections; they may demonstrate compliance through voluntary codes of practice until European harmonised standards are introduced.
This primer by the Future of Life Institute highlights core elements of the EU AI Act. It includes a high-level summary alongside explanations of the different restrictions on prohibited AI systems, high-risk AI systems, and general-purpose AI.