

The EU AI Act and Mitigating Bias in Automated Decisioning with Peter van der Putten - #699
Aug 27, 2024
In this engaging discussion, Peter van der Putten, director of the AI Lab at Pega and an assistant professor at Leiden University, dives deep into the implications of the newly adopted European AI Act. He explains the ethical principles that motivate the regulation and the complexities of applying fairness metrics in real-world AI applications. The conversation highlights the challenges of mitigating bias, the significance of transparency, and how the Act could shape global AI practices much as GDPR shaped data privacy.
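
To make the fairness-metric discussion concrete, here is a minimal sketch (not code from the episode) of one widely used group-fairness check, the disparate impact ratio. The group labels, toy decision data, and the 0.8 rule-of-thumb threshold are illustrative assumptions, not values from the conversation.

```python
# Minimal sketch of one common group-fairness check: the disparate impact
# ratio (positive-decision rate for a protected group divided by the rate
# for a reference group). Groups, data, and threshold are illustrative.

def positive_rate(decisions: list[int]) -> float:
    """Share of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of positive-decision rates between two groups."""
    ref_rate = positive_rate(reference)
    return positive_rate(protected) / ref_rate if ref_rate else 0.0

if __name__ == "__main__":
    # Toy decision outcomes (1 = approved, 0 = declined) for two groups.
    group_a = [1, 0, 1, 0, 0, 1]   # protected group
    group_b = [1, 1, 1, 0, 1, 1]   # reference group
    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")
    # A common rule of thumb flags ratios below 0.8 for closer review.
    print("Flag for review" if ratio < 0.8 else "Within rule-of-thumb range")
```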
AI Snips
EU AI Act: Risk-Based Approach
- The EU AI Act takes a risk-based approach, focusing on the potential harm of AI systems.
- It emphasizes ethical principles like transparency, accountability, robustness, and fairness.
Broad Definition of AI Systems
- The EU AI Act defines AI systems broadly, focusing on the impact automated decision-making has on users.
- It treats any automated decision-making system as an AI system, regardless of the underlying technology.
AI System Objectives and Risk Assessment
- Determine the objective of your AI system and assess its risk of harm.
- Focus on high-risk systems requiring greater scrutiny, such as those affecting access to essential services (see the sketch after this list).
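
As a hedged illustration of this triage step (not a rule prescribed by the Act or stated in the episode), the sketch below maps a use case to a coarse risk tier based on its domain and whether it gates access to an essential service. The tier labels and domain list are assumptions made for illustration only.

```python
# Sketch of a coarse risk-triage step: classify an AI use case as needing
# extra scrutiny when it falls in a sensitive domain or gates access to an
# essential service. Tiers and domains are illustrative assumptions.

from dataclasses import dataclass

HIGH_RISK_DOMAINS = {"credit", "employment", "education", "essential_services"}

@dataclass
class UseCase:
    name: str
    domain: str
    affects_access_to_service: bool

def risk_tier(use_case: UseCase) -> str:
    """Very coarse triage: sensitive domains or service-gating decisions get extra scrutiny."""
    if use_case.domain in HIGH_RISK_DOMAINS or use_case.affects_access_to_service:
        return "high-risk: document, add human oversight, test for bias"
    return "lower-risk: apply baseline transparency practices"

if __name__ == "__main__":
    print(risk_tier(UseCase("loan pre-approval", "credit", True)))
    print(risk_tier(UseCase("music recommendation", "entertainment", False)))
```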