

Mistakes were made? A critical look at how EA approached AI safety | David Krueger | EAG London 23
Jun 17, 2023
David Krueger discusses his research on deep learning, AI alignment, and AI existential safety. He examines the AI safety community's approach to the issue, highlighting the limitations of a techno-solutionist mindset, and emphasizes the importance of collaborating with academia and civil society as well as the trade-offs of working with big tech companies. Krueger frames AI safety in terms of structural risk and expresses concern about individuals' negligence in engaging with existential risks. The talk also covers the challenges of predicting threat models and how AI's existential risks might unfold. Krueger suggests regulating compute in AI systems and emphasizes the need for a global consortium to control the supply chain for advanced compute. He closes with the concept of emergent agency in AI algorithms and the importance of AI governance research in informing policy.
Chapters
Introduction
00:00 • 2min
Approaching AI Safety and Existential Risks
01:43 • 7min
AI Safety: Thoughts, Ideas, and Eight Important Concepts
08:50 • 6min
Approaching AI Safety: Structural Risk and the Gray Area
14:21 • 3min
The Challenges of Predicting Threat Models and the Unfolding of AI's Existential Risks
16:57 • 10min
Regulating Compute and Global Consortiums
26:54 • 25min
Emergent Agencies and AI Governance
51:31 • 5min