

45 - Samuel Albanie on DeepMind's AGI Safety Approach
Jul 6, 2025
Samuel Albanie, a research scientist at Google DeepMind with a background in computer vision, dives into AGI safety and security. He discusses the pivotal assumptions underlying DeepMind's technical approach, emphasizing the need for continuous evaluation of AI capabilities. Albanie explores the concept of 'exceptional AGI' and the uncertainty around AI development timelines. He also examines the challenges of misuse and misalignment, advocating for robust mitigations and for societal readiness to keep pace with rapid advances in AI.
Current Paradigm Continuation Explained
- The "current paradigm continuation" assumption holds that AI progress will keep coming from learning and search methods, spanning approaches from transformers to evolutionary algorithms.
- Radical shifts such as brain uploads would break this assumption; natural evolution of neural network architectures would not.
No Human Ceiling Assumption
- The "no human ceiling" assumption means AI can surpass human capabilities on a broad range of tasks.
- Exceptional AGI is defined relative to skilled adults in relevant domains, not the general population.
Approximate Continuity in AI Progress
- Approximate continuity means that improvements in AI capabilities depend smoothly on inputs such as computation and R&D effort, not on calendar time.
- Because R&D inputs can themselves accelerate rapidly, capabilities can rise swiftly in calendar time while remaining approximately continuous in the inputs (see the toy sketch below).
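To make the second point concrete, here is a minimal toy model; it is not from the episode or from DeepMind's paper, and the functional forms and constants are illustrative assumptions. Capability is a smooth, slowly varying function of compute, but compute grows super-exponentially in calendar time, so year-over-year capability gains keep growing even though the capability curve itself never jumps.

```python
# Toy model of "approximate continuity" (illustrative assumptions only):
# capability is continuous and smooth in its *inputs* (here, compute),
# yet can still rise quickly in *calendar time* if the inputs accelerate.
import math

def capability(compute: float) -> float:
    """Smooth, slowly varying capability curve over compute (assumed form)."""
    return math.log10(compute)

def compute_at_year(year: float) -> float:
    """Accelerating input: the growth rate itself grows (e.g., as AI assists
    R&D), making compute super-exponential in calendar time (assumed form)."""
    return 10 ** (year + 0.1 * year ** 2)

# Capability never jumps, but the gain per calendar year keeps increasing.
for year in range(0, 11, 2):
    c_now = capability(compute_at_year(year))
    c_next = capability(compute_at_year(year + 1))
    print(f"year {year:2d}: capability {c_now:5.1f}, gain over next year {c_next - c_now:4.1f}")
```

The point of the toy model is that "continuous in inputs" and "gradual in calendar time" come apart as soon as the inputs accelerate, which is why the approximate-continuity assumption does not by itself rule out rapid capability gains in wall-clock terms.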