
80k After Hours
Highlights: #154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters
Sep 12, 2023
Rohin Shah, a DeepMind researcher, discusses safety and ethics in AI, the potential risks posed by superintelligent systems, and the need for evaluation as AI capabilities progress. The conversation also covers challenges in AI development and the role of conceptual arguments.
22:45
Podcast summary created with Snipd AI
Quick takeaways
- Recent advancements in AI have created a sense of urgency among researchers, leading to increased interest in safety measures and alignment work.
- Collaboration between safety and alignment researchers and AI developers is crucial both for making AI systems safe to deploy and for addressing ethical considerations in the field.
Deep dives
Impact of AI developments on DeepMind's mood
The podcast discusses how recent advances in artificial intelligence have affected the mood at DeepMind, a top AI lab. The quickening pace of AI progress and growing concern about existential risks from AI have created a sense of urgency among researchers. Some who previously focused narrowly on machine learning now take seriously the possibility of building AGI in the near future, while others are puzzled by the strength of these reactions. The overall effect has been increased interest in safety measures and alignment work.