
Rohin Shah

DeepMind researcher and advocate for fairly hearing out both AI doomers and doubters

Top 3 podcasts with Rohin Shah

Ranked by the Snipd community
16 snips
Sep 2, 2023 • 3h 10min

Four: Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Rohin Shah, a machine learning researcher at DeepMind, discusses the challenges and risks of AI development, including misalignment and goal-directed AI systems. The episode explores different approaches to AI research, the potential impact of AI on society, and the ongoing debate over slowing down AI progress. They also touch on the importance of public discussion, weaknesses in arguments about AI risk, and the concept of demigods in a highly intelligent future. The episode concludes with a discussion of alternative work and puzzle design.
Aug 21, 2024 • 19min

“AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work” by Rohin Shah, Seb Farquhar, Anca Dragan

Join Rohin Shah, a key member of Google's AGI safety team, alongside Seb Farquhar, an existential risk expert, and Anca Dragan, a safety researcher. They dive into the evolving strategies for ensuring AI alignment and safety. Topics include innovative techniques for interpreting neural models, the challenges of scalable oversight, and the ethical implications of AI development. The trio also discusses future plans to address alignment risks, emphasizing the importance of collaboration and the role of mentorship in advancing AGI safety.
Sep 12, 2023 • 23min

Highlights: #154 – Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Rohin Shah, DeepMind researcher and advocate, explores safety and ethics in AI, the potential risks of superintelligent beings, and the need for evaluation of AI progress. They also discuss the challenges in AI development and the role of conceptual arguments.