
Rohin Shah

Machine learning researcher at Google DeepMind, focusing on technical AI safety. Author of the Alignment Newsletter.

Top 3 podcasts with Rohin Shah

Ranked by the Snipd community
16 snips
Sep 2, 2023 • 3h 10min

Four: Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Rohin Shah, a machine learning researcher at Google DeepMind, discusses the challenges and risks of AI development, including misalignment and goal-directed AI systems. The episode explores different approaches to AI research, the potential impact of AI on society, and the ongoing debate over whether to slow down AI progress. They also touch on the importance of public discussion, weaknesses in common arguments about AI risk, and the idea of demigod-like intelligences in a highly capable future. The episode concludes with a discussion of work outside AI safety and puzzle design.
6 snips
Jun 9, 2023 • 3h 10min

#154 - Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Rohin Shah, a machine learning researcher at Google DeepMind and author of the Alignment Newsletter, shares insights from the cutting edge of AI safety. He discusses the pressing challenge of aligning AI with human intentions amid rapid technological advancement. Shah explores the spectrum of opinions on AI risk, weighing the balance between caution and optimism. He also critiques misconceptions in AI discourse, emphasizes the need for ethical frameworks, and reflects on the implications of AI's rapid evolution, urging informed public engagement.
Aug 21, 2024 • 19min

“AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work” by Rohin Shah, Seb Farquhar, Anca Dragan

Join Rohin Shah, a key member of Google DeepMind's AGI safety team, alongside Seb Farquhar, an existential risk expert, and Anca Dragan, a safety researcher. They dive into the evolving strategies for ensuring AI alignment and safety. Topics include techniques for interpreting neural models, the challenges of scalable oversight, and the ethical implications of AI development. The trio also discusses future plans to address alignment risks, emphasizing the importance of collaboration and the role of mentorship in advancing AGI safety.
