
80,000 Hours Podcast

#81 - Ben Garfinkel on scrutinising classic AI risk arguments

Jul 9, 2020
Ben Garfinkel, a Research Fellow at Oxford's Future of Humanity Institute, argues that the classic arguments for AI risk deserve more rigorous scrutiny. While he regards AI safety as crucial for positively shaping the long-term future, he contends that many of the field's established concerns have not been thoroughly examined. The conversation covers the complexities of AI risk, historical parallels, and the challenge of aligning AI systems with human values. Garfinkel advocates critically reassessing existing risk narratives and calls for increased investment in AI governance to help ensure beneficial outcomes.
02:38:28
