
Ben Garfinkel

Research Fellow at Oxford’s Future of Humanity Institute. His research focuses on AI governance, moral philosophy, and security.

Top 3 podcasts with Ben Garfinkel

Ranked by the Snipd community
98 snips
May 13, 2023 • 2h 58min

#63 – Ben Garfinkel on AI Governance

Ben Garfinkel is a Research Fellow at the University of Oxford and Acting Director of the Centre for the Governance of AI. In this episode we talk about:

- An overview of the AI governance space, and disentangling concrete research questions that Ben would like to see more work on
- How the existing arguments for risks from transformative AI have held up, and Ben's personal motivations for working on global risks from AI
- GovAI's own work and opportunities for listeners to get involved

Further reading and a transcript are available on our website: hearthisidea.com/episodes/garfinkel. If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
23 snips
Jan 9, 2023 • 2h 37min

#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments

Join Ben Garfinkel, a Research Fellow at Oxford's Future of Humanity Institute, as he dives into the complex world of artificial intelligence risk. Garfinkel argues that classic AI risk narratives may be overstated and calls for more rigorous scrutiny of them. He challenges common perceptions around the governance of AI, emphasizing the importance of ethical frameworks and the potential consequences of misaligned AI objectives. With insights on historical parallels and funding disparities in AI safety, this conversation is a crucial exploration of our AI-driven future.
21 snips
Jul 9, 2020 • 2h 38min

#81 - Ben Garfinkel on scrutinising classic AI risk arguments

Ben Garfinkel, a Research Fellow at Oxford's Future of Humanity Institute, discusses the need for rigorous scrutiny of classic AI risk arguments. He emphasizes that while AI safety is crucial for positively shaping the future, many established concerns have not been thoroughly examined. The conversation highlights the complexities of AI risk, historical parallels, and the importance of aligning AI systems with human values. Garfinkel advocates a critical reassessment of existing narratives and calls for increased investment in AI governance to ensure beneficial outcomes.