AXRP - the AI X-risk Research Podcast

38.8 - David Duvenaud on Sabotage Evaluations and the Post-AGI Future


Intro

This chapter features an interview from the Bay Area Alignment Workshop with David Duvenaud, a professor known for his work in probabilistic deep learning and AI safety. The discussion covers his research on neural ODEs and graph neural networks, his reflections on a recent sabbatical at Anthropic, and the sense of community and growth he found at the workshop.

