
Can AI Do Our Alignment Homework? (with Ryan Kidd)

Future of Life Institute Podcast


Value of interpretability, BCI and moonshots

Ryan discusses the value of broad interpretability research, his skepticism about BCI timelines, and why moonshots should stay in the portfolio.

