Joe Carlsmith on How We Change Our Minds About AI Risk

Future of Life Institute Podcast

The Dangers of Misalignment

I think the notion of misalignment is importantly related to the notion of agency and goal pursuit. The way I see this as a possible comfort is not that it's impossible to build the agentic, scary things. Central to the general worry is that the dangerous type of thing is also really closely related to the useful type of thing. And so I think it's hard to separate these things too far, but there's still hope there.

