
Ep. 129: Applying the 'security mindset' to AI and x-risk | Jeffrey Ladish

The Problem of Alignment

Yann LeCun: We don't understand how these systems work. It's a very alien process, and it's hard for most people to understand. I know of at least one person who claims to have solved the problem of induction. But even if they have, we don't know that the machines are doing anything analogous to that. That has serious implications for how much we can trust their long-term trajectory of behavior. The details matter in this situation.
