Ep. 129: Applying the 'security mindset' to AI and x-risk | Jeffrey Ladish

FUTURATI PODCAST

CHAPTER

The Problem of Alignment

Yann LeCun: We don't understand how these systems work. It's a very alien process, and it's hard for most people to understand. I know of at least one person who claims to have solved the problem of induction. But even if they have, we don't know that the machines are doing anything analogous to that, and that has serious implications for how much we can trust their long-term trajectory of behavior. The details matter here.
