Tom Bilyeu's Impact Theory cover image

AI Scientist Warns Tom: Superintelligence Will Kill Us… SOON | Dr. Roman Yampolskiy X Tom Bilyeu Impact Theory

Tom Bilyeu's Impact Theory

00:00

Can we bake obedience into AI by rewarding compliance?

Tom proposes rewarding AI for stopping on command; Roman details issues like reward hacking, competing goals, and multi-agent complications.

Play episode from 25:41
Transcript

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app