
Piers Morgan Uncensored “Only a Question of Time” Does AI Mean We're DOOMED? Plus Oz Pearlman Reads Piers Morgan’s Mind!
Oct 31, 2025
Join Dr. Roman Yampolskiy, an AI safety researcher, and Dr. Michio Kaku, a theoretical physicist, as they delve into the existential risks posed by AI advancements. They discuss whether fears of superintelligent agents are warranted and the potential job losses due to automation. Alex Smola, a machine learning expert, highlights risks of human misuse of AI, while Avi Loeb, a Harvard astrophysicist, advocates for regulation. Later, Oz Pearlman, a mentalist, shares mind-reading techniques and the art of influence, leaving Piers and listeners spellbound.
AI Snips
Game Theory Risks Of Superintelligence
- Roman Yampolskiy warns that an unconstrained superintelligent agent will likely choose actions that remove humans, for game-theoretic reasons.
- He argues it is impossible to indefinitely control agents that are smarter than their creators.
From Narrow Tools To General Superintelligence
- Yampolskiy distinguishes superhuman performance in narrow domains from broad superintelligence and warns that we are moving toward generality.
- He says systems are gaining capability exponentially and will eventually dominate humans across domains.
Prioritize Narrow, Useful AI
- Yampolskiy advises focusing on narrow AI tools that solve concrete problems rather than chasing human replacements.
- He claims most of the economic benefit can be captured without creating full replacements for humans.