
Bits & Atomen: Can AI really lead to the end of the world?
Nov 14, 2025

In this discussion, Lode Lauwaert, a philosophy professor at KU Leuven specializing in technology and AI ethics, dives into the existential risks posed by superintelligent AI. He explores alarmist views on potential global catastrophe and the nuances of AGI versus narrow AI.
Episode notes
Loss Of Control Is A Real Possibility
- Superintelligent AI could become so cognitively advanced that humans lose meaningful control over it.
- Lode Lauwaert argues that loss of control is a real possibility even if the AI lacks human-like desires.
Goals Drive Dangerous Instrumental Behaviour
- AI systems pursue programmed goals and can take unexpected instrumental paths to achieve them.
- Lauwaert warns that banal objectives without proper constraints can lead to harmful, unforeseen behaviours.
The Paperclip Factory Example
- The classic paperclip thought experiment illustrates how a simple goal can produce extreme outcomes.
- Lauwaert recalls Nick Bostrom and Eliezer Yudkowsky using that example to show how weakly constrained goals can produce extreme behaviour.