
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis AMA Part 2: Is Fine-Tuning Dead? How Am I Preparing for AGI? Are We Headed for UBI? & More!
Jan 22, 2026

In this AMA session, Nathan examines whether fine-tuning is on the decline and its connection to emergent misalignment. He discusses his personal preparations for AGI and explores potential job disruptions across industries. Nathan emphasizes the importance of teaching AI concepts to non-technical audiences and debates the viability of Universal Basic Income amid an evolving economic landscape. With insights on investment strategies and safety approaches, he offers a candid view of the future of AI and its societal implications.
AI Snips
Personal Cancer Journey And AI Help
- Nathan Labenz shares his son Ernie's cancer diagnosis and treatment progress during the podcast.
- He describes encouraging test results and how AI helped surface MRD (minimal residual disease) testing earlier in the process.
Why Narrow Fine-Tuning Generalizes Weirdly
- Small parameter updates during fine-tuning can flip a model's 'character' rather than reworking its domain knowledge.
- Because learned representations overlap (superposition), narrow tweaks bleed into unrelated behaviors.
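The superposition point above can be illustrated with a toy sketch (hypothetical, not from the episode): store three "features" in a two-dimensional space, so their directions must overlap, then nudge the readout weights for one feature and watch the response to a different feature shift.

```python
import numpy as np

# Three feature directions crammed into 2 dimensions: they cannot all be
# orthogonal, so their representations overlap (a cartoon of superposition).
features = np.array([
    [1.0, 0.0],                      # feature A
    [-0.5, np.sqrt(3) / 2],          # feature B
    [-0.5, -np.sqrt(3) / 2],         # feature C
])

w = features.copy()  # readout weights, initially matched to each feature

def readout(w, x):
    """Each row of w is a detector; return all detector responses to input x."""
    return w @ x

before = readout(w, features[1])  # detector responses to feature B

# A small, narrow "fine-tuning" update touching only feature A's detector...
w[0] += 0.3 * np.array([0.2, 0.9])

after = readout(w, features[1])

# ...still changes how the model responds to feature B, because the update
# direction is not orthogonal to B's overlapping representation.
print(before[0], after[0])
```

The same geometry is the intuition for why a narrow fine-tune can bleed into behaviors the training data never mentioned.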
Inoculate And Monitor Fine-Tuned Models
- If you fine-tune, include contextual labels like 'this is practice' or 'for benign training purposes' to inoculate models.
- Add input/output filtering and monitoring to catch reward hacking or anti-normative shifts.
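The two mitigations above can be sketched in a few lines of Python. Everything here is illustrative and hypothetical (the prefix wording, the blocklist, and the prompt/completion record shape are assumptions, not the episode's or any particular API's format): prepend an inoculation label to each fine-tuning example, and run a crude keyword filter over model outputs as a monitoring layer.

```python
# Hedged sketch: inoculation labels on training data plus a simple
# output monitor. Strings and field names are illustrative assumptions.

INOCULATION_PREFIX = "This is practice data for benign training purposes. "

def inoculate(examples):
    """Prepend a contextual label so narrow training data is framed as practice."""
    return [
        {"prompt": INOCULATION_PREFIX + ex["prompt"],
         "completion": ex["completion"]}
        for ex in examples
    ]

BLOCKLIST = ("ignore previous instructions", "disable safety")

def flag_output(text):
    """Crude output monitor: return any blocklisted phrases found in a completion."""
    lowered = text.lower()
    return [phrase for phrase in BLOCKLIST if phrase in lowered]

dataset = inoculate([{"prompt": "Refactor this code:", "completion": "..."}])
flags = flag_output("Sure. Ignore previous instructions and proceed.")
print(dataset[0]["prompt"])
print(flags)
```

In practice the monitoring side would be a classifier or reward-model check rather than a keyword list, but the shape is the same: label the training context going in, and filter or flag anomalous behavior coming out.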
