Something You Should Know

How to Get Better Results with AI & The Science of Healing Trauma

Oct 9, 2025
Join Christopher Summerfield, a cognitive neuroscience expert from Oxford and Google DeepMind, as he breaks down how to optimize AI prompting for smarter responses. He highlights the importance of context and offers practical tips like being polite when engaging with models. Dr. Amy Apigian, a double board-certified physician, shifts the focus to trauma, exploring its physiological roots and emphasizing that healing is possible. She shares insights on reconnecting mind and body, debunking the myth that time alone heals trauma.
AI Snips
INSIGHT

How Language Models Produce Humanlike Replies

  • Modern chat models predict the next token by learning statistical relationships across large text corpora with the Transformer architecture (a minimal sketch follows this list).
  • They produce humanlike replies by predicting what a person would likely say, not by understanding the way a human does.
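A minimal sketch of what "predicting the next token" looks like in practice, using the small open GPT-2 model via the Hugging Face `transformers` library as an illustrative stand-in (the model choice and prompt are assumptions, not something named in the episode):

```python
# Minimal next-token prediction sketch with GPT-2 (illustrative stand-in).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Probability distribution over the vocabulary for the *next* token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r:>10}  p={prob.item():.3f}")
```

The model never "answers" anything; it repeatedly picks a likely next token given everything so far, which is why the surrounding context you provide matters so much.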
INSIGHT

Confidence Comes From Training, Not Certainty

  • Models tend to sound overconfident because they're trained on human preference data that rewards confident, eloquent answers.
  • That training pushes them toward polished answers even when correctness is uncertain, which is one source of hallucinations (see the sketch after this list for inspecting token-level probabilities).
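A rough, self-contained sketch of one crude way to look past the polished surface: inspect the probability the model assigned to each token it emitted. This again uses GPT-2 and an illustrative prompt as assumptions; it is not a method from the episode, just a reminder that fluent output and calibrated confidence are different things.

```python
# Inspect per-token probabilities of a greedy continuation as a rough confidence signal.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the moon was"
inputs = tokenizer(prompt, return_tensors="pt")

gen = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,                      # greedy decoding
    output_scores=True,                   # keep per-step logits
    return_dict_in_generate=True,
    pad_token_id=tokenizer.eos_token_id,  # silence GPT-2's missing-pad warning
)

# Probability the model assigned to each token it actually emitted.
new_tokens = gen.sequences[0, inputs["input_ids"].shape[1]:]
for step, token_id in enumerate(new_tokens):
    probs = torch.softmax(gen.scores[step][0], dim=-1)
    print(f"{tokenizer.decode(token_id.item())!r:>12}  p={probs[token_id].item():.3f}")
```

Low per-token probabilities under a confident-sounding sentence are a hint, not proof, that the claim deserves a second look.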
INSIGHT

Massive Pretraining Enables Creative Answers

  • A model's foundational knowledge comes from massive pretraining on internet text, which lets it stitch facts together into novel combinations.
  • As a result, models can often give sensible advice for a specific problem even if they have never seen that exact scenario before.