
1440 Explores Inside the ChatGPT Black Box
Nov 20, 2025
Stephen Wolfram, a renowned computer scientist and founder of Wolfram Research, dives deep into large language models (LLMs). He explains LLMs as advanced prediction systems and discusses how they learn from billions of online sources. Wolfram reveals the mechanics of neural networks, tokens, and the randomness that makes predictions feel natural. He also addresses the phenomenon of AI 'hallucinations' and the limits of LLMs in precise computation. Finally, he and the host explore the societal implications of AI and its potential to influence human behavior.
AI Snips
How LLMs Actually Work
- Large language models (LLMs) predict the next token rather than 'think' or 'understand'.
- They learn patterns by turning words into numeric tokens and encoding connections as weights across a neural network.
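The two bullets above can be sketched with a deliberately tiny toy model (not a real LLM, which uses a deep neural network trained on billions of documents): turn words into numeric token IDs, record which token tends to follow which, and "predict" by picking the most likely continuation. The corpus and all names here are invented for illustration.

```python
# Toy next-token predictor: tokenize words to numeric IDs, count which token
# follows which, and predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Tokenize: assign each distinct word a numeric ID.
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# "Train": count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(ids, ids[1:]):
    follows[prev][nxt] += 1

inv_vocab = {i: w for w, i in vocab.items()}

def predict_next(word):
    """Return the most frequent next word after `word` in the corpus."""
    counts = follows[vocab[word]]
    best_id = counts.most_common(1)[0][0]
    return inv_vocab[best_id]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

A real LLM replaces the count table with billions of learned weights and conditions on a long context window rather than a single preceding token, but the prediction objective is the same.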
Weights Encode Language Patterns
- LLMs store information as numeric weights that encode how strongly tokens connect to each other.
- Those weights, learned from massive text corpora, are what the model uses to predict likely continuations.
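A minimal sketch of "weights encode connection strengths": a matrix `W[i][j]` holds how strongly token `i` connects to token `j`, and normalizing a row turns those strengths into a probability distribution over continuations. The weight values here are hand-set for illustration; in a real LLM they are learned from text corpora by gradient descent.

```python
# Hypothetical connection-strength matrix (rows: current token, cols: next token).
vocab = ["the", "cat", "sat", "mat"]

W = [
    [0.0, 2.0, 0.0, 1.0],   # after "the": "cat" strongly, "mat" more weakly
    [0.0, 0.0, 3.0, 0.0],   # after "cat": "sat" dominates
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

def continuation_probs(word):
    """Normalize one row of weights into next-token probabilities."""
    row = W[vocab.index(word)]
    total = sum(row)
    return {vocab[j]: w / total for j, w in enumerate(row) if w > 0}

print(continuation_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}
```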
Randomness Makes Responses Human
- LLMs generate text step by step by sampling the next word from a probability distribution over likely continuations.
- Controlled randomness is added to make outputs more varied and humanlike instead of always picking the single most likely word.
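The "controlled randomness" above is commonly implemented as temperature sampling: instead of always taking the single most likely word (greedy decoding), the model samples from the distribution, with a temperature knob that controls how adventurous the draw is. A minimal sketch, with a hypothetical next-word distribution:

```python
# Temperature sampling: low temperature -> near-greedy, high -> more random.
import math
import random

next_word_probs = {"cat": 0.6, "dog": 0.3, "mat": 0.1}  # hypothetical distribution

def sample_next(probs, temperature=1.0):
    """Sample a next word, reshaping the distribution by temperature."""
    # Rescale log-probabilities by temperature, then renormalize.
    scaled = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(scaled.values())
    words = list(scaled)
    weights = [scaled[w] / total for w in words]
    return random.choices(words, weights=weights)[0]

greedy = max(next_word_probs, key=next_word_probs.get)   # always "cat"
sampled = sample_next(next_word_probs, temperature=0.8)  # usually "cat", sometimes not
print(greedy, sampled)
```

As temperature approaches 0 the sampler converges to the greedy choice; at 1.0 it samples the distribution as-is; above 1.0 it flattens the distribution and produces more varied, occasionally surprising continuations.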



