

Pulling at Tightly Woven Threads
Sep 30, 2025
A wide-ranging discussion about how we should treat machine-generated content. The hosts explore the challenges of attributing authorship to machine-generated text and how LLMs differ from traditional tools. They debate the risks of automation and job displacement, asking whether material abundance could actually harm humanity, and voice concerns about AI monopolies and the influence of big tech. The conversation closes by envisioning a future shaped by AI, weighing both its promise and its perils.
Quotations And Machine Authorship
- Jim argues that machine-generated text, having no human body behind it, shouldn't be quoted like human speech.
- He compares LLM output to instrument readings (a speedometer, a depth sounder), which we report without quotation marks.
LLMs Aren't Like Simple Instruments
- KMO counters that LLM outputs vary by context and user, unlike deterministic instruments.
- He argues that LLMs behave more like minds than simple tools because responses adapt to conversation history.
Training, Conditioning, And Jailbreaks
- KMO explains that LLMs are trained on vast amounts of human writing and must be heavily filtered before public release.
- He warns jailbreaks can expose the model's darker, unconditioned outputs.