Interconnects

Why AI writing is mid

Nov 17, 2025
The discussion digs into the shortcomings of AI writing: models can occasionally produce a great sentence but struggle to sustain that quality, while structural limits in training methods and market pressures work against high-quality prose. The conversation also covers why writing is inherently harder than image generation, why a distinct voice matters, and why fostering personality in AI outputs will require bold new post-training approaches.
AI Snips
INSIGHT

Great Sentences, Poor Sustained Prose

  • AI models can produce the occasional great sentence but fail to sustain high-quality prose across multiple sentences.
  • Nathan Lambert argues this is a structural limitation rooted in how models are trained and the markets they serve.
INSIGHT

Training Objectives Suppress Style

  • Post-training objectives prioritize helpfulness, clarity, and honesty, so style rarely becomes a leading objective to optimize.
  • Aggregate preference tuning and payment structures further suppress distinctive voice and deeper writing quality (a toy sketch of the aggregation effect follows below).
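As a rough illustration (not from the episode), here is a minimal sketch of how pooling pairwise preferences into a single reward score can wash out a minority taste for distinctive prose. The 70/30 split, the "plain"/"voiced" labels, and the Bradley-Terry framing are assumptions chosen for the toy example.

```python
# Hypothetical toy example (assumed numbers, not from the episode):
# pooling pairwise preferences across labelers and fitting one
# Bradley-Terry score erases a minority preference for distinctive prose.
import math

# Two candidate completions for the same prompt:
#   "plain"  : safe, clear, generic prose
#   "voiced" : distinctive, stylistically risky prose
# Assumed labeler pool: 70% prefer plain, 30% prefer voiced.
pairwise_wins = {("plain", "voiced"): 70, ("voiced", "plain"): 30}

# Closed-form Bradley-Terry fit for two items: the score gap equals
# the log-odds of the pooled win rate.
p_plain = pairwise_wins[("plain", "voiced")] / 100
reward_gap = math.log(p_plain / (1 - p_plain))  # reward(plain) - reward(voiced)

print(f"pooled preference for plain prose: {p_plain:.0%}")
print(f"reward gap (plain - voiced): {reward_gap:+.2f}")
# A policy optimized against this single aggregate reward drifts toward the
# plain style for every user, even though 30% of labelers wanted the voice.
```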
INSIGHT

Incentives Favor Brevity Over Depth

  • Good writing often requires more time and complexity, but user needs and paid labeler incentives favor quick, concise outputs.
  • Implicit RLHF biases like length and conformity work against higher-quality, nuanced writing (see the toy simulation after this list).
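To make the length-bias point concrete, here is a small hedged simulation (all probabilities are assumptions, not data from the episode): if labelers pick the longer of two responses even a modest fraction of the time, the pooled labels strongly favor length, and any reward model fit to them inherits that bias.

```python
# Hypothetical simulation of length bias in preference labels
# (assumed probabilities, not data from the episode).
import random

random.seed(0)

def choose_a(len_a: int, len_b: int, p_length_bias: float = 0.65) -> bool:
    """Return True if response A is picked. With probability p_length_bias the
    labeler simply takes the longer response; otherwise the choice is random."""
    if random.random() < p_length_bias:
        return len_a >= len_b
    return random.random() < 0.5

# Simulate pairwise comparisons between responses of varying length (in tokens).
pairs = [(random.randint(50, 400), random.randint(50, 400)) for _ in range(10_000)]
longer_wins = sum(choose_a(a, b) == (a >= b) for a, b in pairs)

print(f"longer response wins {longer_wins / len(pairs):.0%} of comparisons")
# A reward model fit to these labels scores length upward, so RLHF-style
# optimization pushes the policy toward padded answers rather than tighter prose.
```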