

“The Most Common Bad Argument In These Parts” by J Bostock
Oct 12, 2025
J. Bostock, a contributor to LessWrong, examines a reasoning flaw he calls 'exhaustive free association,' which he sees frequently in rationalist communities: generate a list of possibilities by free association, rule each one out, and treat the question as settled because the list felt complete. He critiques superforecasters for underestimating AI risk this way and discusses how the same pattern undermines quantitative welfare estimates. The episode digs into why such arguments are so persuasive and emphasizes the importance of challenging this reasoning style to improve discourse.
Exhaustive Free Association Defined
- Exhaustive Free Association (EFA) is the error of arguing 'it's not A, not B, not C, and I can't think of anything else, so it's nothing at all.'
- J. Bostock warns this pattern is common and deceptively persuasive among smart people.
Superforecasters Miss Novel AI Risks
- Superforecasters listed familiar extinction routes, ruled each out, and arrived at an AI-doom probability of under 1%.
- Bostock points out that they missed novel routes, such as misuse of industrial supply chains in ways that could kill everyone.
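A toy calculation makes the structure of that mistake concrete. Everything below is invented for illustration (the routes, the probabilities, the residual); the point is only that summing over an enumerated list silently assumes the unlisted routes carry zero probability.

```python
# Toy illustration of the EFA failure in risk estimation.
# All numbers are invented for this example; they are not
# Bostock's or the superforecasters' actual figures.

enumerated_routes = {
    "rogue AI launches nukes": 0.001,
    "AI-designed bioweapon": 0.002,
    "AI triggers great-power war": 0.001,
}

# EFA-style estimate: implicitly assumes the list is exhaustive,
# i.e. P(doom via any unlisted route) = 0.
efa_estimate = sum(enumerated_routes.values())
print(f"EFA estimate: {efa_estimate:.3f}")  # 0.004, i.e. "under 1%"

# The hidden assumption made explicit: add a residual term for
# routes nobody free-associated their way to (e.g. novel misuse
# of industrial supply chains, per the episode).
p_unlisted_routes = 0.02  # an assumption, not a measured quantity
honest_estimate = efa_estimate + p_unlisted_routes
print(f"With residual: {honest_estimate:.3f}")  # 0.024, six times larger
```

The particular numbers don't matter; the point is that 'under 1%' is really a claim that the residual term is near zero, which the enumeration itself cannot establish.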
Flawed Quantification In Welfare Estimates
- Bostock criticizes Rethink Priorities for using EFAs to select welfare-relevant traits and theories of consciousness, producing numeric outputs he regards as essentially meaningless.
- He suggests those numbers can be worse than simply looking at behavioral observations and trusting one's vibes.
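A hypothetical sketch of how that plays out (the theories, credences, and multipliers below are invented, not Rethink Priorities' actual model): when both the list of theories and their weights come from free association, the multiplied-out number inherits that arbitrariness while looking precise.

```python
# Hypothetical sketch of how EFA-chosen inputs propagate into a
# precise-looking welfare estimate. Theories, credences, and
# multipliers are all invented for illustration.

theories = {
    # theory name: (credence, welfare multiplier it implies)
    "theory A": (0.5, 0.10),
    "theory B": (0.3, 0.50),
    "theory C": (0.2, 0.01),
}

# Weighted average over the enumerated theories only.
estimate = sum(credence * mult for credence, mult in theories.values())
print(f"Welfare estimate: {estimate:.3f}")  # 0.202, to three decimal places

# The worry: the enumeration step is itself an EFA. A fourth theory
# nobody free-associated to, or different credences, moves the output
# by more than its apparent precision suggests, so the digits convey
# confidence the process cannot support.
```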