

Lawfare Daily: Josh Batson on Understanding How and Why AI Works
May 30, 2025
Josh Batson, a research scientist at Anthropic, joins Kevin Frazier to dive into the mechanics of AI. They unpack two key research papers that illuminate how generative AI models function. The conversation touches on AI's "black box" nature and the pressing need for transparency in how models reach their decisions. Batson humorously contrasts how AI models do math with traditional computational methods, discusses ethical dilemmas in how AI systems learn, and emphasizes the importance of interpretability for fostering public trust. A fascinating exploration of AI's role in society!
AI Snips
AI Models as Black Boxes
- AI models differ from normal software by being "grown" rather than explicitly programmed.
- This makes them more like biological systems and creates a "black box" challenge in understanding their behavior.
Interpretability vs Explainability
- Interpretability seeks a mechanistically accurate understanding of an AI model's internal processes.
- Explainability aims for plausible, human-understandable reasons that may not be strictly accurate.
Why Interpretability Matters
- Understanding AI helps predict when it will succeed, when it will fail, and how it will handle new situations.
- Knowing how AI works makes it possible to improve its safety, trustworthiness, and generalization.