
Scaling Laws: Cass Sunstein on What AI Can and Cannot Do
Dec 23, 2025

Cass Sunstein, Robert Walmsley University Professor at Harvard and a legal scholar, dives deep into AI’s capabilities and limitations. He discusses when to trust algorithms over human judgment and highlights the distinction between noise reduction and bias in decision-making. Sunstein emphasizes the implications of AI in fields like medicine and law, and addresses how biased training data can skew outcomes. He also explores the unpredictability of social phenomena, advocating for careful consideration of when to delegate decisions to AI.
AI Snips
Algorithms Quiet Noise And Improve Fairness
- Algorithms can dramatically reduce human noise and sometimes eliminate cognitive biases in repeatable decisions.
- That reliability improves fairness and accuracy in domains like bail decisions or medical triage, provided the training data is appropriate.
Control Generative AI Noise With Temperature
- Generative AI can be noisy, but you can reduce variation by lowering the sampling temperature (see the sketch after this list).
- Reducing noise trades off creativity for consistency, which may be preferable for judgment tasks.
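
The temperature mechanism this snip refers to can be sketched in a few lines. This is a hedged illustration, not any vendor's API: the function name and the toy logits are invented for the example. It only shows the general idea that dividing logits by a smaller temperature sharpens the softmax distribution, so repeated samples vary less.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample an option index from logits after temperature scaling.

    Lower temperature sharpens the distribution (less variation across
    repeated calls); higher temperature flattens it (more variation).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    # Softmax with a max-shift for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Repeated draws at low temperature cluster on the top-scoring option,
# while draws at high temperature spread across the alternatives.
logits = [2.0, 1.5, 0.3]
for t in (0.2, 1.0, 2.0):
    draws = [sample_with_temperature(logits, temperature=t) for _ in range(1000)]
    print(t, np.bincount(draws, minlength=3) / 1000)
```

At temperature 0.2 nearly every draw lands on the highest-scoring option; at 2.0 the draws spread out, which is the creativity-versus-consistency trade-off the snip describes.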
Noise Reduction Alone Can Be Dangerous
- Removing noise without addressing systematic bias can lock in worse outcomes; the toy error decomposition below illustrates why.
- The best results occur when algorithms both reduce noise and correct for cognitive biases reflected in the training data.
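
A small numeric sketch, with numbers invented purely for illustration: squared error decomposes into bias squared plus variance (noise), so driving variance to near zero while leaving a systematic bias in place can leave total error roughly as large as before, just consistently wrong.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 10.0   # the "correct" judgment on some numeric scale (hypothetical)
n = 100_000

# Noisy but unbiased judgments: scattered around the truth.
noisy_unbiased = truth + rng.normal(0.0, 3.0, n)
# Quiet but biased judgments: almost no scatter, systematically off.
quiet_biased = truth + 3.0 + rng.normal(0.0, 0.1, n)

def report(name, preds):
    bias = preds.mean() - truth
    variance = preds.var()
    mse = np.mean((preds - truth) ** 2)   # mse is approximately bias**2 + variance
    print(f"{name:16s} bias={bias:+.2f} variance={variance:.2f} mse={mse:.2f}")

report("noisy/unbiased", noisy_unbiased)
report("quiet/biased", quiet_biased)
```

Both sets of judgments end up with a mean squared error of about 9: eliminating the noise without fixing the bias did not improve accuracy, it only made the errors uniform.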
