

Artificial Intelligence and Auto Safety with Phil Koopman – Part 2
Jul 3, 2025
Phil Koopman, a Professor at Carnegie Mellon University and expert in autonomous vehicle safety, dives into the complexities of AI in auto safety. He explores crucial areas like safety engineering and human factors, and considers what role language models might play in this space. The conversation also covers the nuanced interactions between human operators and autonomous systems, including real-life incidents involving Waymo. Koopman also discusses the challenges of effectively modeling human behavior and the ongoing debates over sensor technologies and training methods for safety.
Four Pillars Of Embodied AI Safety
- Safe embodied AI requires mastery of four domains: safety engineering, security, machine learning, and human factors.
- Gaps in any one area cause real-world failures in robotaxi deployments.
Do Hazard Analysis First
- Identify hazards and mitigate expected risks rather than assuming absence of bugs equals safety.
- Perform formal hazard analysis as the foundational safety activity.
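The "identify hazards, then mitigate expected risks" flow can be sketched as a toy hazard log. This is an illustrative sketch only, not a method from the episode: the hazard names, the 1-5 severity and likelihood scales, and the risk threshold are all invented for illustration.

```python
# Toy hazard log: each hazard gets a severity and likelihood on a 1-5 scale.
# Risk is their product; risks at or above the threshold need an explicit
# mitigation. All values here are hypothetical examples.
hazards = [
    {"hazard": "pedestrian not detected", "severity": 5, "likelihood": 2},
    {"hazard": "phantom braking on highway", "severity": 3, "likelihood": 3},
    {"hazard": "stale map data", "severity": 2, "likelihood": 4},
]

RISK_THRESHOLD = 6  # illustrative cutoff for "must be mitigated"

def triage(hazard_log):
    """Rank hazards by risk and flag those that require mitigation."""
    scored = [
        {**h,
         "risk": h["severity"] * h["likelihood"],
         "needs_mitigation": h["severity"] * h["likelihood"] >= RISK_THRESHOLD}
        for h in hazard_log
    ]
    return sorted(scored, key=lambda h: h["risk"], reverse=True)

for row in triage(hazards):
    print(row["hazard"], row["risk"], row["needs_mitigation"])
```

The point of the sketch is the ordering of activities: the hazard list exists before any claim about software correctness, so safety is argued from mitigated risks rather than from an absence of observed bugs.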
ML Breaks Traditional Safety Assumptions
- Machine learning violates many traditional safety assumptions because its behavior is statistical, not deterministic.
- Autonomous systems must manage limits and responsibility previously held by humans.
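One common way to manage statistical behavior at runtime is a confidence-gated fallback: trust the ML output only when its confidence clears a threshold, and otherwise hand off to a simpler conservative behavior. This is a hedged sketch of that general pattern, not something prescribed in the episode; the function name, threshold, and fallback action are hypothetical.

```python
# Sketch of a confidence-gated fallback around a statistical perception output.
# The classifier output, 0.9 threshold, and fallback behavior are illustrative.

def plan_action(detection_label: str, confidence: float,
                threshold: float = 0.9) -> str:
    """Act on the ML detection only above the confidence threshold;
    otherwise fall back to a conservative safe behavior."""
    if confidence >= threshold:
        return f"proceed: treating object as {detection_label}"
    return "fallback: slow down and increase following distance"

print(plan_action("plastic bag", 0.97))   # high confidence: act on detection
print(plan_action("unknown object", 0.55))  # low confidence: safe fallback
```

The design choice this illustrates is exactly the snip's point: because the perception component is statistical, the system itself must manage its limits with an explicit safe response, rather than assuming the component is always right as a human driver's judgment once was.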