"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Liability for AI Harms: How Ancient Law Can Govern Frontier Technology Risk, with Prof Gabriel Weil

Jul 26, 2025
In this engaging discussion, Gabriel Weil, an Assistant Professor of Law at Touro University with expertise in AI liability, shares his insights on harnessing traditional liability law to govern AI development. He argues for using existing negligence and products liability frameworks to hold developers accountable. The conversation dives into real-world scenarios like the Character AI case and voice cloning risks, and proposes the innovative use of punitive damages to make missteps far costlier and thereby incentivize safer AI practices.
INSIGHT

Liability Law Scales AI Risk

  • Liability law naturally scales with AI risk, incentivizing safer development based on the dangers firms create.
  • It avoids the need for new legislation by adapting existing legal frameworks like negligence and products liability.
INSIGHT

Negligence and Product Liability Limits

  • Negligence requires showing that a failure to exercise reasonable care caused actual harm, which is hard to prove while core AI safety problems remain unsolved.
  • Products liability, especially design-defect claims, works similarly and won't differ much unless a safer alternative design exists.
INSIGHT

Abnormally Dangerous Activities & AI

  • The abnormally dangerous activities doctrine may apply to frontier AI because of risks that remain even when reasonable care is exercised.
  • Courts may initially resist applying this doctrine to software, but it aligns with the realities of AI risk.