

AI, Liability, and Hallucinations in a Changing Tech and Law Environment
May 15, 2025
Daniel Ho, a leading law professor at Stanford, and Mirac Suzgun, a JD/PhD student focused on AI in law, discuss the integration of AI technology in the legal field. They explore the phenomenon of AI hallucinations, where the tech generates fictitious legal citations, raising serious concerns about accuracy. The conversation delves into the challenges of AI misunderstanding legal precedents, the effects of biased training data, and the need for human oversight. Their insights highlight both the promise and peril of using AI in legal practice.
AI Snips
AI as Augmentation Tool
- Lawyers who effectively use AI will replace those who don't.
- AI is a tool for augmenting legal professionals, not replacing them.
High Legal AI Hallucination Rates
- State-of-the-art AI models hallucinate legal facts 58% to 88% of the time.
- Even tools marketed as hallucination-free produce false citations in one-fifth to one-third of responses.
AI's Fiction vs Reality Challenge
- AI language models are trained on a mix of fictional and non-fictional text.
- They struggle to distinguish legal fiction from actual law, which causes hallucinations.