What happens to evidence in the world of deepfakes? Does the NY Times have a reasonable complaint about OpenAI training on their content? How do you prevent someone maliciously generating cases at scale? Maura practiced law for 17 years, was a pioneer in eDiscovery, and is now a professor of computer science; she is incredibly well-read and thoughtful on all these big questions.
Topics Covered:
(10:38) Why do you need to disclose that you use eDiscovery tools, but not, e.g., Google search? Where is that line drawn?
(16:06) Is there a standard test for evaluating tools to determine whether they should be approved?
(17:59) Why was the case about ChatGPT-created citations actually worse than most perceive?
(21:00) How did you then see judges respond to this event?
(23:48) Where will the line be drawn between disallowing tools entirely and merely requiring disclosure?
(33:02) Where are we on the genAI curve of acceptance?
(35:32) Evidence and deepfakes
(41:28) What will the legal profession look like in this new world?
(44:09) Do publishers have any recourse for models being trained on their data?