Law://WhatsNext

Tom Rice and Alex Herrity
Nov 7, 2025 • 28min

AI workflows, agents, governance and security

In a twist to what has probably become our “normal” programming, this episode features just the two of us in conversation. We explore the implications of technological progress - from the shift we’re contemplating from AI-infused linear workflows to fully agentic ones, to the risks and vulnerabilities baked into today’s LLM architectures. Essentially, it’s the kind of discussion we often have offline, brought into the open.

The following pieces ground our discussion:

From linear AI-infused workflows to fully agentic - new skills and orchestration challenges:
Legal AI’s Future Is Railroads, But Speeding Up Canals Still Makes Sense For Now by Alex Herrity
The Problem with Agentic AI in 2025 by Sangeet Paul Choudary - The original article featuring the canals vs railroads analogy that inspired Alex's piece

Prompt Injection Attacks & AI Governance:
The Lethal Trifecta for AI Agents by Simon Willison - defining the three dangerous elements that enable prompt injection attacks
Prompt Injections as Far as the Eye Can See by Simon Willison - Johann Rehberger's "Month of AI Bugs" research demonstrating widespread prompt injection vulnerabilities
I Accidentally Became a ChatGPT Surveillance Node by Juliana Jackson - The article Tom and Alex discuss revealing OpenAI's buggy infrastructure leaking private conversations
ChatGPT Scrapes Google and Leaks Your Prompts - Quantable Analytics - Technical breakdown of the ChatGPT prompt leakage issue

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) Focused conversations with leading practitioners, technologists, and educators; (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis and visualisation of how AI is augmenting our potential
Oct 28, 2025 • 43min

AI, Entrepreneurship & Space Law with Memme Onwudiwe

This week we sit down with Memme Onwudiwe for a conversation that starts in a Harvard Law classroom - transitions to his building an AI company before ChatGPT was a thing - and ends up in outer space 🚀

Memme co-founded Evisort while at Harvard Law School in 2016, building AI-powered contract intelligence from the Harvard Innovation Lab years before it became mainstream. Workday acquired the company in October 2024, where Memme now serves as an AI Evangelist. Memme returns to Harvard each spring to teach legal entrepreneurship alongside co-founder Jerry Ting, and he’s a published space law scholar whose paper “Africa and the Artemis Accords” examines how emerging nations can secure their stake in the space economy.

Key References

Academic Research
Africa and the Artemis Accords — Memme Onwudiwe & Kwame Newton, New Space (2021)

Legal Frameworks
Artemis Accords — Non-binding bilateral space exploration principles (2020, 55+ signatories)
Outer Space Treaty — Foundational UN space law treaty (1967)
Moon Agreement — “Common heritage” framework (1979, 18 signatories)

Organizations
Harvard Innovation Labs — Where Evisort was founded
CLOC — Corporate Legal Operations Consortium (6,300+ members)
Space Beach Law Lab — Annual space law conference, Feb 24-26, 2026, Long Beach

Corporate
Workday-Evisort Acquisition — ~$310M, closed Oct 2024

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com.
Oct 14, 2025 • 47min

The UX of Legal AI with Nicole Braddick

Nicole Braddick needs no introduction - but if you had to rush one for the purposes of publishing a podcast 👀 you might say she’s the Global Head of Innovation at Factor Law, following the February 2025 acquisition of her company, Theory & Principle, where she served as CEO and Founder. A former trial lawyer who transitioned into legal tech 15 years ago, Nicole has been one of the industry's most persistent advocates for bringing modern design and development practices to legal technology. Her team has worked with leading law firms, legal tech companies, corporate legal departments, non-profits and public sector organisations to build custom solutions focused on user experience - transforming an industry that, when she started, was "purely functional" and "engineering-led" into one where good design is finally recognised as essential.

We get into all of that and more during our discussion, and lean in hard on Nicole’s system-wide view and perspective on what’s happening at present.

Key Takeaways

Nicole advocates that the calculation around build versus buy has fundamentally changed with generative AI. She argues that corporate legal departments should consider getting enterprise accounts with providers like Anthropic or OpenAI, and should be building their muscles for developing internal customised solutions rather than defaulting to SaaS products.

The proliferation of chatbots in law was appropriate when everyone was experimenting with generative AI, but Nicole believes the industry has overcorrected. Chat interfaces place enormous cognitive load on users who must craft effective prompts, whereas traditional point-and-click UIs make things easier by guiding users through structured workflows. Nicole sees the future as lying in hybrid experiences.

While the AI industry races toward autonomous agents, Nicole sounds a cautionary note for legal applications. The entire value proposition of agents is "getting rid of control" - but lawyers have to wrestle with their ethical obligations and duties to control, to check, and to approve. Nicole sees this as a fascinating design challenge: where previous UX best practices focused on removing friction to create seamless experiences, Nicole and her team are now actively considering where they must strategically add friction and interruption points, believing the goal is to prevent lawyers from blindly clicking "yes, yes, yes" while avoiding so much friction that they abandon the tool.

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for:
Focused conversations with leading practitioners, technologists, and educators
Deep dives into the intersection of law, technology, and organisational behaviour
Practical analysis and visualisation of how AI is augmenting our potential
Sep 23, 2025 • 41min

Visualising Justice: Rule Mapping and the Future of Legal AI with Stephan Breidenbach

We sit down with Stephan Breidenbach, co-founder of the Rulemapping Group and a German scholar who's been quietly revolutionising how we think about law, technology, and democratic governance since the early 2000s.

What started as a teaching tool to help law students visualise complex legal reasoning has evolved into something far more ambitious: a comprehensive system for transforming laws into executable code that maintains human oversight while dramatically improving access to justice.

Stephan's present work spans three critical areas: decision automation (turning legal rules into fast, transparent systems), rule-based AI (supporting human lawyers with explainable reasoning), and law as code (drafting legislation that's both human- and machine-readable from day one).

Some of our highlights from the conversation:

The Transparency Imperative: "I would never trust an LLM with a legal process because it's confabulating," Stephan declares, highlighting why the Rulemapping approach prioritises explainable AI over black-box solutions. Their system lets human decision-makers see exactly how the AI reached its conclusions – a "zoom in, zoom out" process that mirrors how lawyers naturally think.

Democracy-First Technology: Unlike Silicon Valley's "move fast and break things" mentality, Stephan advocates for keeping humans in the loop even when AI becomes more accurate: "I think it's very important for trust in the legal system and therefore in a democratic system that there are human beings, even if they make worse decisions."

Access to Justice at Scale: Through real-world deployments like processing 500,000 diesel emission scandal cases and serving as Europe's first certified Digital Services Act dispute resolution body, Rulemapping demonstrates how thoughtful automation can make legal systems accessible to everyone, not just those who can afford lawyers.

We also explore the behavioural risks of over-relying on automated systems, the potential for "law as code" to improve democratic participation, and Stephan's vision of embedded law that serves citizens rather than bureaucracy.

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for more of the same.
Sep 16, 2025 • 39min

Building A Scalable Privacy Function That Matters with Ben Martin

We catch up with Ben Martin, the former Director of Privacy at Trustpilot and author of "GDPR for Startups," who's currently living his best life somewhere in the Estonian wilderness with a camper van, fishing rod, and blessed freedom from subject access requests. Having built privacy programs at high-growth companies like Trustpilot, Ovo Energy, and King Digital Entertainment, Ben brings a refreshingly practical perspective to privacy law that goes way beyond compliance theatre.

From his sabbatical perch in the Nordics, he reflects on everything from why GDPR hasn't quite delivered its promised outcomes to how privacy lawyers are uniquely positioned to lead AI governance.

What We Cover:
The Sabbatical Chronicles: Ben's epic Nordic adventure and why stepping away from work sometimes gives you the clearest perspective on it
Privacy Program Building: Moving from compliance theatre to business enablement, and why good privacy programs start with genuine curiosity about products
GDPR Reality Check: Why the regulation might not have quite yet delivered its intended outcomes, and the types of privacy lawyers and approaches Ben sees in practice
AI Governance Evolution: How privacy professionals are naturally stepping into AI oversight roles and what new skills they need to develop
Technical Literacy: The importance of understanding what your business actually builds, and Ben's practical approach to learning complex technical concepts

Key References:
GDPR for Startups - Ben's practical guide to building privacy programs in high-growth companies
Field Fisher Privacy Newsletter - Legal developments summary that Ben recommends for staying current
Hard Fork Podcast - Ben's go-to for broad tech and AI developments
Lovable - The AI coding platform Ben's been experimenting with to build his habit tracker (and recruit his girlfriend as user number one)

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
Sep 9, 2025 • 48min

Architecting our Legal Future with Dan Hunter

This week we sat down with Dan Hunter, Executive Dean of the Dickson Poon School of Law at King's College London and serial legal tech entrepreneur. Dan's journey spans academia across three continents, four successful startups (including his current venture GraceView), and decades of research on the cognitive science of legal reasoning. As both an educator training the next generation of lawyers and an entrepreneur building AI-powered legal solutions, he offers a unique dual perspective on the transformation underway across knowledge work.

Key Takeaways

1. The Learning Paradox: AI Makes Us Feel Smarter While Making Us Dumber
Students using large language models consistently perform better on assignments and believe they're learning more - but when the AI is removed, they've retained virtually nothing. This creates a dangerous illusion of competence (sycophantic models propagate this!) that law schools and firms must address through new assessment methods and training approaches.

2. We're Heading Toward a "Barbell" Legal Profession
Traditional pyramid law firm structures will collapse as AI automates much of the work. Dan believes the future involves senior lawyers managing client relationships at the top, AI agents handling routine tasks in the middle, and "legal engineers" swarming around validating AI outputs and steering the models.

3. Entry-Level Legal Jobs Are Already Disappearing
We discuss the recent Stanford research "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence" by Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, Stanford Digital Economy Lab (2025) - the landmark study using ADP payroll data showing a 13% employment decline for young workers in AI-exposed occupations.

Interested in more?
If you found this episode interesting, please like, subscribe to the show, comment, and share! For more thought-provoking content at the intersection of law and technology, head to our Law://WhatsNext home for:
Focused conversations with leading practitioners, technologists, and educators
Deep dives into the intersection of law, technology, and organisational behaviour
Practical analysis and visualisation of how AI is augmenting our potential
Sep 2, 2025 • 57min

Copyright, Competition, and Content Authenticity in the Age of AI with Dana Rao

We have fun sitting down with Dana Rao (the former General Counsel and Chief Trust Officer at Adobe) - where we cover the implications of AI progress on: regulatory frameworks and geopolitics; copyright law; deepfakes - including content proliferation and authenticity; fair use and Dana’s take on the current class action lawsuits in the US; and Dana’s proposals for a new impressionistic right for creators to stave off the economic harms of their work being imitated.

The conversation provided us with a fascinating insight into life at Adobe at the moment the performance of these generative models really began to take off, and it was clear to us that Dana and his team played a pivotal role in shaping not only what kind of products Adobe went on to develop but how they would be distributed and consumed by their users!

This episode draws on Dana's extensive experience at the intersection of technology, law and policy. Here are the key references and cases we discussed:

Legal Cases:
Andy Warhol Foundation for Visual Arts, Inc. v. Goldsmith, 598 U.S. 508 (2023) - The Supreme Court case that Dana argues will influence the outcome of AI fair use battles (which are focussed on economic competition between uses)
Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., No. 1:20-CV-613-SB (D. Del. Feb. 11, 2025) - The "Westlaw case" Dana mentioned, where the judge initially ruled for the AI company but changed his mind after better understanding the technology

Dana's Policy Work:
Senate Judiciary Committee Testimony (July 12, 2023) - Dana's appearance before the Senate Subcommittee on Intellectual Property hearing titled "Artificial Intelligence and Intellectual Property – Part II: Copyright"
Adobe's Proposed Anti-Impersonation Law - Dana's legislative proposal for federal protection against AI-powered style imitation

Content Authenticity Standards:
Content Authenticity Initiative (CAI) - Adobe-founded initiative with over 5,000 members working to establish content provenance standards
Coalition for Content Provenance and Authenticity (C2PA) - The formal standards organization co-founded by Adobe, Microsoft, Intel, Arm, BBC, and Truepic under the Linux Foundation
C2PA Implementation in Google Pixel Phones - Recent adoption of content authenticity standards in consumer devices

If you found this episode of Law://WhatsNext interesting, please rate, subscribe, comment, and share!
Aug 14, 2025 • 17min

GPT5 - Pt 2 with Sigge Labor (CTO) and Jacob Johnsson (Legal Eng) of Legora

Sigge Labor, CTO at Legora, and Jacob Johnsson, Legal Engineer, dive into the revolutionary capabilities of GPT-5. They discuss how this new model enhances legal reasoning and how their battle evaluations show GPT-5 outperforms other models over 80% of the time. The conversation also explores GPT-5's steerability, enabling more interactive workflows that empower lawyers in their tasks. With real-time insights from one of the fastest-growing AI companies, this chat shines a light on the future of legal tech and its transformative potential.
Aug 12, 2025 • 23min

GPT5 - Pt 1 with Jake Jones (CPO & Co-Founder, Flank)

Emergency drop: we grabbed Jake Jones (CPO & Co-Founder, Flank) for a quick-fire reaction to OpenAI’s GPT-5 launch. We cover his day-one impressions, what it means for legal products (including Flank), and the downstream implications for how legal work gets done. A short detour from our usual programming - did you enjoy this rapid-response format? If yes, please like, rate, and share to help Law://WhatsNext reach more people.
Jul 29, 2025 • 49min

The Future Lawyer

In this compelling episode of Law://WhatsNext, hosts Tom & Alex dive into the transformative shifts underway in legal education and junior lawyer development. Joined by three visionary voices - Lucie Allen (Managing Director, Barbri), Rob Elvin (Partner, Squire Patton Boggs), and Sophie Correia (Trainee Solicitor, TravelPerk) - the discussion explores provocative ideas reshaping what it means to be a lawyer.

Do Lawyers Even Need to Know the Law?
Sophie Correia challenges the traditional emphasis on memorisation and technical rules in legal education. Reflecting on her real-world experiences at a tech scale-up, Sophie argues that success hinges more on human skills such as communication, empathy, and trust-building than on recalling obscure statutes.

The Flawed Incentives of Legal Training
Rob Elvin sheds light on systemic issues stemming from the billable hour model, which prioritises short-term profitability over effective mentoring. He advocates for a groundbreaking solution: linking career progression directly to the quality of trainee supervision, potentially transforming mentorship from a luxury into an essential career catalyst.

The AI Disconnect
Lucie Allen identifies a critical gap in legal education - the absence of meaningful engagement with AI and technology. Despite these tools reshaping the profession, current frameworks like the SQE fail to equip trainees adequately for technological realities, posing a substantial risk to their future readiness.

Three Ideas to Transform Legal Education:
Continuous Learning as the New Norm: Education doesn't stop at qualification. Lucie emphasises the necessity of lifelong learning, driven by relentless curiosity and adaptation to change.
Human Skills Set Lawyers Apart: Sophie highlights the enduring value of human-centric capabilities - understanding people, navigating complexity, and ethical reasoning - as indispensable traits lawyers must cultivate.
Systemic Change through Collective Responsibility: Rob, Lucie, and Sophie underline the importance of personal agency and collaborative effort in driving substantial reform across education, training, and regulatory frameworks.

A Hopeful Path Forward
Ultimately, the podcast champions a future in which tomorrow’s lawyers blend ethical judgment, technological proficiency, and interpersonal insight, prompting listeners to reconsider not whether lawyers need to know the law, but rather what precisely they need to know - and how best to prepare them for the evolving landscape.

Join us for an inspiring conversation that challenges conventional wisdom and points toward an empowered, adaptable, and human-centred future for the legal profession.
