

Law://WhatsNext
Tom Rice and Alex Herrity
How are leading practitioners leveraging emerging technologies and new ways of working to pursue their passions and objectives, and, as a by-product, what are the implications for the future of legal practice? Let's explore this together. What to expect:
- Focused conversations with leading practitioners, technologists, and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis and visualisation of how AI is augmenting our potential
- Insights from adjacent industries that might inform our own
Episodes

Dec 10, 2025 • 35min
Legal Tech Trends with Peter Duffy (Q4 2025)
We're joined by Peter Duffy for our quarterly ritual of dissecting the big headlines of Peter's popular Legal Tech Trends newsletter and ruminating on their potential implications for legal service delivery. Peter returns wide-eyed and optimistic, fresh off some time in the US, where he enjoyed attending TLTF in Austin.

What gets covered:
- The eternal "legal-specific vs. frontier model" debate: with Gemini 3 dropping and capabilities proliferating into vertical spaces, Peter weighs in on whether specialised legal AI still has an edge.
- PE is coming for BigLaw: McDermott is exploring MSO structures to let private equity in, and 20% of UK firms are eyeing PE money. We explore the uncomfortable questions: (i) does outside capital corrupt lawyer independence? (ii) does PE change the fabric of the firm and how it operates?
- The vibes have shifted: a wild stat from the PwC Law Firm Survey - the share of top-100 law firms expecting AI to boost revenue dropped from 69% (2023) to 31% (2025). Meanwhile, in-house teams are having their main-character moment with a 24-point jump in AI optimism. Is this gap telling?
- Product chaos continues: Norm AI spinning up an actual law firm (!), Crosby raising $20M for Slack-native contract review, Legora's client portal coming Q1 2026, and Linklaters designating 20 "AI lawyers" to build workflows.

Listen if you're worried your LinkedIn feed isn't giving you enough legal technology news 😂, or if you're curious to see what else is going on out there (beyond this platform). Rate, subscribe, comment, and share if you enjoyed this chat with Peter!

For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.

Dec 3, 2025 • 20min
Lawyer x AI Builder Jamie Tso
Hot off the heels of breaking legal LinkedIn last week, we caught up with Jamie Tso, a Hong Kong-based lawyer who's been building in public and sparking conversations across the legal community with his viral AI creations. This is a watch (don't only listen) episode: Jamie screen-shares his way through Google AI Studio, live-coding lightweight versions of legal tech tools we all know. He walks through his "SpellPage" contract editor (inspired by a novel-writing app, naturally), demonstrates real-time AI-powered redlining, and casually drops the concept of an open-source "legal AI operating system" built from first principles that could democratise access to the common technology workflows we are all building across legal. His philosophy? The barrier to entry is now so low that sophisticated AI tools "should be free, more or less."

Key moments:
- Live demo of AI-powered contract editing with natural language instructions (a toy sketch follows below)
- Why Google AI Studio is the ultimate one-stop shop (native API keys, version control, GitHub integration, no coding required)
- The shift from chatting with AI → AI using tools → AI spinning up mini-apps on the go
- Jamie's vision for consolidating legal workflows into reusable, customisable modules

Must-read context - check out Jamie's viral posts that sparked this conversation:
- The contract editor build
- "Gemini 3 is basically AGI at this point"

This is what building in the age of AI looks like: experimental, exhilarating, unnerving and transformative. If you enjoyed this episode with Jamie, please like, subscribe, comment, and share!

For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.
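To make the redlining demo concrete, here is a minimal sketch of the kind of call a lightweight contract editor might make: it sends a clause plus a plain-English instruction to a Gemini model and asks for a tracked-changes-style rewrite. This is not Jamie's SpellPage code; the prompt wording, model name, and the google-generativeai SDK usage are our own assumptions for illustration.

```python
# Illustrative only: a toy "natural-language redline" call, not SpellPage itself.
# Assumes the google-generativeai SDK and a GOOGLE_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # model name is an assumption

def redline_clause(clause: str, instruction: str) -> str:
    """Ask the model to rewrite a clause per a plain-English instruction,
    returning the edit as strikethrough/insert markup so a human can review it."""
    prompt = (
        "You are assisting a lawyer with contract redlining.\n"
        f"Original clause:\n{clause}\n\n"
        f"Instruction: {instruction}\n\n"
        "Return the revised clause, marking deletions with ~~strikethrough~~ "
        "and insertions with **bold**, followed by a one-sentence rationale."
    )
    response = model.generate_content(prompt)
    return response.text

if __name__ == "__main__":
    clause = "The Supplier may terminate this Agreement on 10 days' written notice."
    print(redline_clause(clause, "Make termination mutual and extend notice to 30 days."))
```

Everything above the prompt is boilerplate, which is Jamie's broader point: the distance between an idea and a working tool has collapsed.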

Nov 27, 2025 • 44min
In Conversation: Two CLOs (Andy Cooke & Sam Ross) on Communication, Performance and Ethics
Alex and Tom step aside for this one—no hosts, no scripts—just Andy Cooke (CLO of Perk) and Sam Ross (CLO of Remote) in conversation. What begins with nursery vomiting bugs quickly evolves into a refreshingly honest exploration of what it means to lead a legal function in disruptive technology companies. They dissect the tension between being "bold, brief, and gone" versus staying in the room to build genuine relationships, challenge the limiting "trusted advisor" archetype, and wrestle with when precision matters less than context and authentic communication.

Andy and Sam don't just theorise—they get personal about the moments that test you: deciding whether someone's lying on an expense claim, navigating board dynamics when you're the least financially fluent person in the room, and maintaining ethical standards when the stakes are existential. But this isn't a heavy-handed meditation on professional responsibility. The conversation crackles with levity and self-awareness - from Sam's admission that he deliberately uses humour to "pierce through" hierarchy, to their shared recognition that being fallible and human is part of doing the job right.

Both find genuine joy in what they do, drawing energy from learning from others and building networks that pay dividends years later. It's a masterclass in thoughtful leadership wrapped in the warmth of two friends who clearly respect each other's craft - and aren't afraid to acknowledge when they get it wrong.

If you enjoyed this episode, please like, subscribe, comment, and share! It gives us a warm fuzzy feeling and helps make our podcast more discoverable to other podcast aficionados! For more thought-provoking conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.

Nov 19, 2025 • 22min
The AI Content Tsunami with Guy Shahar
Creating content has never been easier. With both LLMs and world models (Sora 2, Veo 3, Marble), the fidelity of what we can produce at the tip of a prompt is getting genuinely scary. Guy Shahar is the CEO and founder of Blee, a Y Combinator-backed AI content compliance platform that helps companies review and oversee marketing materials at scale. Before founding Blee nearly four years ago, Guy led marketing operations at Adobe for five years, and he has been witnessing first-hand the explosion of AI-generated content and deliberating on its implications. We sit down with Guy in this short conversation to discuss: (1) the rising proliferation of AI-generated content; (2) the cyber-like threat of deepfakes and bad-actor impersonation; and (3) the new opportunities large language and world models present for some of the world's largest brands in how they generate and manage their production of compelling content.

Key Takeaways
- The "content tsunami" is here, and it's only getting bigger: content creation has exploded with AI, fundamentally changing the speed and volume at which companies can produce marketing materials. What used to take weeks now happens in hours. Guy calls this the "content tsunami", a relentless wave of content being generated across all digital channels. But the gap between how fast content can be created and how fast it can be safely approved is widening, creating significant risk exposure for companies and their brands (a toy sketch of a pre-publication gate follows below).
- Deepfakes aren't just a detection problem, they're a trust problem: the danger isn't only that bad actors can create convincing fake content; it's that they erode trust in everything we see online. The recent deepfake of Irish presidential candidate Catherine Connolly, which went viral in Ireland, falsely showed her withdrawing from the race just days before the election and remained live for 12 hours, demonstrating how sophisticated and damaging this content has become.
- AI compliance creates new opportunities for how teams work: while AI-generated content creates new risks, it also opens unprecedented opportunities to transform workflows and team structures. Guy sees potential for companies to rethink their entire "content supply chain": testing 50 or 100 versions of marketing materials instead of just two, delivering hyper-personalised content at scale, and breaking down silos between marketing, legal, GTM and compliance teams.

Key References from Our Conversation
- Catherine Connolly Deepfake Incident: an AI-generated video falsely depicting Irish presidential candidate Catherine Connolly withdrawing from the race surfaced just days before the October 2025 election and was viewed nearly 30,000 times over 12 hours before Meta removed it - a stark example of how deepfakes can threaten democratic processes and why rapid content monitoring matters.
- Content Authenticity Initiative (CAI): an open-standard verification system with over 900 member companies working to authenticate digital content and combat deepfakes through content credentials and metadata tracking.
- Dana Rao: Adobe's former General Counsel and Chief Trust Officer, mentioned for his perspective on deepfakes and the shift from trying to detect fakes to proving authenticity. Dana appeared on an earlier episode of Law://WhatsNext, which you can access here.
If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.
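To illustrate the "create fast, approve slowly" gap in concrete terms, here is a minimal sketch of a pre-publication compliance gate: a rule-based pass that flags risky phrases in marketing copy before it reaches human reviewers. It is a toy we made up for this write-up, not Blee's product; the phrase list and rationale text are invented for illustration.

```python
# Toy pre-publication compliance gate - illustrative only, not how Blee works.
import re
from dataclasses import dataclass

# Hypothetical policy: phrases that should trigger human review before publishing.
FLAGGED_PATTERNS = {
    r"\bguaranteed\b": "Absolute performance claims usually need substantiation.",
    r"\brisk[- ]free\b": "Financial-style promises attract regulatory scrutiny.",
    r"\bbest in the world\b": "Superlatives may require comparative evidence.",
}

@dataclass
class Finding:
    phrase: str
    reason: str

def review_copy(text: str) -> list[Finding]:
    """Return the policy findings for a piece of marketing copy."""
    findings = []
    for pattern, reason in FLAGGED_PATTERNS.items():
        match = re.search(pattern, text, flags=re.IGNORECASE)
        if match:
            findings.append(Finding(match.group(0), reason))
    return findings

draft = "Our guaranteed, risk-free AI assistant is the best in the world."
for f in review_copy(draft):
    print(f"REVIEW: '{f.phrase}' -> {f.reason}")
```

A gate like this only narrows the approval bottleneck; the episode's point is that the whole content supply chain, not just the final check, has to be rethought.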

Nov 12, 2025 • 46min
The View from the Interface with Kevin Cohn
We sit down with Kevin Cohn, Chief Customer Officer at Brightflag, who occupies one of the most unique vantage points in legal services: the interface between corporate legal departments and their outside counsel. We're sure any AI start-up would pay a premium to understand the work that is going to law firms and the value and time it takes to deliver it, right? By processing billions of dollars in legal invoices, Kevin and his team have unprecedented visibility to spot macro trends, from law firm partner utilisation patterns to staffing changes.

In this lively but familiar conversation (we've all known one another for a few years), Kevin reveals some emerging trends highly relevant to the technological revolution we are all experiencing. Beyond the predictable conversation about rising law firm rates, Kevin shares two interesting developments the Brightflag team are noticing:
- increased partner utilisation (which might actually be good news if total hours are decreasing?)
- something more troubling: what Kevin diplomatically calls "not the most above board" AI-enabled billing practices, with ever more invoices showing suspicious six-minute increments (a toy illustration follows below).

We also talk about the evolution of skills and relationships in an era when both clients and counsel are being shaped by automation and analytics.

Kevin is also our first Law://WhatsNext guest to have an AI version of himself shipped and deployed to give Brightflag customers on-demand access to his expertise on legal operations and spend management. While Kevin Clone can handle questions about invoice review and matter management workflows with ease, we discover its limits when we request an Italian wine pairing. The Clone politely deflects: "I'm here to focus on legal operations and Brightflag." Some things simply can't be replicated by AI; the real Kevin remains irreplaceable.

Key References
- Brightflag LinkedIn Post — Introducing Kevin Clone
- Ask Kevin Clone a Question — Brightflag

If you enjoyed this episode, please like, subscribe, comment, and share! It helps more people discover conversations like this. For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.
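As a back-of-the-envelope illustration of the billing pattern Kevin describes, here is a minimal sketch that flags invoices where an implausibly high share of line items land on exactly one 0.1-hour (six-minute) increment. It is a toy heuristic invented for this write-up, not Brightflag's detection logic; the data shape and threshold are assumptions.

```python
# Toy heuristic for spotting suspicious six-minute billing increments.
# Not Brightflag's method; thresholds and data shape are invented for illustration.
from dataclasses import dataclass

@dataclass
class LineItem:
    timekeeper: str
    hours: float  # billed time for a single task

def minimum_increment_share(items: list[LineItem]) -> float:
    """Share of line items billed at exactly one 0.1-hour (six-minute) unit."""
    if not items:
        return 0.0
    six_minute = sum(1 for item in items if abs(item.hours - 0.1) < 1e-9)
    return six_minute / len(items)

def flag_invoice(items: list[LineItem], threshold: float = 0.5) -> bool:
    """Flag for human review when most entries are single six-minute units."""
    return minimum_increment_share(items) >= threshold

invoice = [LineItem("Partner A", 0.1), LineItem("Partner A", 0.1),
           LineItem("Associate B", 0.1), LineItem("Associate B", 1.5)]
print(f"six-minute share: {minimum_increment_share(invoice):.0%}, "
      f"flag: {flag_invoice(invoice)}")
```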

Nov 7, 2025 • 28min
AI workflows, agents, governance and security
In a twist to what has probably become our "normal" programming, this episode features just the two of us in conversation. We explore the implications of technological progress, from the shift we're contemplating from AI-infused linear workflows to fully agentic ones, to the risks and vulnerabilities baked into today's LLM architectures. Essentially, it's the kind of discussion we often have offline, brought into the open.

The following pieces ground our discussion.

From linear AI-infused workflows to fully agentic - new skills and orchestration challenges:
- Legal AI's Future Is Railroads, But Speeding Up Canals Still Makes Sense For Now by Alex Herrity
- The Problem with Agentic AI in 2025 by Sangeet Paul Choudary - the original article featuring the canals vs railroads analogy that inspired Alex's piece

Prompt injection attacks & AI governance:
- The Lethal Trifecta for AI Agents by Simon Willison - defining the three dangerous elements that enable prompt injection attacks (see the sketch below)
- Prompt Injections as Far as the Eye Can See by Simon Willison - Johann Rehberger's "Month of AI Bugs" research demonstrating widespread prompt injection vulnerabilities
- I Accidentally Became a ChatGPT Surveillance Node by Juliana Jackson - the article Tom and Alex discuss revealing OpenAI's buggy infrastructure leaking private conversations
- ChatGPT Scrapes Google and Leaks Your Prompts - Quantable Analytics - technical breakdown of the ChatGPT prompt leakage issue

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.
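Here is a minimal sketch of Simon Willison's "lethal trifecta" framing expressed as a configuration check: an agent that combines access to private data, exposure to untrusted content, and the ability to communicate externally is the dangerous combination, and removing any one leg breaks the exfiltration chain. The capability names and check are our own shorthand for illustration, not a formal standard.

```python
# A toy "lethal trifecta" check, paraphrasing Simon Willison's framing:
# an agent that can (1) read private data, (2) ingest untrusted content,
# and (3) communicate externally is exposed to prompt-injection exfiltration.
# Capability names are our own shorthand for illustration.
from dataclasses import dataclass

@dataclass
class AgentConfig:
    name: str
    reads_private_data: bool          # e.g. inbox, DMS, contract repository
    ingests_untrusted_content: bool   # e.g. web pages, inbound email, uploads
    can_communicate_externally: bool  # e.g. sends email, calls webhooks

def has_lethal_trifecta(agent: AgentConfig) -> bool:
    """True when all three legs are present; removing any one breaks the chain."""
    return (agent.reads_private_data
            and agent.ingests_untrusted_content
            and agent.can_communicate_externally)

agents = [
    AgentConfig("contract-summariser", True, False, False),
    AgentConfig("inbox-triage-bot", True, True, True),
]
for a in agents:
    verdict = "REVIEW: lethal trifecta" if has_lethal_trifecta(a) else "ok"
    print(f"{a.name}: {verdict}")
```

The governance point follows directly: the reliable mitigation is architectural (drop one of the three legs), not a cleverer system prompt.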

Oct 28, 2025 • 43min
AI, Entrepreneurship & Space Law with Memme Onwudiwe
This week we sit down with Memme Onwudiwe for a conversation that starts in a Harvard Law classroom, moves to building an AI company before ChatGPT was a thing, and ends up in outer space 🚀

Memme co-founded Evisort while at Harvard Law School in 2016, building AI-powered contract intelligence from the Harvard Innovation Lab years before it became mainstream. Workday acquired the company in October 2024, where Memme now serves as an AI Evangelist. He returns to Harvard each spring to teach legal entrepreneurship alongside co-founder Jerry Ting, and he's a published space law scholar whose paper "Africa and the Artemis Accords" examines how emerging nations can secure their stake in the space economy.

Key References

Academic research:
- Africa and the Artemis Accords — Memme Onwudiwe & Kwame Newton, New Space (2021)

Legal frameworks:
- Artemis Accords — non-binding bilateral space exploration principles (2020, 55+ signatories)
- Outer Space Treaty — foundational UN space law treaty (1967)
- Moon Agreement — "common heritage" framework (1979, 18 signatories)

Organisations:
- Harvard Innovation Labs — where Evisort was founded
- CLOC — Corporate Legal Operations Consortium (6,300+ members)
- Space Beach Law Lab — annual space law conference, Feb 24-26, 2026, Long Beach

Corporate:
- Workday–Evisort Acquisition — ~$310M, closed Oct 2024

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com.

Oct 14, 2025 • 47min
The UX of Legal AI with Nicole Braddick
Nicole Braddick needs no introduction - but if you had to rush one for the purposes of publishing a podcast 👀 you might say she's the Global Head of Innovation at Factor Law, following the February 2025 acquisition of her company, Theory & Principle, where she served as CEO and founder. A former trial lawyer who transitioned into legal tech 15 years ago, Nicole has been one of the industry's most persistent advocates for bringing modern design and development practices to legal technology. Her team has worked with leading law firms, legal tech companies, corporate legal departments, non-profits and public-sector organisations to build custom solutions focused on user experience, transforming an industry that, when she started, was "purely functional" and "engineering-led" into one where good design is finally recognised as essential.

We get into all of that and more during our discussion, and lean in hard for Nicole's system-wide view of what's happening at present.

Key Takeaways
- Nicole argues that the build-versus-buy calculation has fundamentally changed with generative AI. Corporate legal departments should consider getting enterprise accounts with providers like Anthropic or OpenAI and should be building their muscles for developing internal customised solutions rather than defaulting to SaaS products.
- The proliferation of chatbots in law was appropriate when everyone was experimenting with generative AI, but Nicole believes the industry has overcorrected. Chat interfaces place enormous cognitive load on users who must craft effective prompts, whereas traditional point-and-click UIs guide users through structured workflows. Nicole sees the future lying in hybrid experiences.
- While the AI industry races toward autonomous agents, Nicole sounds a cautionary note for legal applications. The entire value proposition of agents is "getting rid of control", but lawyers have to wrestle with their ethical obligations and duties to control, to check, and to approve. Nicole sees this as a fascinating design challenge: where previous UX best practice focused on removing friction to create seamless experiences, her team is now actively considering where to strategically add friction and interruption points. The goal is to prevent lawyers from blindly clicking "yes, yes, yes" without adding so much friction that they abandon the tool. (A small sketch of such an approval gate follows below.)

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for:
- Focused conversations with leading practitioners, technologists, and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis and visualisation of how AI is augmenting our potential
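As a concrete illustration of the "strategic friction" Nicole describes, here is a minimal sketch of a human-in-the-loop approval gate wrapping a proposed agent action: low-risk steps run straight through, while high-risk ones pause for explicit confirmation rather than executing silently. The risk tiers and prompt wording are our own assumptions, not Factor's or Theory & Principle's design.

```python
# Toy human-in-the-loop approval gate - illustrative of "strategic friction",
# not any vendor's implementation. Risk tiers and wording are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str   # e.g. "Send counterparty the revised indemnity clause"
    risk: str          # "low" | "high"

def approval_gate(action: ProposedAction,
                  execute: Callable[[], None],
                  ask_user: Callable[[str], bool]) -> None:
    """Run low-risk actions directly; interrupt and ask before high-risk ones."""
    if action.risk == "high":
        approved = ask_user(f"Agent wants to: {action.description}. Approve? [y/N] ")
        if not approved:
            print("Action declined; nothing was sent.")
            return
    execute()

def console_prompt(message: str) -> bool:
    return input(message).strip().lower() == "y"

if __name__ == "__main__":
    action = ProposedAction("Send counterparty the revised indemnity clause", "high")
    approval_gate(action,
                  execute=lambda: print("Action executed."),
                  ask_user=console_prompt)
```

The design question Nicole raises sits in where you draw the "high" line: too many interrupts and lawyers abandon the tool, too few and they rubber-stamp everything.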

Sep 23, 2025 • 41min
Visualising Justice: Rule Mapping and the Future of Legal AI with Stephan Breidenbach
We sit down with Stephan Breidenbach, co-founder of the Rulemapping Group and a German scholar who's been quietly revolutionising how we think about law, technology, and democratic governance since the early 2000s.

What started as a teaching tool to help law students visualise complex legal reasoning has evolved into something far more ambitious: a comprehensive system for transforming laws into executable code that maintains human oversight while dramatically improving access to justice.

Stephan's present work spans three critical areas: decision automation (turning legal rules into fast, transparent systems), rule-based AI (supporting human lawyers with explainable reasoning), and law as code (drafting legislation that's both human- and machine-readable from day one). A small sketch of what a rule rendered as explainable code can look like follows below.

Some of our highlights from the conversation:
- The transparency imperative: "I would never trust an LLM with a legal process because it's confabulating," Stephan declares, highlighting why the Rulemapping approach prioritises explainable AI over black-box solutions. Their system lets human decision-makers see exactly how the AI reached its conclusions, a "zoom in, zoom out" process that mirrors how lawyers naturally think.
- Democracy-first technology: unlike Silicon Valley's "move fast and break things" mentality, Stephan advocates for keeping humans in the loop even when AI becomes more accurate: "I think it's very important for trust in the legal system and therefore in a democratic system that there are human beings, even if they make worse decisions."
- Access to justice at scale: through real-world deployments like processing 500,000 diesel emission scandal cases and serving as Europe's first certified Digital Services Act dispute resolution body, Rulemapping demonstrates how thoughtful automation can make legal systems accessible to everyone, not just those who can afford lawyers.

We also explore the behavioural risks of over-relying on automated systems, the potential for "law as code" to improve democratic participation, and Stephan's vision of embedded law that serves citizens rather than bureaucracy.

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for more of the same.
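To give a flavour of the "law as code" idea, here is a minimal sketch of a single, invented eligibility rule expressed as an executable function that records every condition it evaluates, so a human reviewer can trace how the conclusion was reached. It is a generic illustration of rule-based, explainable decision logic, not the Rulemapping Group's notation or engine.

```python
# Generic "rule as code" sketch with an explanation trace - invented example,
# not the Rulemapping Group's actual notation or engine.
from dataclasses import dataclass, field

@dataclass
class Decision:
    eligible: bool
    trace: list[str] = field(default_factory=list)  # conditions checked, in order

def assess_compensation_claim(vehicle_affected: bool,
                              purchased_before_disclosure: bool,
                              claim_within_limitation_period: bool) -> Decision:
    """Evaluate an invented compensation rule, recording each step for review."""
    d = Decision(eligible=False)
    d.trace.append(f"Vehicle within affected class: {vehicle_affected}")
    d.trace.append(f"Purchased before public disclosure: {purchased_before_disclosure}")
    d.trace.append(f"Claim filed within limitation period: {claim_within_limitation_period}")
    d.eligible = (vehicle_affected
                  and purchased_before_disclosure
                  and claim_within_limitation_period)
    d.trace.append(f"Conclusion: {'eligible' if d.eligible else 'not eligible'}")
    return d

result = assess_compensation_claim(True, True, False)
print("\n".join(result.trace))
```

The trace is the point: a human decision-maker can "zoom in" on any step of the reasoning rather than trusting a black box.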

Sep 16, 2025 • 39min
Building A Scalable Privacy Function That Matters with Ben Martin
We catch up with Ben Martin, the former Director of Privacy at Trustpilot and author of "GDPR for Startups," who's currently living his best life somewhere in the Estonian wilderness with a camper van, a fishing rod, and blessed freedom from subject access requests. Having built privacy programs at high-growth companies like Trustpilot, Ovo Energy, and King Digital Entertainment, Ben brings a refreshingly practical perspective to privacy law that goes way beyond compliance theatre. From his sabbatical perch in the Nordics, he reflects on everything from why GDPR hasn't quite delivered its promised outcomes to how privacy lawyers are uniquely positioned to lead AI governance.

What we cover:
- The sabbatical chronicles: Ben's epic Nordic adventure and why stepping away from work sometimes gives you the clearest perspective on it
- Privacy program building: moving from compliance theatre to business enablement, and why good privacy programs start with genuine curiosity about products
- GDPR reality check: why the regulation might not yet have delivered its intended outcomes, and the types of privacy lawyers and approaches Ben sees in practice
- AI governance evolution: how privacy professionals are naturally stepping into AI oversight roles and what new skills they need to develop
- Technical literacy: the importance of understanding what your business actually builds, and Ben's practical approach to learning complex technical concepts

Key References:
- GDPR for Startups - Ben's practical guide to building privacy programs in high-growth companies
- Field Fisher Privacy Newsletter - a legal developments summary Ben recommends for staying current
- Hard Fork Podcast - Ben's go-to for broad tech and AI developments
- Lovable - the AI coding platform Ben's been experimenting with to build his habit tracker (and recruit his girlfriend as user number one)

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.


