
Law://WhatsNext: AI workflows, agents, governance and security
In a twist on what has probably become our “normal” programming, this episode features just the two of us in conversation. We explore the implications of technological progress - from the shift we’re contemplating from AI-infused linear workflows to fully agentic ones, to the risks and vulnerabilities baked into today’s LLM architectures. Essentially, it’s the kind of discussion we often have offline, brought into the open.
The following pieces ground our discussion:
From Linear AI-Infused Workflows to Fully Agentic - New Skills and Orchestration Challenges:
- Legal AI’s Future Is Railroads, But Speeding Up Canals Still Makes Sense For Now by Alex Herrity
- The Problem with Agentic AI in 2025 by Sangeet Paul Choudary - The original article featuring the canals vs railroads analogy that inspired Alex's piece
Prompt Injection Attacks & AI Governance:
- The Lethal Trifecta for AI Agents by Simon Willison - Defining the three dangerous elements that enable prompt injection attacks
- Prompt Injections as Far as the Eye Can See by Simon Willison - Johann Rehberger's "Month of AI Bugs" research demonstrating widespread prompt injection vulnerabilities
- I Accidentally Became a ChatGPT Surveillance Node by Juliana Jackson - The article Tom and Alex discuss revealing OpenAI's buggy infrastructure leaking private conversations
- ChatGPT Scrapes Google and Leaks Your Prompts by Quantable Analytics - Technical breakdown of the ChatGPT prompt leakage issue
If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for:
- Focused conversations with leading practitioners, technologists, and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis and visualisation of how AI is augmenting our potential
