

RegulatingAI Podcast: Innovate Responsibly
Sanjay Puri
Welcome to RegulatingAI: Innovate Responsibly, the podcast hosted by AI regulation expert Sanjay Puri. Sanjay is a pivotal leader at the intersection of technology, policy, and entrepreneurship, and on this podcast he explores the intricate landscape of artificial intelligence governance.
You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world.
Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!
Episodes

Dec 18, 2025 • 41min
Camille Carlton on the Hidden Dangers of Chatbots & AI Governance | RegulatingAI Podcast
In this episode of the RegulatingAI Podcast, we speak with Camille Carlton, Director of Policy at the Center for Humane Technology, a leading voice in AI regulation, chatbot safety, and public-interest technology. Camille is directly involved in landmark lawsuits against CharacterAI and OpenAI CEO Sam Altman, placing her at the forefront of debates around AI accountability, AI companions, and platform liability. This conversation examines the mental-health risks of AI chatbots, the rise of AI companions, and why certain conversational systems may pose public-health concerns, especially for younger and socially isolated users. Camille also breaks down how AI governance frameworks differ across U.S. states, Congress, and the EU AI Act, and outlines what practical, enforceable AI policy could look like in the years ahead.
Key Takeaways:
AI Chatbots as a Public-Health Risk: Why AI companions may intensify loneliness, emotional dependency, and psychological harm, raising urgent mental-health and safety concerns.
Regulating Chatbots vs. Foundation Models: Why high-risk conversational AI systems require different regulatory treatment than general-purpose LLMs and foundation models.
Global AI Governance Lessons: What the EU AI Act, U.S. states, and Congress can learn from each other when designing balanced, risk-based AI regulation.
Transparency, Design & Accountability: How a light-touch but firm AI policy approach can improve transparency, platform accountability, and data access without slowing innovation.
Why AI Personhood Is a Dangerous Idea: How framing AI systems as “persons” undermines liability, weakens accountability, and complicates enforcement.
Subscribe to RegulatingAI for expert conversations on AI governance, responsible AI, technology policy, and the future of regulation.
#RegulatingAIpodcast #camillecarlton #AIGovernance #ChatbotSafety #Knowledgenetworks #AICompanions
Resources Mentioned:
https://www.linkedin.com/in/camille-carlton
https://www.humanetech.com/
https://www.humanetech.com/substack
https://www.humanetech.com/podcast
https://www.humanetech.com/landing/the-ai-dilemma
https://centerforhumanetechnology.substack.com/p/ai-product-liability
https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai

Dec 10, 2025 • 36min
Karin Stephan on Building Emotionally Intelligent Technology | RegulatingAI Podcast
In this episode of RegulatingAI, host Sanjay Puri speaks with Karin Andrea-Stephan, COO & Co-founder of Earkick, an AI-powered mental health platform redefining how technology supports emotional well-being. With a career that spans music, psychology, and digital innovation, Karin shares how she’s building privacy-first AI tools designed to make mental health support accessible, especially for teens navigating loneliness and emotional stress. Together, they unpack the delicate balance between AI innovation and human empathy, the ethics of AI chatbots for youth, and what it really takes to design technology that heals instead of harms.
Key Takeaways:
• AI and Empathy: Why emotional intelligence, not algorithms, must guide the future of mental health tech.
• Teens and Trust: How technology exploits belonging, and what must change to rebuild digital trust.
• Regulating Responsibly: Why the answer isn’t bans, but thoughtful, transparent policy shaped with youth input.
• Privacy by Design: How ethical AI can protect privacy without compromising impact.
• Bridging the Global Mental Health Gap: Why collaboration and compassion matter as much as code.
If this conversation made you rethink the relationship between AI and mental health, hit like, share, and subscribe to RegulatingAI for more insights on building technology that serves humanity.
Resources Mentioned: https://www.linkedin.com/in/karinstephan/

Dec 5, 2025 • 52min
The Human Side of Machine Intelligence: Jeff McMillan on AI at Morgan Stanley – RegulatingAI Podcast
In this episode of RegulatingAI, host Sanjay Puri sits down with Jeff McMillan, Head of Firmwide Artificial Intelligence at Morgan Stanley. With over 25 years of experience leading digital transformation and responsible AI adoption in one of the world’s most regulated industries, Jeff shares how large enterprises can harness generative AI responsibly, striking the right balance between innovation, governance, and ethics.
Key Takeaways:
AI Governance: Why collaboration across business, legal, and compliance is the foundation of effective AI oversight.
Human-in-the-Loop: Morgan Stanley’s core principle of keeping humans accountable and central in every AI decision.
Education First: Jeff’s golden rule: spend 90% of your AI budget training people before building tech.
AI as a Risk Mitigator: How AI can actually strengthen compliance and risk management when designed right.
Culture Over Code: Why successful AI transformation is less about algorithms and more about mindset, structure, and leadership.
If you enjoyed this conversation, don’t forget to like, share, and subscribe to RegulatingAI for more insights from global leaders shaping the future of responsible AI.
#RegulatingAI #SanjayPuri #MorganStanley #JeffMcmillan #AIGovernance #AILeadership #EnterpriseAI
Resources Mentioned:
https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/
Recent podcast appearance: https://podcasts.apple.com/fr/podcast/jeff-mcmillan-how-morgan-stanley-deploys-ai-at-scale/id1819622546?i=1000714786849
Morgan Stanley’s external-facing page on the firm’s AI work: https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team

Nov 27, 2025 • 28min
Trump’s AI Executive Order vs California: Senator Scott Wiener Responds | RegulatingAI Podcast
In this episode of the RegulatingAI Podcast, we host California State Senator Scott Wiener, one of the most influential policymakers shaping the future of AI regulation, AI safety, and transparency standards in the United States. As President Donald Trump’s new AI executive order pushes for federal control over AI regulation, Senator Wiener explains why states like California must retain the power to regulate artificial intelligence, and how California’s laws could influence global AI governance.
Senator Wiener is the author of:
• SB 1047 – California’s proposed liability bill for high-risk AI systems
• SB 53 – California’s new AI transparency law, now in effect
We dive deep into:
• The battle between federal vs. state AI regulation
• Why California remains the frontline of AI governance
• The real impact of Trump’s AI executive order
• Growing risks of AI-driven job displacement
• How governments can balance innovation with public safety
• The future of responsible and accountable AI development
🔑 KEY TAKEAWAYS
1. California’s Policy Power: California’s tech dominance allows it to shape national and global AI standards even when Congress stalls.
2. SB 1047 vs. SB 53 Explained: SB 1047 proposed legal liability for dangerous AI systems, while SB 53, now law, requires AI companies to publicly disclose safety and risk practices.
3. Why Transparency Won: After SB 1047 was vetoed, California shifted toward transparency as a regulatory first step through SB 53.
4. AI Job Disruption Is Accelerating: Senator Wiener warns that workforce displacement from AI is happening faster than expected.
5. A Realistic Middle Path: He advocates for smart AI guardrails, avoiding both overregulation and total deregulation.
If you found this conversation valuable, don’t forget to like, subscribe, and share to stay updated on global conversations shaping the future of AI governance.
Resources Mentioned: https://www.linkedin.com/company/ascet-center-of-excellence https://www.linkedin.com/in/james-h-dickerson-phd

Nov 20, 2025 • 25min
#141 Inside AI Policy with Congresswoman Sarah McBride | RegulatingAI Podcast with Sanjay Puri
In this episode of RegulatingAI, host Sanjay Puri sits down with Congresswoman Sarah McBride of Delaware, a member of the U.S. Congressional AI Caucus, to talk about how America can lead responsibly in the global AI race. From finding the right balance between innovation and regulation to making sure AI truly benefits workers and small businesses, Rep. McBride shares her human-centered vision for how AI can advance democracy, fairness, and opportunity for everyone.
Here are 5 key takeaways from the conversation:
💡 Finding the “Goldilocks” Zone: How to strike that just-right balance where AI regulation protects people without holding back innovation.
🏛️ Federal vs. State Regulation: Why McBride believes the U.S. needs a unified national AI framework, but one that still values state leadership and flexibility.
👩‍💻 AI and the Workforce: What policymakers can do to make sure AI augments human talent rather than replacing it.
🌎 Democracy vs. Authoritarianism: The U.S.’s role in leading with values and shaping AI that reflects openness, ethics, and democracy.
🔔 Delaware’s Legacy of Innovation: How Delaware’s collaborative approach to growth can be a model for responsible tech leadership.
If you enjoyed this episode, don’t forget to like, comment, share, and subscribe to RegulatingAI for more conversations with global policymakers shaping the future of artificial intelligence.
Resources Mentioned:
mcbride.house.gov
https://mcbride.house.gov/about

Nov 7, 2025 • 16min
Small Nations & Big AI Ideas
Armenia is quietly becoming one of the world's most interesting AI hubs, and you probably haven't heard about it yet.
In this episode, I sit down with Armenia's Minister of Finance to discuss:
~ Why Nvidia is building a massive AI factory in Armenia
~ How a country of 3 million is attracting Synopsys, Yandex, and major tech companies
~ The secret advantage: abundant energy + Soviet-era engineering talent
~ Is the AI investment boom a bubble or the real deal?
~ How AI is already being used in tax collection and government services
~ The peace agreement with Azerbaijan and what it means for tech investment
~ Why the "Middle Corridor" could make Armenia the next tech destination
The Minister doesn't think AI investment is a bubble; he thinks we're just getting started. He shares honest insights about job displacement, efficiency gains, and why human connection still matters in an AI-driven world.
About the Guest:
Armenia's Minister of Finance is an economist who rose from bank accounting to leading the nation's fiscal policy. He oversees Armenia's economic transformation during a pivotal era of digital ambitions and AI development.
🎙️ Subscribe for conversations with global leaders at the intersection of AI, policy, and innovation
💬 Leave a comment: What surprised you most about Armenia's AI strategy?
🔔 Hit the bell to catch our next episode

Oct 30, 2025 • 40min
Why the World Needs a UN AI Agency with Dr. Mark Robinson | RegulatingAI Podcast
In this episode of RegulatingAI, host Sanjay Puri welcomes Dr. Mark Robinson — Senior Science Diplomacy Advisor, Oxford Martin AI Governance Initiative, University of Oxford. Drawing on decades of experience leading projects like ITER and the European Southern Observatory, Dr. Robinson shares his bold vision: establishing an international AI agency under the United Nations. Together, we explore the urgent need for global AI governance, parallels with past scientific collaborations, and the challenges of balancing innovation, safety, and sovereignty. 5 Key Takeaways Why massive global science collaborations like ITER offer lessons for AI governance. The case for a UN-backed International AI Agency to coordinate regulation. How U.S.–China cooperation could unlock a global framework for AI oversight. The risks of leaving governance solely to fragmented national initiatives and big tech. Why timing, leadership, and inclusivity (including the Global South) are critical to shaping AI’s future. If you found this conversation insightful, don’t forget to like, comment, and share — and subscribe to RegulatingAI for more global perspectives on building a trustworthy AI future. Resources Mentioned: https://iaia4life.org/ https://www.linkedin.com/in/mark-robinson-3594132b/

Oct 24, 2025 • 30min
Regulation Meets Revolution: Africa’s AI Story Ft. Dr. Nick Bradshaw | RegulatingAI Podcast
🎙 While global AI conversations are dominated by the US, China, and Europe, Africa is crafting its own path. Dr. Nick Bradshaw, Founder of the South African AI Association, joins us to discuss how the continent can build sovereign AI systems, retain talent, and shape regulation rooted in local realities.
From data sovereignty to the “brain drain” challenge, we explore what responsible AI looks like for Africa, and how regulation can drive innovation, not restrict it.
Resources Mentioned: https://www.linkedin.com/in/nickbradshaw/

Oct 15, 2025 • 27min
Governor Matt Meyer on Building America’s First AI-Ready State | RegulatingAI Podcast
In this episode of the RegulatingAI Podcast, host Sanjay Puri has an engaging conversation with Governor Matt Meyer, Delaware’s 76th Governor and a national leader in AI governance. Governor Meyer shares how Delaware is pioneering responsible AI through initiatives like the AI sandbox, the OpenAI workforce certification partnership, and efforts to safeguard democracy from deepfakes. This masterclass in state-led AI regulation explores how innovation and accountability can, and must, go hand in hand.
5 Key Takeaways:
AI as a Tool, Not Destiny: Governor Meyer emphasizes that AI’s value lies in how it improves lives, not in the technology itself.
First to Value, Not First to Hype: Delaware is piloting and scaling AI responsibly, ensuring guardrails before mass adoption.
Workforce First: With OpenAI certification programs, Delaware is leading in preparing workers and students for the AI-powered economy.
Balancing Innovation & Regulation: The state’s AI sandbox offers a safe testbed for companies to experiment responsibly.
Protecting Democracy & People: From tackling election deepfakes to ensuring job transitions, Meyer highlights human-centered governance.
If you found this conversation insightful, don’t forget to like, comment, share, and subscribe to the RegulatingAI Podcast for more expert perspectives on the future of AI.
Resources Mentioned:
https://www.linkedin.com/company/governor-delaware-matt-meyer/
https://governor.delaware.gov/
https://news.delaware.gov/2025/07/23/delaware-launches-bold-ai-sandbox-initiative-cementing-its-role-as-a-national-leader-in-responsible-tech-innovation/
https://news.delaware.gov/2025/09/04/delaware-first-state-in-the-nation-to-partner-with-openai-on-certification-program/

Oct 9, 2025 • 38min
Protecting Children from AI Exploitation with Attorney General Mike Hilgers | RegulatingAI Podcast
In this episode of RegulatingAI, Sanjay Puri speaks with Nebraska Attorney General Mike Hilgers, who is leading efforts to combat AI-enabled child exploitation.
You’ll learn:
Why AI-generated CSAM (child sexual abuse material) presents unprecedented risks
How Nebraska passed LB 383 to prohibit AI-generated CSAM
The challenges of prosecuting AI crimes compared to traditional crimes
Why bipartisan coalitions matter in AI governance
How innovation and child protection can coexist in law and policy
Hilgers also shares his perspective on the U.S.–China AI race and why legal frameworks must adapt to fast-moving technologies.
Resources Mentioned: https://www.linkedin.com/company/nebraska-department-of-justice


