RegulatingAI Podcast: Innovate Responsibly

Sanjay Puri
Aug 8, 2025 • 45min

Can Constitutional Law Protect Us From AI? | Prof. Raquel Brízida Castro | RegulatingAI Podcast

The RegulatingAI Podcast welcomes Prof. Raquel Brízida Castro to examine how Europe's AI regulatory framework measures up against core constitutional protections.

📌 Topics Covered:
~ The EU AI Act’s categorisation of risk – does it go far enough?
~ The collision between data sovereignty, latency, and user rights
~ Why current legal remedies like GDPR aren't enough for generative AI
~ Does the Brussels effect stand a chance against the Washington effect?
~ Will national courts lose relevance in the age of EU digital regulation?
~ Raquel's warning of a quiet constitutional revolution underway, and why citizen protection must evolve urgently

🎧 Watch Now: This conversation is vital for anyone navigating AI governance in democratic societies.

Resources Mentioned:
https://www.linkedin.com/in/raquel-a-br%C3%ADzida-castro-15317a105/

⏱️ Timestamps:
0:00 Introduction to the podcast and guest, Raquel Brízida Castro
2:21 Magnificent Introduction
2:58 The EU AI Act from a Constitutional Law Perspective
3:20 Constitutional Challenges and the Digital Social Democratic Rule of Law
5:59 New Fundamental Rights in the AI Age
8:27 The Right to Explainability: Rule of Law vs. Rule of Algorithm
11:34 Is the EU AI Act's Risk-Based Approach Adequate?
12:05 The Impact of AI on Fundamental Rights
14:52 Regulation vs. Bureaucracy and Self-Regulation
16:26 The Implementation of the AI Act and its Challenges
21:58 The EU vs. US Approach: Regulation vs. Innovation
23:55 The False Dilemma Between Regulation and Innovation
27:09 The Washington Effect
30:51 Implications for American Companies in Europe
31:49 Digital Sovereignty and the Problem of Latency
35:28 Constitutional Safeguards and Regulatory Overreach
35:40 The Primacy of European Law and the Role of Constitutional Courts
38:58 The Two-Year Moratorium on the EU AI Act
40:30 Lightning Round of Questions
43:24 Final Thoughts
Aug 6, 2025 • 21min

Trump's AI Action Plan Decoded: Fair Use, Export Controls & US-China Competition with Joshua Geltzer

🚨 BREAKING: Former Deputy White House Counsel's Latest Interview on Trump's AI Strategy

In this episode of the RegulatingAI Podcast, we sit down with Joshua Geltzer, who advised President Biden, to discuss the details behind America's new AI Action Plan. This is the definitive breakdown every tech executive, investor, and policymaker needs to watch.

🎯 CRITICAL TAKEAWAYS:
Why some states may LOSE federal AI funding
How fair use laws could save AI companies billions
The infrastructure revolution coming to your state
Export control politics that will reshape global tech
Why Trump chose to back open source

About the Guest:
Joshua Geltzer is a partner at WilmerHale focusing on AI, cybersecurity, and national security litigation. Until January 2025, he served as Deputy Assistant to the President, Deputy White House Counsel, and Legal Adviser to the National Security Council.

Resources Mentioned:
https://www.linkedin.com/in/joshua-geltzer-6209b3198/
https://www.wilmerhale.com/en/people/joshua-geltzer

⏱️ Timestamps:
0:00 Introduction to the podcast and guest Joshua Geltzer
4:29 Welcome to Regulating AI: The Podcast
5:44 The Three Pillars of the AI Action Plan
6:37 Fair Use, Training Data, and the Courts
8:17 Power, Land, and Permitting for Data Centers
10:19 Countering Synthetic Media and Deepfakes
11:45 The Effectiveness and Limitations of Export Controls
13:39 Leading International AI Governance While Prioritizing National Dominance
15:28 Federal-State Dynamics in AI Governance
19:00 The Open Source vs. Closed Model Debate
20:45 The Competitive Framing with China and National Security
22:54 Global AI Regulation and the Future
23:41 Concluding the discussion
Aug 4, 2025 • 34min

The Security Risks in America’s AI Action Plan – Rob T. Lee | RegulatingAI Podcast

In this episode of RegulatingAI, Sanjay speaks with Rob T. Lee, Chief AI Officer at the SANS Institute and advisor to the U.S. Foreign Intelligence Surveillance Court.

What you’ll learn:
Why Rob believes America’s AI systems are already under attack
How adversaries are leveraging generative AI without regulatory constraints
Why current cybersecurity approaches are inadequate for AI-based threats
The challenge of balancing speed with safety in federal AI deployments
Insights into critical gaps in open-source model evaluations

This conversation is a wake-up call for regulators, enterprise leaders, and anyone navigating AI implementation at scale.

Resources Mentioned:
Rob T. Lee, Chief of Research and Chief AI Officer, SANS Institute
https://www.linkedin.com/in/leerob/
Substack: https://robtlee73.substack.com/
X: https://x.com/robtlee
YouTube: https://www.youtube.com/@RobLee96
Jul 31, 2025 • 23min

Peter Sands & Sania Nishtar on Revolutionizing Global Health | AI for Good

In this compelling ‘AI for Good’ panel, moderator Sanjay Puri brings together two of the most influential voices in global health: Peter Sands, Executive Director of The Global Fund, and Sania Nishtar, CEO of Gavi, the Vaccine Alliance. Together, they dive deep into the transformative power of artificial intelligence in addressing some of the world’s most pressing health challenges. From improving disease surveillance and accelerating vaccine delivery to enhancing decision-making in underserved regions, this discussion highlights the real-world impact and ethical considerations of AI in global health.

Key topics covered:
How AI is being applied to strengthen health systems globally
Real-life examples of AI driving change in low- and middle-income countries
The role of public-private partnerships in scaling AI for health
Challenges around data, equity, and governance in AI adoption

Whether you're a policymaker, health professional, technologist, or simply interested in how AI can serve humanity, this conversation offers critical insights and bold visions for the future.

🔔 Don’t forget to like, comment, and subscribe for more discussions at the intersection of technology and social impact.
#AIforGood #GlobalHealth #PeterSands #SaniaNishtar

Resources Mentioned:
https://www.linkedin.com/in/sania-nishtar-bb2a8123a
https://www.linkedin.com/in/peter-sands-0808bb6b
Jul 18, 2025 • 7min

11,000 Attendees, 169 Countries: Inside AI for Good Summit 2025 | Frederic Werner | RegulatingAI Podcast

In this special episode of the Regulating AI Podcast, live from the AI for Good Summit in Geneva, host Sanjay Puri sits down with Frederic Werner, Chief of Strategy and Operations at AI for Good, to explore how the initiative has evolved into a global movement touching every corner of society.

🔍 Key topics discussed:
The origin and growth of AI for Good
Why AI for Good is more than just a summit: it’s a year-round platform
Building community and capacity through inclusivity and innovation
Engaging youth, startups, governments, and NGOs alike
AI for Good’s partnerships with 53 UN sister agencies

🧠 Whether you're in policy, tech, education, or just AI-curious, this episode will show how AI can be a force for equity and progress.

📢 Subscribe for more deep dives into AI policy, governance, and innovation.

Resources Mentioned:
https://www.linkedin.com/groups/8567748/
https://x.com/FredericWerner
https://www.linkedin.com/in/fredericwerner/
Jul 14, 2025 • 29min

How Salesforce Balances AI Innovation with Responsibility | Eric Loeb on Policy & Governance | RegulatingAI Podcast

Can responsibility, innovation, and success truly coexist in the age of AI? Salesforce's Eric Loeb believes they must, and shares how the company is putting that vision into action through agentic AI and values-led governance.

💡 You’ll learn:
· What agentic AI is and why it changes enterprise workflows
· Why AI agents should always augment, not replace, humans
· The role of internal governance, "job descriptions" for agents, and ethical oversight
· Why shared responsibility will define AI liability in the future
· How Salesforce integrates safety, trust, and innovation into every layer of its AI stack

📌 A rare look into how one of the world’s most respected tech companies handles AI governance.
#AIgovernance #AgenticAI #RegulatingAI

Resources Mentioned:
https://www.linkedin.com/in/eric-loeb-33a86b/
Jul 11, 2025 • 22min

How David Sinclair uses AI to reverse the effects of aging and develop life-extending drugs | RegulatingAI Podcast

Join us for a groundbreaking conversation with Harvard Professor David A. Sinclair, a global authority on aging and longevity. Live from the ITU AI for Good conference in Geneva, Dr. Sinclair explains how his lab is leveraging AI to identify molecules that may reverse the aging process.

In this episode:
How generative AI is transforming drug discovery timelines and costs
Real-world examples of AI identifying age-reversing molecules
The future of age-resetting gene therapies
Why Sinclair believes AI labs can now function like pharma companies
The urgent need for regulatory reform to accelerate innovation

📣 Support Sinclair’s research: friendsofsinclairlab.org
Know our guest: https://davidasinclair.com/
Read his book at: https://www.amazon.com/dp/0008380325
Listen to his podcast: http://www.youtube.com/@LifespanOfficial
Jul 10, 2025 • 35min

Why AI Degrees May Be Meaningless Without Certification – Dr. Kathleen Kramer, IEEE | RegulatingAI Podcast

Dr. Kathleen Kramer doesn’t hold back. As IEEE President and a renowned professor, she shares blunt truths on AI education and credentials in this powerful RegulatingAI episode from Geneva’s AI for Good Summit.

💥 In this episode:
Why saying "I have a master’s in AI" means nothing without recognized standards
The importance of grit, resilience, and doing the hard things in education
Why certifications, not degrees, are the future of AI talent validation
How IEEE's 141-year history positions it to shape tomorrow’s ethical AI
What it means to "advance technology for humanity" in a rapidly shifting workforce

This is a call to rethink how we educate, certify, and empower the next generation of AI leaders.

🎙️ Real talk. Real insight. Only on RegulatingAI.

Resources Mentioned:
https://www.ieee.org/kathleen-a-kramer
https://www.linkedin.com/company/ieee/posts/?feedView=all
https://www.facebook.com/IEEE.org
Jul 10, 2025 • 22min

How UNHCR Uses AI to Transform Refugee Services with Hovig Etyemezian | RegulatingAI Podcast

In this episode of the RegulatingAI Podcast, we speak to Hovig Etyemezian, Head of Innovation at UNHCR, the UN Refugee Agency. From fieldwork in Mosul to AI-powered systems in Geneva, Hovig shares a compelling narrative of innovation, ethics, and resilience in refugee services.

🎯 Key Takeaways:
How UNHCR uses AI to process refugee feedback at scale
Why chatbots, messaging apps, and call centers are critical digital tools
The balance between automation and “human in the loop” care
Refugee-led innovation programs and grassroots solutions
Ethical safeguards and the importance of not “parachuting” tech solutions

💬 A conversation that humanizes AI and shows how responsible innovation can restore dignity to displaced communities.

Resources Mentioned:
https://www.linkedin.com/in/hovig-etyemezian-9b33994/
Jul 4, 2025 • 49min

Nicholas Thompson on Open Source, China, and AI Power Games | RegulatingAI Podcast

In this episode of the RegulatingAI Podcast, host Sanjay Puri is joined by Nicholas Thompson, CEO of The Atlantic, to talk about one of the most pressing issues in AI today: the scraping of content and the future of journalism in an AI-first world.

✅ Topics covered:
The “original sin” of AI companies and scraped content
How The Atlantic is navigating AI disruption in publishing
Legal and ethical paths forward: lawsuits, licensing, and collaboration
Why Thompson thinks AI companies should drive traffic to journalism
A peek into The Atlantic's deal with OpenAI

🔍 Nicholas shares candid takes on balancing innovation and fairness, and what the future might look like if we don’t course-correct.

Resources Mentioned:
https://www.linkedin.com/in/nicholasxthompson/

⏱️ Timestamps:
00:00 - Podcast Highlights
02:56 - The "Original Sin" of AI Companies - Data Scraping & Compensation
05:04 - Media vs AI Companies: Finding Fair Value Exchange
08:19 - Recent Court Rulings: Anthropic & Meta Cases Analysis
12:19 - The Future of Search & Web Architecture
16:45 - Generational Impact: How Young People Consume Information
17:47 - Federal vs State AI Regulation Debate
21:47 - What AI Developers Actually Want from Regulation
22:05 - EU AI Act: Over-regulation Concerns
23:45 - Open Source vs Closed Source AI Models
24:43 - China's Open Source AI Strategy & US-China Relations
26:19 - Chip Export Restrictions: Effectiveness & Consequences
28:41 - US-China AI Cooperation Needs
29:08 - The "Job Apocalypse" Debate: Dario vs Jensen
32:48 - Government Role in AI Transition & Retraining
34:43 - The "First Rung" Problem: Entry-Level Jobs at Risk
35:51 - AI Medical Diagnosis: Outperforming Human Doctors
39:07 - AI Companionship: Solution or Danger for Loneliness?
41:10 - The "Westworld" Risk: AI-Powered Social Media Dystopia
42:46 - Key AI Thinkers: Audrey Tang & The Vatican's AI Paper
44:59 - Lightning Round: Quick Takes on AI's Future
