Pondering AI

Kimberly Nevala, Strategic Advisor - SAS
Nov 26, 2025 • 53min

Your Digital Twin Is Not You with Kati Walcott

Kati Walcott differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.

Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.

Kati Walcott is the Founder and Chief Technology Officer at Synovient. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.

Related Resources
The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior (LinkedIn Article)
A transcript of this episode is here.
Nov 12, 2025 • 52min

No Community Left Behind with Paula Helm

Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone.

Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others’ knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.

Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.

Related Resources
Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6
A transcript of this episode is here.
Oct 29, 2025 • 52min

What AI Values with Jordan Loewen-Colón

Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.

Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone’s radar; failures of policy and regulation; positive signals; and getting educated and fostering the best AI has to offer.

Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab, which is dedicated to ensuring AI serves the public good and not just private gain.

Related Resources
HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication
A transcript of this episode is here.
Oct 15, 2025 • 49min

Agentic Insecurities with Keren Katz

Keren Katz exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures.

Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI Security and Governance report; ransomware in 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; and proactive AI/agentic security policies and incident response plans.

Keren Katz is Senior Group Manager of Threat Research, Product Management and AI at Tenable and a contributor at both the Open Worldwide Application Security Project (OWASP) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.

Related Resources
Article: The Silent Breach: Why Agentic AI Demands New Oversight
State of Agentic AI Security and Governance (whitepaper): https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/
The LLM Top 10: https://genai.owasp.org/llm-top-10/
A transcript of this episode is here.
Oct 1, 2025 • 51min

To Be or Not to Be Agentic with Maximilian Vogel

Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.

Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule, not a unicorn; establishing confidence and accuracy; co-designing with business/domain experts; why 100% of anything is not the goal; focusing on KPIs, not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; and AI agents as a support team and implications for human work.

Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.

Related Resources
Medium: https://medium.com/@maximilian.vogel
A transcript of this episode is here.
Sep 17, 2025 • 54min

The Problem of Democracy with Henrik Skaug Sætra

Henrik Skaug Sætra considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society.

Henrik and Kimberly discuss AI’s impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google’s experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.

Henrik Skaug Sætra is an Associate Professor of Sustainable Digitalisation and Head of the Technology and Sustainable Futures research group at the University of Oslo. He is also the CEO of Pathwais.eu, connecting strategy, uncertainty, and action through scenario-based risk management.

Related Resources
Google Scholar Profile: https://scholar.google.com/citations?user=pvgdIpUAAAAJ&hl=en
How to Save Democracy from AI (Book, Norwegian): https://www.norli.no/9788202853686
AI for the Sustainable Development Goals (Book): https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063
Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL
A transcript of this episode is here.
Aug 20, 2025 • 47min

Generating Safety Not Abuse with Dr. Rebecca Portnoff

Dr. Rebecca Portnoff generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse.

Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn’s Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.

Dr. Rebecca Portnoff is the Vice President of Data Science at Thorn, a non-profit dedicated to protecting children from sexual abuse. Read Thorn’s seminal Safety by Design paper, bookmark the Research Center to stay updated and support Thorn’s critical work by donating here.

Related Resources
Thorn’s Safety by Design Initiative (News): https://www.thorn.org/blog/generative-ai-principles/
Safety by Design Progress Reports: https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/
Thorn + SIO AIG-CSAM Research (Report): https://cyber.fsi.stanford.edu/io/news/ml-csam-report
A transcript of this episode is here.
Aug 6, 2025 • 51min

Inclusive Innovation with Hiwot Tesfaye

Hiwot Tesfaye disputes the notion of AI givers and takers, challenges innovation as an import, highlights untapped global potential, and charts a more inclusive course.

Hiwot and Kimberly discuss the two camps myth of inclusivity; finding innovation everywhere; meaningful AI adoption and diffusion; limitations of imported AI; digital colonialism; low-resource languages and illiterate LLMs; an Icelandic success story; situating AI in time and place; employment over automation; capacity and skill building; and skeptical delight and making the case for multi-lingual, multi-cultural AI.

Hiwot Tesfaye is a Technical Advisor in Microsoft’s Office of Responsible AI and a Loomis Council Member at the Stimson Center, where she helped launch the Global Perspectives: Responsible AI Fellowship.

Related Resources
#35 Navigating AI: Ethical Challenges and Opportunities, a conversation with Hiwot Tesfaye
A transcript of this episode is here.
Jul 23, 2025 • 52min

The Shape of Synthetic Data with Dietmar Offenhuber

Dietmar Offenhuber reflects on synthetic data’s break from reality, relates meaning to material use, and embraces data as a speculative and often non-digital artifact.

Dietmar and Kimberly discuss data as a representation of reality; divorcing content from meaning; data settings vs. data sets; synthetic data quality and ground truth; data as a speculative artifact; the value in noise; data materiality and accountability; rethinking data literacy; Instagram data realities; non-digital computing and going beyond statistical analysis.

Dietmar Offenhuber is a Professor and Department Chair of Art+Design at Northeastern University. Dietmar researches the material, sensory and social implications of environmental information and evidence construction.

Related Resources
Shapes and Frictions of Synthetic Data (paper): https://journals.sagepub.com/doi/10.1177/20539517241249390
Autographic Design: The Matter of Data in a Self-Inscribing World (book): https://autographic.design/
Reservoirs of Venice (project): https://res-venice.github.io/
Website: https://offenhuber.net/
A transcript of this episode is here.
Jul 9, 2025 • 56min

A Question of Humanity with Pia Lauritzen, PhD

Pia Lauritzen questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, if humans can be optimized and why thinking is required.

Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger’s eerily precise predictions, the skill of critical thinking, and why it’s not really about the questions at all.

Pia Lauritzen, PhD, is a philosopher, author and tech inventor asking big questions about tech and transformation. As the CEO and Founder of Qvest and a Thinkers50 Radar Member, Pia is on a mission to democratize the power of questions.

Related Resources
Questions (Book): https://www.press.jhu.edu/books/title/23069/questions
TEDx Talk: https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions
Question Jam: www.questionjam.com
Forbes Column: forbes.com/sites/pialauritzen
LinkedIn Learning: www.linkedin.com/learning/pialauritzen
Personal Website: pialauritzen.dk
A transcript of this episode is here.
