

Pondering AI
Kimberly Nevala, Strategic Advisor - SAS
How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
Episodes

Jan 7, 2026 • 58min
An AI Assessment with Chris Marshall
Dr. Chris Marshall analyzes AI from all angles including market dynamics, geopolitical concerns, workforce impacts, and what staying the course with agentic AI requires.

Chris and Kimberly discuss his journey from theoretical physics to analytic philosophy; AI as an economic and geopolitical concern; the rise of sovereign AI; scale economies; market bubbles and expectation gaps; the AI value horizon; why agentic AI is harder than GenAI; calibrating risk and justifying trust; expertise and the workforce; not overlooking Rodney Dangerfield; foundational elements for success; betting on AIOps; and acting in teams.

Dr. Chris L Marshall is a Vice President at IDC Asia/Pacific with responsibility for industry insights, data, analytics and AI. A former partner and executive at companies such as IBM, KPMG, Oracle, FIS, and UBS, Chris’s mission is to translate innovative technologies into industry insights and business value for the digital economy.

Related Resources
Data and AI Impact Report: The Trust Imperative (IDC Research)

A transcript of this episode is here.

Dec 24, 2025 • 24min
Perspectives and Predictions: Looking Back at 2025 and Forward to 2026
A retrospective sampling of ideas and questions our illustrious guests gifted us in 2025, alongside some glad and not so glad tidings (ok, predictions) for AI in 2026.

In this episode we revisit insights from our guests and, perhaps, introduce those you may have missed along the way. Select guests provide sparky takes on what may happen in 2026.

Host Note: I desperately wanted to use the word prognostication in reference to the latter segment. But although the word sounds cool, it implies a level of mysticism entirely out of keeping with the informed opinions these guests have proffered. So, predictions it is.

A transcript of this episode is here.

Dec 10, 2025 • 57min
An Environmental Grounding with Masheika Allgood
Masheika Allgood delineates good AI from GenAI, outlines the environmental imprint of hyperscale data centers, and emphasizes that AI success depends on the why and the data.

Masheika and Kimberly discuss her path from law to AI; AI as embodied infrastructure; forms of beneficial AI; whether the GenAI math maths; narratives underpinning AI; the physical imprint of hyperscale data centers; the fallacy of closed loop cooling; who pays for electrical capacity; enabling community dialogue; starting with why in AI product design; AI as a data infrastructure play; and staying positive and finding the thing you can do.

Masheika Allgood is an AI Ethicist and Founder of AllAI Consulting. She is a well-known advocate for sustainable AI development and a contributor to the IEEE P7100 Standard for Measurement of Environmental Impacts of Artificial Intelligence Systems.

Related Resources
Taps Run Dry Initiative (Website)
Data Center Advocacy Toolkit (Website)
Eat Your Frog (Substack)
AI Data Governance, Compliance, and Auditing for Developers (LinkedIn Learning)
A Mind at Play: How Claude Shannon Invented the Information Age (Referenced Book)

A transcript of this episode is here.

Nov 26, 2025 • 53min
Your Digital Twin Is Not You with Kati Walcott
Kati Walcott differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.

Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.

Kati Walcott is the Founder and Chief Technology Officer at Synovient. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.

Related Resources
The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior (LinkedIn Article)

A transcript of this episode is here.

Nov 12, 2025 • 52min
No Community Left Behind with Paula Helm
Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers, not language alone.

Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others’ knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.

Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.

Related Resources
Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6

A transcript of this episode is here.

Oct 29, 2025 • 52min
What AI Values with Jordan Loewen-Colón
Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.

Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; the flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone’s radar; failures of policy and regulation; positive signals; and getting educated and fostering the best AI has to offer.

Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab, which is dedicated to ensuring AI serves the public good and not just private gain.

Related Resources
HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication

A transcript of this episode is here.

Oct 15, 2025 • 49min
Agentic Insecurities with Keren Katz
Keren Katz exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures.

Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI Security and Governance report; ransomware in 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; and proactive AI/agentic security policies and incident response plans.

Keren Katz is Senior Group Manager of Threat Research, Product Management and AI at Tenable and a contributor at both the Open Worldwide Application Security Project (OWASP) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.

Related Resources
The Silent Breach: Why Agentic AI Demands New Oversight (article)
State of Agentic AI Security and Governance (whitepaper): https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/
The LLM Top 10: https://genai.owasp.org/llm-top-10/

A transcript of this episode is here.

Oct 1, 2025 • 51min
To Be or Not to Be Agentic with Maximilian Vogel
Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.

Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule, not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs, not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; and AI agents as a support team and the implications for human work.

Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.

Related Resources
Medium: https://medium.com/@maximilian.vogel

A transcript of this episode is here.

Sep 17, 2025 • 54min
The Problem of Democracy with Henrik Skaug Sætra
Henrik Skaug Sætra considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and the role AI plays in our collective human society.

Henrik and Kimberly discuss AI’s impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google’s experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.

Henrik Skaug Sætra is an Associate Professor of Sustainable Digitalisation and Head of the Technology and Sustainable Futures research group at the University of Oslo. He is also the CEO of Pathwais.eu, connecting strategy, uncertainty, and action through scenario-based risk management.

Related Resources
Google Scholar Profile: https://scholar.google.com/citations?user=pvgdIpUAAAAJ&hl=en
How to Save Democracy from AI (Book – Norwegian): https://www.norli.no/9788202853686
AI for the Sustainable Development Goals (Book): https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063
Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL

A transcript of this episode is here.

Aug 20, 2025 • 47min
Generating Safety Not Abuse with Dr. Rebecca Portnoff
Dr. Rebecca Portnoff generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse.

Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn’s Safety by Design initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.

Dr. Rebecca Portnoff is the Vice President of Data Science at Thorn, a non-profit dedicated to protecting children from sexual abuse. Read Thorn’s seminal Safety by Design paper, bookmark the Research Center to stay updated, and support Thorn’s critical work by donating here.

Related Resources
Thorn’s Safety by Design Initiative (News): https://www.thorn.org/blog/generative-ai-principles/
Safety by Design Progress Reports: https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/
Thorn + SIO AIG-CSAM Research (Report): https://cyber.fsi.stanford.edu/io/news/ml-csam-report

A transcript of this episode is here.


