
Pondering AI
How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
Latest episodes

Apr 16, 2025 • 54min
Regulating Addictive AI with Robert Mahari
Robert Mahari examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn’t negate accountability; AI’s negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research. Robert Mahari is a JD-PhD researcher at the MIT Media Lab and Harvard Law School, where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs. A transcript of this episode is here. Additional Resources: The Allure of Addictive Intelligence (article): https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/ Robert Mahari (website): https://robertmahari.com/

Apr 2, 2025 • 43min
AI Literacy for All with Phaedra Boinodiris
Phaedra Boinodiris minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers. Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn’t enough; and the hard work required to develop good AI. Phaedra Boinodiris is IBM’s Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all. A transcript of this episode is here. Additional Resources: Phaedra’s Website - https://phaedra.ai/ The Future World Alliance - https://futureworldalliance.org/

Mar 19, 2025 • 53min
Auditing AI with Ryan Carrier
Ryan Carrier trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective. Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets. A transcript of this episode is here. Ryan Carrier is the Executive Director of ForHumanity, a non-profit organization improving AI outcomes through increased accountability and oversight.

Mar 5, 2025 • 51min
Ethical by Design with Olivia Gambelin
Olivia Gambelin, a leading AI ethicist and founder of Ethical Intelligence, sheds light on the fusion of ethics and technology. She passionately discusses philogagging and the dangers of contrasting humans with AI, advocating for a values-driven approach in AI development. The conversation touches on cultivating curiosity, accountability in tech, and the importance of emotional intelligence and creativity in humans. Olivia also introduces the Values Canvas as a tool for aligning organizational values with ethical AI practices, inspiring innovation that reflects human intentions.

Feb 19, 2025 • 46min
The Nature of Learning with Helen Beetham
Helen Beetham, an influential educator and consultant on digital education, discusses the future of higher education in the context of AI. She challenges the traditional purposes of learning and highlights the need for diversity in course offerings. Helen critiques the misconception of AI as a panacea, emphasizing the importance of human interaction and critical digital literacy. The conversation also addresses the disparities in education systems and advocates for adaptable teaching methods that recognize students as active participants in their learning journey.

Feb 5, 2025 • 47min
Ethics for Engineers with Steven Kelts
Steven Kelts engages engineers in ethical choice, enlivens training with role-playing, exposes organizational hazards and separates moral qualms from a duty to care. Steven and Kimberly discuss Ashley Casovan’s inspiring query; the affirmation allusion; students as stochastic parrots; when ethical sophistication backfires; limits of ethics review boards; engineers and developers as core to ethical design; assuming people are good; 4 steps of ethical decision making; inadvertent hotdog theft; organizational disincentives; simulation and role-playing in ethical training; avoiding cognitive overload; reorienting ethical responsibility; guns, ethical qualms and care; and empowering engineers to make ethical choices. Steven Kelts is a lecturer in Princeton’s University Center for Human Values (UCHV) and affiliated faculty in the Center for Information Technology Policy (CITP). Steve is also an ethics advisor to the Responsible AI Institute and Director of All Tech is Human’s Responsible University Network. Additional Resources: Princeton Agile Ethics Program: https://agile-ethics.princeton.edu CITP Talk 11/19/24: Agile Ethics Theory and Evidence Oktar, Lombrozo et al: Changing Moral Judgements 4-Stage Theory of Ethical Decision Making: An Introduction Enabling Engineers through “Moral Imagination” (Google) A transcript of this episode is here.

Jan 22, 2025 • 46min
Righting AI with Susie Alegre
Susie Alegre, an acclaimed international human rights lawyer and author, champions the prioritization of human rights in the age of AI. She discusses the critical intersection of AI and the Universal Declaration of Human Rights, advocating for legal protections and access to justice. The conversation delves into the ethical minefield of AI regulation, the dangers of companion AI, and the implications for human relationships. Alegre also highlights the need for creativity and cultural heritage protection, urging society to prioritize people over technology.

Jan 8, 2025 • 59min
AI Myths and Mythos with Eryk Salvaggio
Eryk Salvaggio articulates myths animating AI design, illustrates the nature of creativity and generated media, and artfully reframes the discourse on GenAI and art. Eryk joined Kimberly to discuss myths and metaphors in GenAI design; the illusion of control; if AI saves time and what for; not relying on futuristic AI to solve problems; the fallacy of scale; the dehumanizing narrative of human equivalence; positive biases toward AI; why asking ‘is the machine creative’ misses the mark; creative expression and meaning making; what AI generated art represents; distinguishing archives from datasets; curation as an act of care; representation and context in generated media; the Orwellian view of mass surveillance as anonymity; complicity and critique of GenAI tools; abstraction and noise; and what we aren’t doing when we use GenAI. Eryk Salvaggio is a new media artist, Visiting Professor in Humanities, Computing and Design at the Rochester Institute of Technology, and an Emerging Technology Research Advisor at the Siegel Family Endowment. Eryk is also a researcher on the AI Pedagogies Project at Harvard University’s metaLab and lecturer on Responsible AI at Elisava Barcelona School of Design and Engineering. Additional Resources: Cybernetic Forests: mail.cyberneticforests.com The Age of Noise: https://mail.cyberneticforests.com/the-age-of-noise/ Challenging the Myths of Generative AI: https://www.techpolicy.press/challenging-the-myths-of-generative-ai/ A transcript of this episode is here.

Dec 18, 2024 • 47min
Challenging AI with Geertrui Mieke de Ketelaere
Geertrui Mieke de Ketelaere reflects on the uncertain trajectory of AI, whether AI is socially or environmentally sustainable, and using AI to become good ancestors. Mieke joined Kimberly to discuss the current trajectory of AI; uncertainties created by current AI applications; the potent intersection of humanlike AI and heightened social/personal anxiety; Russian nesting dolls (matryoshka) as an analogy for AI systems; challenges with open source AI; the current state of public literacy and regulation; the Safe AI Companion Collective; social and environmental sustainability; expanding our POV beyond human intelligence; and striving to become good ancestors in our use of AI and beyond. A transcript of this episode is here. Geertrui Mieke de Ketelaere is an engineer, strategic advisor and Adjunct Professor of AI at Vlerick Business School focused on sustainable, ethical, and trustworthy AI. A prolific author, speaker and researcher, Mieke is passionate about building bridges between business, research and government in the domain of AI. Learn more about Mieke’s work here: www.gmdeketelaere.com

Dec 4, 2024 • 48min
Safety by Design with Vaishnavi J
Vaishnavi J respects youth, advises considering the youth experience in all digital products, and asserts age-appropriate design is an underappreciated business asset. Vaishnavi joined Kimberly to discuss: the spaces youth inhabit online; the four pillars of safety by design; age-appropriate design choices; kids’ unique needs and vulnerabilities; what both digital libertarians and abstentionists get wrong; why great experiences and safety aren’t mutually exclusive; how younger cohorts perceive harm; centering youth experiences; business benefits of age-appropriate design; KOSPA and the duty of care; implications for content policy and product roadmaps; the youth experience as digital table stakes and an engine of growth. A transcript of this episode is here. Vaishnavi J is the founder and principal of Vyanams Strategies (VYS), helping companies, civil society, and governments build healthier online communities for young people. VYS leverages extensive experience at leading technology companies to develop tactical product and policy solutions for child safety and privacy, ranging from product guidance and content policies to operations workflows, trust & safety strategies, and organizational design. Additional Resources: Monthly Youth Tech Policy Brief: https://quire.substack.com