
Pondering AI

Latest episodes

Jan 22, 2025 • 46min

Righting AI with Susie Alegre

Susie Alegre, an acclaimed international human rights lawyer and author, champions the prioritization of human rights in the age of AI. She discusses the critical intersection of AI and the Universal Declaration of Human Rights, advocating for legal protections and access to justice. The conversation delves into the ethical minefield of AI regulation, the dangers of companion AI, and the implications for human relationships. Alegre also highlights the need for creativity and cultural heritage protection, urging society to prioritize people over technology.
Jan 8, 2025 • 59min

AI Myths and Mythos with Eryk Salvaggio

Eryk Salvaggio articulates the myths animating AI design, illustrates the nature of creativity and generated media, and artfully reframes the discourse on GenAI and art.

Eryk joined Kimberly to discuss: myths and metaphors in GenAI design; the illusion of control; whether AI saves time, and for what; not relying on futuristic AI to solve problems; the fallacy of scale; the dehumanizing narrative of human equivalence; positive biases toward AI; why asking 'is the machine creative?' misses the mark; creative expression and meaning making; what AI-generated art represents; distinguishing archives from datasets; curation as an act of care; representation and context in generated media; the Orwellian view of mass surveillance as anonymity; complicity and critique of GenAI tools; abstraction and noise; and what we aren't doing when we use GenAI.

Eryk Salvaggio is a new media artist, Visiting Professor in Humanities, Computing and Design at the Rochester Institute of Technology, and an Emerging Technology Research Advisor at the Siegel Family Endowment. Eryk is also a researcher on the AI Pedagogies Project at Harvard University's metaLab and a lecturer on Responsible AI at the Elisava Barcelona School of Design and Engineering.

Additional Resources:
Cybernetic Forests: mail.cyberneticforests.com
The Age of Noise: https://mail.cyberneticforests.com/the-age-of-noise/
Challenging the Myths of Generative AI: https://www.techpolicy.press/challenging-the-myths-of-generative-ai/

A transcript of this episode is here.
Dec 18, 2024 • 47min

Challenging AI with Geertrui Mieke de Ketelaere

Geertrui Mieke de Ketelaere reflects on the uncertain trajectory of AI, whether AI is socially or environmentally sustainable, and using AI to become good ancestors.

Mieke joined Kimberly to discuss: the current trajectory of AI; uncertainties created by current AI applications; the potent intersection of humanlike AI and heightened social/personal anxiety; Russian nesting dolls (matryoshka) as an analogy for AI systems; challenges with open-source AI; the current state of public literacy and regulation; the Safe AI Companion Collective; social and environmental sustainability; expanding our point of view beyond human intelligence; and striving to become good ancestors in our use of AI and beyond.

A transcript of this episode is here.

Geertrui Mieke de Ketelaere is an engineer, strategic advisor, and Adjunct Professor of AI at Vlerick Business School focused on sustainable, ethical, and trustworthy AI. A prolific author, speaker, and researcher, Mieke is passionate about building bridges between business, research, and government in the domain of AI. Learn more about Mieke's work here: www.gmdeketelaere.com
Dec 4, 2024 • 48min

Safety by Design with Vaishnavi J

Vaishnavi J respects youth, advises considering the youth experience in all digital products, and asserts that age-appropriate design is an underappreciated business asset.

Vaishnavi joined Kimberly to discuss: the spaces youth inhabit online; the four pillars of safety by design; age-appropriate design choices; kids' unique needs and vulnerabilities; what both digital libertarians and abstentionists get wrong; why great experiences and safety aren't mutually exclusive; how younger cohorts perceive harm; centering youth experiences; the business benefits of age-appropriate design; KOSPA and the duty of care; implications for content policy and product roadmaps; and the youth experience as digital table stakes and an engine of growth.

A transcript of this episode is here.

Vaishnavi J is the founder and principal of Vyanams Strategies (VYS), helping companies, civil society, and governments build healthier online communities for young people. VYS leverages extensive experience at leading technology companies to develop tactical product and policy solutions for child safety and privacy, ranging from product guidance and content policies to operations workflows, trust & safety strategies, and organizational design.

Additional Resources:
Monthly Youth Tech Policy Brief: https://quire.substack.com
Nov 20, 2024 • 48min

Critical Planning with Ron Schmelzer and Kathleen Walch

Kathleen Walch and Ron Schmelzer analyze AI patterns and the factors hindering adoption, why AI is never 'set it and forget it', and the criticality of critical thinking.

The dynamic duo behind Cognilytica (now part of PMI) joined Kimberly to discuss: the seven patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI's Red Queen moment; critical thinking as a uniquely human skill; the DKIUW pyramid and the limits of machine understanding; and why you can't sit AI out.

A transcript of this episode is here.

Kathleen Walch and Ron Schmelzer are the co-founders of Cognilytica, an AI research and analyst firm acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development.

Additional Resources:
CPMAI certification: https://courses.cognilytica.com/
AI Today podcast: https://www.cognilytica.com/aitoday/
Nov 6, 2024 • 42min

Relating to AI with Dr. Marisa Tschopp

Dr. Marisa Tschopp, a psychologist and human-AI interaction researcher, dives into the complex world of human-AI relationships. She discusses the emotional layers involved in AI companionship, especially in mental health contexts. Marisa emphasizes the need for radical empathy and ethical design in AI, while critiquing corporate marketing tactics that may mislead users. The conversation also reflects on technology's impact on social skills and highlights the importance of preserving trust in human connections in an increasingly digital landscape.
Sep 25, 2024 • 46min

Technical Morality with John Danaher

John Danaher assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.

John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six mechanisms by which technology may reshape ethical principles and social norms, and illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.

Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating risks once a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.

John Danaher is a Senior Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include The Ethics of Personalized Digital Duplicates: A Minimal Viability Principle and How Technology Alters Morality and Why It Matters.

A transcript of this episode is here.
Sep 11, 2024 • 46min

Artificial Empathy with Ben Bland

Ben Bland expressively explores emotive AI's shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions.

Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult. He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs. explicit design, and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine's emotional capability. Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI's potential, he counsels proceeding with caution.

Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.

A transcript of this episode is here.
Aug 28, 2024 • 50min

RAGging on Graphs with Philip Rathle

Join Philip Rathle, the CTO of Neo4j and author of The GraphRAG Manifesto, as he takes you on a journey through the world of knowledge graphs and AI. He explains how GraphRAG enhances reasoning and explainability in large language models. Philip discusses the importance of graphs in understanding complex systems and their applications in fraud detection and social networks. He also navigates the limitations of LLMs' reasoning abilities and highlights the advantages of integrating graphs into AI for better decision-making and human agency.
Aug 14, 2024 • 59min

Working with AI with Matthew Scherer

Matthew Scherer, Senior Policy Counsel for Workers' Rights and Technology at the Center for Democracy and Technology, advocates for a worker-led approach to AI adoption. He highlights the risk of bias in AI hiring processes, questioning the reliability of automated decisions. Matthew discusses the balance between safety and surveillance in workplace technologies, stressing the importance of transparency to protect workers' rights. He critiques the notion of innovation as an unqualified good, urging for meaningful regulations that reflect cultural values toward labor.
