Pondering AI

Latest episodes

Nov 20, 2024 • 48min

Critical Planning with Ron Schmelzer and Kathleen Walch

Kathleen Walch and Ron Schmelzer analyze AI patterns and factors hindering adoption, why AI is never ‘set it and forget it’, and the criticality of critical thinking.

The dynamic duo behind Cognilytica (now PMI) join Kimberly to discuss: the seven (7) patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI’s Red Queen moment; critical thinking as a uniquely human skill; the DKIUW pyramid and the limits of machine understanding; why you can’t sit AI out. A transcript of this episode is here.

Kathleen Walch and Ron Schmelzer are the co-founders of Cognilytica, an AI research and analyst firm acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development.

Additional Resources:
CPMAI certification: https://courses.cognilytica.com/
AI Today podcast: https://www.cognilytica.com/aitoday/
Nov 6, 2024 • 42min

Relating to AI with Dr. Marisa Tschopp

Dr. Marisa Tschopp, a psychologist and human-AI interaction researcher, dives into the complex world of human-AI relationships. She discusses the emotional layers involved in AI companionship, especially in mental health contexts. Marisa emphasizes the need for radical empathy and ethical design in AI, while critiquing corporate marketing tactics that may mislead users. The conversation also reflects on technology's impact on social skills and highlights the importance of preserving trust in human connections in an increasingly digital landscape.
Sep 25, 2024 • 46min

Technical Morality with John Danaher

John Danaher assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.

John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six (6) mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.

Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed after a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.

John Danaher is a Senior Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include The Ethics of Personalized Digital Duplicates: A Minimally Viable Permissibility Principle and How Technology Alters Morality and Why It Matters. A transcript of this episode is here.
Sep 11, 2024 • 46min

Artificial Empathy with Ben Bland

Ben Bland expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions.

Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult. He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs explicit design, and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability. Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution.

Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems. A transcript of this episode is here.
Aug 28, 2024 • 50min

RAGging on Graphs with Philip Rathle

Join Philip Rathle, the CTO of Neo4j and author of The GraphRAG Manifesto, as he takes you on a journey through the world of knowledge graphs and AI. He explains how GraphRAG enhances reasoning and explainability in large language models. Philip discusses the importance of graphs in understanding complex systems and their applications in fraud detection and social networks. He also navigates the limitations of LLMs' reasoning abilities and highlights the advantages of integrating graphs into AI for better decision-making and human agency.
Aug 14, 2024 • 59min

Working with AI with Matthew Scherer

Matthew Scherer, Senior Policy Counsel for Workers' Rights and Technology at the Center for Democracy and Technology, advocates for a worker-led approach to AI adoption. He highlights the risk of bias in AI hiring processes, questioning the reliability of automated decisions. Matthew discusses the balance between safety and surveillance in workplace technologies, stressing the importance of transparency to protect workers' rights. He critiques the notion of innovation as an unqualified good, urging for meaningful regulations that reflect cultural values toward labor.
Jul 3, 2024 • 50min

Chief Data Concerns with Heidi Lanford

Heidi Lanford connects data to cocktails and campaigns while considering the nature of data disruption, getting from analytics to AI, and using data with confidence.

Heidi studied mathematics and statistics and never looked back. Reflecting on analytics then and now, she confirms the appetite for data has never been higher. Yet adoption, momentum and focus remain evergreen barriers. Heidi issues a cocktail party challenge while discussing the core competencies of effective data leaders.

Heidi believes data and CDOs are disruptive by nature. But this only matters if your business incentives are properly aligned. She revels in agile experimentation while counseling that speed is not enough. We discuss how good old-fashioned analytics put the right pressure on the foundational data needed for AI. Heidi then campaigns for endemic data literacy. Along the way she pans JIT holiday training and promotes confident decision making as the metric that matters. Never saying never, Heidi celebrates human experts and the spotlight AI is shining on data.

Heidi Lanford is a Global Chief Data & Analytics Officer who has served as Chief Data Officer (CDO) at the Fitch Group and VP of Enterprise Data & Analytics at Red Hat (IBM). In 2023, Heidi co-founded two AI startups, LiveFire AI and AIQScore. Heidi serves as a Board Member at the University of Virginia School of Data Science, is a Founding Board Member of the Data Leadership Collaborative, and an Advisor to Domino Data Labs and Linea. A transcript of this episode is here.
Jun 19, 2024 • 59min

Ethical Control and Trust with Marianna B. Ganapini

Marianna B. Ganapini contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust, and irrational beliefs.

Marianna studies how AI-driven nudging ups the ethical ante relative to autonomy and decision-making. This is a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with risks seen and unseen. We discuss the relationship between risk and harm and why a lack of knowledge imbues moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system’s temperature (risk-wise). Addressing the evolving responsible AI discourse, Marianna asserts that limiting trust to moral agents is overly restrictive. The real problem is conflating trust between humans with the trust afforded any number of entities from your pet to your Roomba. Marianna also cautions against hastily judging another’s beliefs, even when they overhype AI. Acknowledging progress, Marianna advocates for increased interdisciplinary efforts and ethical certifications.

Marianna B. Ganapini is a Professor of Philosophy and Founder of Logica.Now, a consultancy which seeks to educate and engage organizations in ethical AI inquiry. She is also a Faculty Director at the Montreal AI Ethics Institute and Visiting Scholar at the ND-IBM Tech Ethics Lab. A transcript of this episode is here.
Jun 5, 2024 • 34min

Policy and Practice with Miriam Vogel

Miriam Vogel, policy and practice innovator, discusses the importance of good AI hygiene, regulatory progress, and boosting literacy and diversity in AI. She emphasizes the need for standardized and context-specific guidance, transparency, and a multi-disciplinary mindset. Vogel highlights the business value of beneficial AI and the importance of AI liability for businesses. She sees regulation as a way to spur innovation and trust, outlining progress in federal AI policies and the need for collective AI literacy. Vogel urges asking critical questions to ensure we benefit from the opportunities AI presents.
May 22, 2024 • 39min

Learning to Unlearn with Melissa Sariffodeen

Melissa Sariffodeen discusses learning, unlearning, and the impact of AI on education. She emphasizes the human element in digital transformation, the need to view AI as a collaborative partner, and the importance of unlearning outdated skills. Melissa also highlights the role of critical thinking and diverse perspectives in technology adoption.