AITEC Podcast

Oct 3, 2025 • 1h 18min

#22 Iain Thomson: Why Heidegger Thought Technology Was More Dangerous Than We Realize

What if our deepest fears about AI aren't really about the machines at all—but about something we've forgotten about ourselves? In this episode, we speak with philosopher Iain D. Thomson (University of New Mexico), a leading scholar of Martin Heidegger, about his new book Heidegger on Technology's Danger and Promise in the Age of AI.

Together we explore Heidegger's famous claim that "the essence of technology is nothing technological," and why today's crises—from environmental collapse to algorithmic control—are really symptoms of a deeper existential and ontological predicament.

Also discussed:
– Why AI may not be dangerous because it's too smart, but because we stop thinking
– Heidegger's concept of "world-disclosive beings" and why ChatGPT doesn't qualify
– How the technological mindset reshapes not just our tools but our selves
– What a "free relation" to technology might look like
– The creeping danger of lowering our standards and mistaking supplements for the real thing

For more info, visit ethicscircle.org.
Oct 3, 2025 • 59min

#21 Jayashri Bangali: AI in Education

In this episode, we sit down with Jayashri A. Bangali, a researcher and educator whose work explores the evolving role of artificial intelligence in education—both in India and around the world. We discuss how AI is transforming learning through personalization, interactivity, and accessibility—but also raise hard questions about bias, surveillance, dependence, and deskilling.

We dig into Jayashri's recent research on AI integration in Indian schools and universities, including key findings from surveys of students and teachers across academic levels. We also explore global trends in AI adoption, potential regulatory safeguards, and how policymakers can ensure that AI enhances—not erodes—critical thinking and creativity.

This is a wide-ranging conversation on the future of learning, the risks of offloading too much to machines, and the kind of education worth fighting for in an AI-driven world.

For more info, visit ethicscircle.org.
Sep 28, 2025 • 1h 2min

#20 Bernardo Bolaños and Jorge Luis Morton: On Stoicism and Technology

In this episode, we speak with Bernardo Bolaños and Jorge Luis Morton, authors of On Singularity and the Stoics, about the rise of generative AI, the looming prospect of superintelligence, and how Stoic philosophy offers a framework for navigating it all. We explore Stoic principles like the dichotomy of control, cosmopolitanism, and living with wisdom in the face of deepfakes, algorithmic manipulation, and the risk of superintelligent AI.

For more info, visit ethicscircle.org.
Sep 5, 2025 • 57min

#19 Joshua Hatherley: When Your Doctor Uses AI—Should They Tell You?

In this episode, we speak with Dr. Joshua Hatherley, a bioethicist at the University of Copenhagen, about his recent article, "Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?"

Dr. Hatherley challenges what has become a widely accepted view in bioethics: that patients must always be informed when clinicians use medical AI systems in diagnosis or treatment planning. We explore his critiques of four central arguments for the "disclosure thesis"—including risk, rights, materiality, and autonomy—and discuss why, in some cases, mandatory disclosure might do more harm than good.

For more info, visit ethicscircle.org.
Aug 13, 2025 • 1h 8min

#18 Jeff Kane: Why Human Minds Are Not Computer Programs

Philosopher Jeff Kane joins us to discuss his new book The Emergence of Mind: Where Technology Ends and We Begin. In an age where AI writes poems, paints portraits, and mimics conversation, Kane argues that the human mind remains fundamentally different—not because of what it does, but because of what it is. We explore the moral risks of thinking of ourselves as machines, the embodied nature of thought, the deep structure of human values, and why lived experience—not information processing—grounds what it means to be human.
Jul 24, 2025 • 59min

#17 Caroline Ashcroft: The Catastrophic Imagination

In this episode, we speak with Dr. Caroline Ashcroft, Lecturer in Politics at the University of Oxford and author of Catastrophic Technology in Cold War Political Thought. Drawing on figures like Arendt, Jonas, Ellul, and Marcuse, Ashcroft explores a powerful yet underexamined idea: that modern technology is not just risky or disruptive—but fundamentally catastrophic. We discuss how mid-century political theorists viewed technology as reshaping the environment, the self, and the world in ways that eroded human dignity, democratic life, and any sense of limits.

For more info, visit ethicscircle.org.
Jun 9, 2025 • 1h 1min

#16 Teresa Baron: The Artificial Womb on Trial

Philosopher Teresa Baron joins us to discuss her book The Artificial Womb on Trial. As artificial womb technology edges closer to reality, Baron asks a different question: not just what ectogenesis means for society, but how we ethically get there. From human subject trials to questions of consent, regulation, and reproductive justice, this episode puts the development process itself under the bioethical microscope.

For more info, visit ethicscircle.org.
Jun 5, 2025 • 1h 2min

#15 Stephen Kosslyn: Learning to Flourish in the Age of AI

Stephen Kosslyn, Professor Emeritus at Harvard and CEO of Active Learning Sciences, explores living well in an AI-driven world. He discusses how generative AI can amplify cognition to help set life goals and enhance communication. The conversation delves into flourishing beyond survival, emphasizing emotional intelligence and human connections. Kosslyn also offers techniques for harnessing AI in goal planning and personal motivation, while highlighting the irreplaceable human element in decision-making and the importance of critical thinking in education.
May 20, 2025 • 58min

#14 Alice Helliwell: The Art of Misalignment

What if the best AI art doesn't care what we think? In this episode, we talk with philosopher Alice Helliwell about her provocative idea: that future AI might create aesthetic value not by mimicking human tastes, but by challenging them. Drawing from her 2024 article "Aesthetic Value and the AI Alignment Problem," we explore why perfect alignment isn't always ideal—and how a little artistic misalignment could open new creative frontiers.

For more info, visit ethicscircle.org.
Mar 29, 2025 • 1h 8min

#13 Marianna Capasso: Manipulation as Digital Invasion

In this episode, we speak with Dr. Marianna Capasso, a postdoctoral researcher at Utrecht University, about her 2022 book chapter "Manipulation as Digital Invasion: A Neo-Republican Approach," which can be found in The Philosophy of Online Manipulation, published by Routledge.

Drawing on a neo-republican conception of freedom, Dr. Capasso analyzes the ethical status of digital nudges—subtle, non-intrusive design elements in digital interfaces that gently guide users towards a specific action or decision—and explores when they cross the line into wrongful manipulation. We discuss key concepts like domination, user control, algorithmic bias, and what it means to be free in a digital world.

For more info, visit ethicscircle.org.
