Artificiality: Being with AI

Helen and Dave Edwards

Aug 23, 2025 • 26min

Joscha Bach at the Artificiality Summit 2024

Joscha Bach, a cognitive scientist and AI researcher, delivers a thought-provoking lecture at the Artificiality Summit. He dives into the intricate nature of intelligence and consciousness, proposing that our minds function like an orchestrated system. Bach explores the fascinating intersection of plant communication and AI, raising ethical questions about machine consciousness. He also discusses the dualities in expression and creativity, reflecting on the evolving relationship between AI-generated art and human perception.

Aug 21, 2025 • 55min

Christine Rosen: The Extinction of Experience

In this conversation, we explore the shifts in human experience with Christine Rosen, senior fellow at the American Enterprise Institute and author of "The Extinction of Experience: Being Human in a Disembodied World." As a member of the "hybrid generation" of Gen X, Christine (like us) brings the perspective of having lived through the transition from an analog to a digital world and witnessed firsthand what we've gained and lost in the process.

Christine frames our current moment through the lens of what naturalist Robert Michael Pyle called "the extinction of experience"—the idea that when something disappears from our environment, subsequent generations don't even know to mourn its absence. Drawing on over 20 years of studying technology's impact on human behavior, she argues that we're experiencing a mass migration from direct to mediated experience, often without recognizing the qualitative differences between them.

Key themes we explore:

- The Archaeology of Lost Skills: How the abandonment of handwriting reveals the broader pattern of discarding embodied cognition—the physical practices that shape how we think, remember, and process the world around us
- Mediation as Default: Why our increasing reliance on screens to understand experience is fundamentally different from direct engagement, and how this shift affects our ability to read emotions, tolerate friction, and navigate uncomfortable social situations
- The Machine Logic of Relationships: How technology companies treat our emotions "like the law used to treat wives as property"—as something to be controlled, optimized, and made efficient rather than experienced in their full complexity
- Embodied Resistance: Why skills like cursive handwriting, face-to-face conversation, and the ability to sit with uncomfortable emotions aren't nostalgic indulgences but essential human capacities that require active preservation
- The Keyboard Metaphor: How our technological interfaces—with their control buttons, delete keys, and escape commands—are reshaping our expectations for human relationships and emotional experiences

Christine challenges the Silicon Valley orthodoxy that frames every technological advancement as inevitable progress, instead advocating for what she calls "defending the human." This isn't a Luddite rejection of technology but a call for conscious choice about what we preserve, what we abandon, and what we allow machines to optimize out of existence.

The conversation reveals how seemingly small decisions—choosing to handwrite a letter, putting phones in the center of the table during dinner, or learning to read cursive—become acts of resistance against a broader cultural shift toward treating humans as inefficient machines in need of optimization. As Christine observes, we're creating a world where the people designing our technological future live with "human nannies and human tutors and human massage therapists" while prescribing AI substitutes for everyone else.

What emerges is both a warning and a manifesto: that preserving human experience requires actively choosing friction, inefficiency, and the irreducible messiness of being embodied creatures in a physical world. Christine's work serves as an essential field guide for navigating the tension between technological capability and human flourishing—showing us how to embrace useful innovations while defending the experiences that make us most fully human.

About Christine Rosen: Christine Rosen is a senior fellow at the American Enterprise Institute, where she focuses on the intersection of technology, culture, and society. Previously the managing editor of The New Republic and founding editor of The Hedgehog Review, her writing has appeared in The Atlantic, The New York Times, The Wall Street Journal, and numerous other publications. "The Extinction of Experience" represents over two decades of research into how digital technologies are reshaping human behavior and social relationships.

Aug 16, 2025 • 37min

Beth Rudden: AI, Trust, and Bast AI

Join Beth Rudden at the Artificiality Summit in Bend, Oregon—October 23-25, 2025—to imagine a meaningful life with synthetic intelligence for me, we and us. Learn more here: www.artificialityinstitute.org/summit

In this thought-provoking conversation, we explore the intersection of archaeological thinking and artificial intelligence with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI. Beth brings a unique interdisciplinary perspective—combining her training as an archaeologist with over 20 years of enterprise AI experience—to challenge fundamental assumptions about how we build and deploy artificial intelligence systems.

Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.

Key themes we explore:

- Archaeological AI: How treating AI as an excavation tool reveals embedded human thoughtlessness, and why scraping random internet data fundamentally misunderstands the nature of knowledge and context
- Ontological Scaffolding: Beth's approach to building AI systems using formal knowledge graphs and ontologies—giving AI the scaffolding to understand context rather than relying on statistical pattern matching divorced from meaning
- Data Sovereignty in Healthcare: A detailed exploration of Bast AI's platform for explainable healthcare AI, where patients control their data and can trace every decision back to its source—from emergency logistics to clinical communication
- The Economics of Expertise: Moving beyond the "humans as resources" paradigm to imagine economic models that compete to support and amplify human expertise rather than eliminate it
- Embodied Knowledge and Community: Why certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied, and how AI should scale this expertise rather than replace it
- Hopeful Rage: Beth's vision for reclaiming humanist spaces and community healing as essential infrastructure for navigating technological transformation

Beth challenges the dominant narrative that AI will simply replace human workers, instead proposing systems designed to "augment and amplify human expertise." Her work at Bast AI demonstrates how explainable AI can maintain full provenance and transparency while reducing cognitive load—allowing healthcare providers to spend more time truly listening to patients rather than wrestling with bureaucratic systems.

The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—offers essential insights for building trustworthy AI systems. As Beth notes, "You can fake reading. You cannot fake swimming"—certain forms of embodied knowledge remain irreplaceable and should be the foundation for human-AI collaboration.

About Beth Rudden: Beth Rudden is CEO and Chairwoman of Bast AI, building explainable artificial intelligence systems with full provenance and data sovereignty. A former IBM Distinguished Engineer and Chief Data Officer, she's been recognized as one of the 100 most brilliant leaders in AI Ethics. With her background spanning archaeology, cognitive science, and decades of enterprise AI development, Beth offers a grounded perspective on technology that serves human flourishing rather than replacing it.

This interview was recorded as part of the lead-up to the Artificiality Summit 2025 (October 23-25 in Bend, Oregon), where Beth will be speaking about the future of trustworthy AI.

Aug 3, 2025 • 35min

Steve Sloman: Information to Bits at the Artificiality Summit 2024

Steve Sloman, a Brown University professor and author renowned for his works on cognition, delves into our evolving understanding of knowledge in a machine-driven world. He questions how AI might shape belief systems, exploring the complex role of narratives in decision-making and the impact of community values on personal beliefs. The conversation challenges our perceptions of information and highlights AI's potential in persuasive communication, all while emphasizing the importance of ethical guidelines in technology.

Jul 27, 2025 • 42min

Jamer Hunt on the Power of Scale

Jamer Hunt, a professor at the Parsons School of Design and author of 'Not to Scale', shares insights on the transformative power of scale in design and AI. He discusses how different perspectives shape our understanding of intelligence and consciousness, highlighting the importance of cultural contexts. The conversation humorously juxtaposes the micro and macro views of scale, likening insights to a picnic and global perspectives. Hunt also examines how urban complexities and social narratives influence technology, encouraging a reflective dialogue on the role of models in creating meaningful change.

Jul 12, 2025 • 51min

Avriel Epps: Teaching Kids About AI Bias

In this conversation, we explore AI bias, transformative justice, and the future of technology with Dr. Avriel Epps, computational social scientist, Civic Science Postdoctoral Fellow at Cornell University's CATLab, and co-founder of AI for Abolition.

What makes this conversation unique is how it begins with Avriel's recently published children's book, A Kids Book About AI Bias (Penguin Random House), designed for ages 5-9. As an accomplished researcher with a PhD from Harvard and expertise in how algorithmic systems impact identity development, Avriel has taken on the remarkable challenge of translating complex technical concepts about AI bias into accessible language for the youngest learners.

Key themes we explore:

- The Translation Challenge: How to distill graduate-level research on algorithmic bias into concepts a six-year-old can understand—and why kids' unfiltered responses to AI bias reveal truths adults often struggle to articulate
- Critical Digital Literacy: Why building awareness of AI bias early can serve as a protective mechanism for young people who will be most vulnerable to these systems
- AI for Abolition: Avriel's nonprofit work building community power around AI, including developing open-source tools like "Repair" for transformative and restorative justice practitioners
- The Incentive Problem: Why the fundamental issue isn't the technology itself, but the economic structures driving AI development—and how communities might reclaim agency over systems built from their own data
- Generational Perspectives: How different generations approach digital activism, from Gen Z's innovative but potentially ephemeral protest methods to what Gen Alpha might bring to technological resistance

Throughout our conversation, Avriel demonstrates how critical analysis of technology can coexist with practical hope. Her work embodies the belief that while AI currently reinforces existing inequalities, it doesn't have to—if we can change who controls its development and deployment.

The conversation concludes with Avriel's ongoing research into how algorithmic systems shaped public discourse around major social and political events, and their vision for "small tech" solutions that serve communities rather than extracting from them.

For anyone interested in AI ethics, youth development, or the intersection of technology and social justice, this conversation offers both rigorous analysis and genuine optimism about what's possible when we center equity in technological development.

About Dr. Avriel Epps: Dr. Avriel Epps (she/they) is a computational social scientist and a Civic Science Postdoctoral Fellow at the Cornell University CATLab. She completed her Ph.D. at Harvard University in Education with a concentration in Human Development. She also holds an S.M. in Data Science from Harvard's School of Engineering and Applied Sciences and a B.A. in Communication Studies from UCLA. Previously a Ford Foundation predoctoral fellow, Avriel is currently a Fellow at The National Center on Race and Digital Justice, a Roddenberry Fellow, and a Public Voices Fellow on Technology in the Public Interest with the Op-Ed Project in partnership with the MacArthur Foundation.

Avriel is also the co-founder of AI4Abolition, a community organization dedicated to increasing AI literacy in marginalized communities and building community power with and around data-driven technologies. Avriel has been invited to speak at various venues including tech giants like Google and TikTok, and for The U.S. Courts, focusing on algorithmic bias and fairness. In the Fall of 2025, she will begin her tenure as Assistant Professor of Fair and Responsible Data Science at Rutgers University.

Links:

- Dr. Epps' official website: https://www.avrielepps.com
- AI for Abolition: https://www.ai4.org
- A Kids Book About AI Bias details: https://www.avrielepps.com/book

Jun 7, 2025 • 1h 4min

Benjamin Bratton: The Platypus and the Planetary

In this conversation, Benjamin Bratton, a Professor of Philosophy of Technology and Director at Antikythera, shares his insights on planetary-scale computation. He compares his interdisciplinary work to a platypus, seamlessly blending diverse fields. The discussion dives into how artificial intelligence evolves like biology, highlighting concepts like allopoiesis. Bratton also examines the implications of AI agents on human identity, urging us to rethink individual agency in a tech-driven world. Lastly, he explores the idea of Earth having its own computational consciousness.

Apr 5, 2025 • 1h 16min

David Wolpert: The Thermodynamics of Meaning

David Wolpert, a Professor at the Santa Fe Institute, explores the intricate mathematics of meaning and its implications in a world intertwined with AI. He discusses the shift from syntactic to semantic information, revealing how understanding meaning can reshape our interactions. The conversation delves into the challenges of early AI systems, causal information in economics, and the therapeutic potential of AI. Wolpert emphasizes the importance of knowing the difference between correlation and causation, advocating for AI that genuinely understands context.

Mar 12, 2025 • 1h 10min

Blaise Agüera y Arcas and Michael Levin: The Computational Foundations of Life and Intelligence

Blaise Agüera y Arcas, a Google researcher, and Michael Levin, a Tufts University expert, dive into the fascinating overlap between biology and computation. They reveal how simple rules can produce complex behaviors resembling intelligence, challenging our understanding of life. Levin discusses self-sorting algorithms that mimic adaptive problem-solving, while Agüera y Arcas explores the spontaneous emergence of self-replicating programs. Their groundbreaking insights suggest that information processing is central to both biological and computational systems.

Mar 7, 2025 • 1h

Maggie Jackson: Embracing Uncertainty

Maggie Jackson, author of the acclaimed book 'Uncertain', discusses the art of embracing uncertainty in our chaotic world. She highlights the neuroscience behind uncertainty, urging listeners to view it not as a fearsome foe but as a catalyst for creativity and adaptability. The conversation spans how AI affects our critical thinking, the dangers of automation bias, and why understanding different types of uncertainty can lead to better decision-making. Jackson’s insights provide a refreshing perspective on leveraging uncertainty as a pathway to growth rather than a roadblock.
