Artificiality: Being with AI

Helen and Dave Edwards
Feb 5, 2025 • 1h 18min

Michael Levin—The Future of Intelligence: Synthbiosis

Michael Levin, a distinguished professor at Tufts University, explores the fascinating realm of diverse intelligences in biology. He discusses the concept of 'synthbiosis,' envisioning a future where AI coexists with evolved cellular intelligences. Levin highlights breakthroughs in regenerative medicine, such as bioelectric treatments prompting leg growth in frogs. The conversation also touches on xenobots, their autonomous abilities, and ethical considerations, ultimately redefining our understanding of intelligence across living systems.
Jan 28, 2025 • 15min

Artificiality Keynote at the Imagining Summit 2024

Explore the historical significance of the umbrella as a metaphor for challenging societal norms and question the boundaries of life and intelligence. Delve into the shift from an attention-driven economy to one prioritizing personal intimacy, sparking conversations about data commoditization. Unpack the complex relationship between humans and devices like smartphones, examining their impact on knowledge and consciousness. Finally, reflect on the evolution of tools, highlighting the journey from bicycles to autonomous systems and envisioning a future of collaboration between humans and machines.
Jan 28, 2025 • 26min

DeepSeek: What Happened, What Matters, and Why It’s Interesting

First: apologies for the audio! We had a production error…

What’s New
- DeepSeek has created breakthroughs in both how AI systems are trained (making training much more affordable) and how they run in real-world use (making them faster and more efficient).

Details
- FP8 Training: Working With Less Precise Numbers
  - Traditional AI training requires extremely precise numbers.
  - DeepSeek found you can use less precise numbers (like rounding $10.857643 to $10.86).
  - This cuts memory and computation needs significantly with minimal impact on quality.
  - Like teaching someone math using rounded numbers instead of carrying every decimal place.
- Learning From Other AIs (Distillation)
  - Traditional approach: an AI learns everything from scratch by studying massive amounts of data.
  - DeepSeek's approach: use existing AI models as teachers, like having experienced programmers mentor new developers.
- Trial-and-Error Learning (for their R1 model)
  - Started with some basic "tutoring" from advanced models.
  - Then let the model practice solving problems on its own.
  - When it found good solutions, these were fed back into training.
  - This led to "aha moments" where R1 discovered better ways to solve problems.
  - Finally, they polished its ability to explain its thinking clearly to humans.
- Smart Team Management (Mixture of Experts)
  - Instead of one massive system that does everything, DeepSeek built a team of specialists.
  - Like running a software company with 256 specialists who focus on different areas, one generalist who helps with everything, and a smart project manager who assigns work efficiently.
  - For each task, only 8 specialists plus the generalist are needed.
  - More efficient than having everyone work on everything.
- Efficient Memory Management (Multi-head Latent Attention)
  - Traditional AI is like keeping complete transcripts of every conversation.
  - DeepSeek's approach is like taking smart meeting minutes: it captures the key information in a compressed format, similar to how JPEG compresses images.
- Looking Ahead (Multi-Token Prediction)
  - Traditional AI predicts one word at a time.
  - DeepSeek looks ahead and predicts two words at once, like a skilled reader who can read ahead while maintaining comprehension.

Why This Matters
- Cost Revolution: A training cost of about $5.6M (versus hundreds of millions) suggests a future where AI development isn't limited to tech giants.
- Working Around Constraints: Shows how limitations can drive innovation. DeepSeek achieved state-of-the-art results without access to the most powerful chips (at least, that's the best conclusion at the moment).

What’s Interesting
- Efficiency vs. Power: Challenges the assumption that advancing AI requires ever-increasing computing power; sometimes smarter engineering beats raw force.
- Self-Teaching AI: R1's ability to develop reasoning capabilities through pure reinforcement learning suggests AIs can discover problem-solving methods on their own.
- AI Teaching AI: The success of distillation shows how knowledge can be transferred between AI models, potentially leading to compounding improvements over time.
- IP for Free: If DeepSeek can be such a fast follower through distillation, what is the advantage for OpenAI, Google, or another company in releasing a novel model?
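The rounded-numbers analogy can be sketched in a few lines of Python. This is a toy simulation of coarse quantization, not DeepSeek's actual FP8 code; the grid step of 0.01 is an illustrative assumption:

```python
import random

def quantize(x, step=0.01):
    # Snap x to the nearest multiple of `step`, the same way a
    # low-precision number format snaps values to a coarse grid.
    return round(x / step) * step

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(1000)]
quantized = [quantize(w) for w in weights]

# Each value moves by at most half a grid step (0.005 here),
# which is why coarse storage barely changes individual weights.
max_err = max(abs(w - q) for w, q in zip(weights, quantized))
print(f"max per-weight rounding error: {max_err:.4f}")
```

The trade-off is exactly the one described above: a coarser grid saves memory per number, while the bounded rounding error keeps the overall impact small.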
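The "team of specialists" idea can likewise be sketched as top-k routing. This is a simplified illustration of mixture-of-experts gating using the numbers from the analogy (256 experts, 8 active), not DeepSeek's actual architecture:

```python
import random

NUM_EXPERTS = 256  # the "specialists"
TOP_K = 8          # specialists consulted per task
# (a shared "generalist" expert would always participate as well)

def route(scores, top_k=TOP_K):
    # Pick the indices of the top_k highest-scoring experts,
    # playing the role of the "project manager" in the analogy.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]

random.seed(0)
# Pretend gating scores for one input token.
scores = [random.random() for _ in range(NUM_EXPERTS)]
active = route(scores)

# Only 8 of 256 specialists (about 3%) do work for this token,
# which is where the efficiency gain comes from.
print(f"active experts: {sorted(active)}")
print(f"fraction active: {TOP_K / NUM_EXPERTS:.1%}")
```

In a real model the gating scores come from a learned network rather than random numbers, but the efficiency argument is the same: most parameters sit idle for any given token.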
Jan 25, 2025 • 45min

Hans Block & Moritz Riesewieck: Eternal You

We’re excited to welcome writers and directors Hans Block and Moritz Riesewieck to the podcast. Their debut film, ‘The Cleaners,’ about the shadow industry of digital censorship, premiered at the Sundance Film Festival in 2018 and has since won numerous international awards and been screened at more than 70 international film festivals. We invited Hans and Moritz to talk about their latest film, Eternal You, which examines the story of people who live on as digital replicants—and the people who keep them living on. We found the film to be quite powerful: at times inspiring, at others disturbing and distressing. Can a generative ghost help people through their grief or trap them in it? Is falling for a digital replica healthy or harmful? Are the companies creating these technologies benefiting their users or extracting from them? Eternal You is a powerful and important film. We highly recommend taking the time to watch it, and allowing time to digest and consider. Hans and Moritz have done a brilliant job exploring a challenging and delicate topic with kindness and care. Bravo.

------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.

Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music.
Jan 25, 2025 • 31min

How AI Affects Critical Thinking and Cognitive Offloading

Briefing: How AI Affects Critical Thinking and Cognitive Offloading

What This Paper Highlights
- The study explores the growing reliance on AI tools and its effects on critical thinking, specifically through cognitive offloading: delegating mental tasks to AI systems.
- Key finding: frequent AI tool use is strongly associated with reduced critical thinking abilities, especially among younger users, as they increasingly rely on AI for decision-making and problem-solving.
- Cognitive offloading acts as a mediating factor, reducing opportunities for deep, reflective thinking.

Why This Is Important
- Shaping Minds: Critical thinking is central to decision-making, problem-solving, and navigating misinformation. If AI reliance erodes these skills, the implications for education, work, and citizenship are profound.
- Generational Divide: Younger users show higher dependence on AI, suggesting that future generations may grow less capable of independent thought unless deliberate interventions are made.
- Education and Policy: There's an urgent need for strategies that balance AI integration with fostering cognitive skills, ensuring users remain active participants rather than passive consumers.

What’s Curious and Interesting
- Cognitive Shortcuts: Participants increasingly trust AI to make decisions, yet this trust fosters "cognitive laziness," with many users skipping steps like verifying or analyzing information.
- AI's Double-Edged Sword: While AI improves efficiency and provides tailored solutions, it also reduces engagement in activities that develop critical thinking, like analyzing arguments or synthesizing diverse viewpoints.
- Education as a Buffer: People with higher educational attainment are better at critically engaging with AI outputs, suggesting that education plays a key role in mitigating these risks.

What This Tells Us About the Future
- Critical Thinking at Risk: AI tools will only grow more pervasive. Without proactive efforts to maintain cognitive engagement, critical thinking could erode further, leaving society more vulnerable to misinformation and manipulation.
- Educational Reforms Needed: Active learning strategies and media literacy are essential to counterbalance AI's convenience, teaching people how to engage critically even when AI offers "easy answers."
- Shifting Cognitive Norms: As AI takes over more routine tasks, we may need to redefine which skills are critical for thriving in an AI-driven world, focusing more on judgment, creativity, and ethical reasoning.

Paper: "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking" by Michael Gerlich: https://www.mdpi.com/2075-4698/15/1/6
Jan 19, 2025 • 51min

J. Craig Wheeler: The Path to Singularity

We’re excited to welcome Craig Wheeler to the podcast. Craig is an astrophysicist and Professor at the University of Texas at Austin. Over his career, he has made significant contributions to our understanding of supernovae, black holes, and the nature of the universe itself. Craig’s new book, The Path to Singularity: How Technology Will Challenge the Future of Humanity, offers an exploration of how exponential technological change could upend life as we know it. Drawing on his background as an astrophysicist, Craig examines how humanity’s current trajectory is shaped by forces like AI, robotics, neuroscience, and space exploration—all of which are advancing at speeds that may outpace our ability to adapt. The book is an extension of a course Craig taught at UT Austin, where he challenged students to project humanity’s future over the next 100, 1,000, and even 100,000 years. His students explored ideas about AI, consciousness, and human evolution, ultimately shaping the themes that inspired the book. We found it fascinating, as he says in the interview, that the majority of the scenarios projected into the future were not positive for humanity. We wonder: Who wants to live in a dystopian future? And, for those of us who don’t: What can we do about it? This led to our interest in talking with Craig. We hope you enjoy our conversation with Craig Wheeler.
Jan 17, 2025 • 27min

AI Agents & the Future of Human Experience + Always On AI Wearables + Artificiality Updates for 2025

Science Briefing: What AI Agents Tell Us About the Future of Human Experience

What These Papers Highlight
- AI agents are improving but remain far from capable of taking over human tasks. Even the best models fail at simple things humans find intuitive, like handling social interactions or navigating pop-ups.
- One paper benchmarks agent performance on workplace-like tasks, showing just 24% success even on simple tasks. The other argues that agents alone aren't enough; we need a broader system to make them useful.

Why This Matters
- Human Compatibility: Agents don't just need to complete tasks; they need to work in ways that humans trust and find relatable.
- New Ecosystems: Instead of relying on better agents alone, we might need personalized digital "Sims" that act as go-betweens, understanding us and adapting to our preferences.
- Humor in Failure: From renaming a coworker to "solve" a problem to endlessly struggling with pop-ups, these failures highlight how far AI still is from grasping human context.

What’s Interesting
- Humans vs. Machines: AI performs better on coding than on "easier" tasks like scheduling or teamwork. Why? It's great at structure, bad at messiness.
- Sims as a Bridge: The idea of digital versions of ourselves (Sims) managing agents for us could change how we relate to technology, making it feel less like a tool and more like a collaborator.
- Impact on Trust: The future of agents will hinge on whether they can align with human values, privacy, and quirks, not just perform better technically.

What’s Next for Agents
- Can agents learn to navigate our complexity, like social norms or context-sensitive decisions?
- Will ecosystems with Sims and Assistants make AI feel more human, and less robotic?
- How will trust and personalization shape whether people actually adopt these systems?

Product Briefing: Always-On AI Wearables

What’s New
- New AI wearables launched at CES 2025 that continuously listen. From earbuds (HumanPods) to wristbands (Bee Pioneer) to stick-it-to-your-head pods (Omi), these cheap hardware devices are attempting to be your always-listening assistants.

Why This Matters
- From Wake Words to Always-On: These devices listen passively, with no activation required; the user must opt out by muting rather than opting in.
- Privacy? Pfft: These devices are small enough to hide and record without anyone knowing, and the Omi only turns on a light when it is not recording.
- Razor-Razorblade Model: With hardware priced below $100, these devices are cheap enough to invite easy experimentation; the value is in the software subscription.

What’s Interesting
- Mind Reading?: Omi claims to detect brain signals, allowing users to think their commands instead of speaking.
- It's About Apps: The app store is back as a business model. But are these startups ready for the challenge?
- Memory Prosthetics: These devices record, transcribe, and summarize everything, generating to-do lists and more.

The Human Experience
- AI as a Second Self?: These devices don't just assist; they remember, organize, and anticipate. How will that reshape how we interact with and recall our own experiences?
- Can We Still Forget?: If everything in our lives is logged and searchable, do we lose the ability to let go?
- Context Collapse: AI may summarize what it hears, but can it understand the complexity of human relationships, emotions, and social cues?
Dec 12, 2024 • 56min

Doyne Farmer: Making Sense of Chaos

We’re excited to welcome Doyne Farmer to the podcast. Doyne is a pioneering complexity scientist and a leading thinker on economic systems, technological change, and the future of society. Doyne is a Professor of Complex Systems at the University of Oxford, an external professor at the Santa Fe Institute, and Chief Scientist at Macrocosm. Doyne’s work spans an extraordinary range of topics, from agent-based modeling of financial markets to exploring how innovation shapes the long-term trajectory of human progress. At the heart of Doyne’s thinking is a focus on prediction—not in the narrow sense of forecasting next week’s market trends, but in understanding the deep, generative forces that shape the evolution of technology and society. His new book, Making Sense of Chaos: A Better Economics for a Better World, is a reflection on the limitations of traditional economics and a call to embrace the tools of complexity science. In it, Doyne argues that today’s economic models often fall short because they assume simplicity where there is none. What’s especially compelling about Doyne’s perspective is how he uses complexity science to challenge conventional economic assumptions. While traditional economics often treats markets as rational and efficient, Doyne reveals the messy, adaptive, and unpredictable nature of real-world economies. His ideas offer a powerful framework for rethinking how we approach systemic risk, innovation policy, and the role of AI-driven technologies in shaping our future. We believe Doyne’s ideas are essential for anyone trying to understand the uncertainties we face today. He doesn’t just highlight the complexity—he shows how to navigate it. By tracking the hidden currents that drive change, he helps us see the bigger picture of where we might be headed. We hope you enjoy our conversation with Doyne Farmer.
Sep 28, 2024 • 58min

James Boyle: The Line—AI and the Future of Personhood

We're excited to welcome Jamie Boyle to the podcast. Jamie is a law professor and author of the thought-provoking book The Line: AI and the Future of Personhood. In The Line, Jamie challenges our assumptions about personhood and humanity, arguing that these boundaries are more fluid than traditionally believed. He explores diverse contexts like animal rights, corporate personhood, and AI development to illustrate how debates around personhood permeate philosophy, law, art, and morality. Jamie uses fascinating examples from science fiction, legal history, and philosophy to illustrate the challenges we face in defining the rights and moral status of artificial entities. He argues that grappling with these questions may lead to a profound re-examination of human identity and consciousness. What's particularly compelling about Jamie’s approach is how he frames this as a journey of moral expansion, drawing parallels to how we've expanded our circle of empathy in the past. He also offers surprising insights into legal history, revealing how corporate personhood emerged more by accident than design—a cautionary tale as we consider AI rights. We believe this book is both ahead of its time and right on time. It sharpens our understanding of difficult concepts—namely, that the boundaries between organic and synthetic are blurring, creating profound existential challenges we need to prepare for now. To quote Jamie from The Line: "Grappling with the question of synthetic others may bring about a reexamination of the nature of human identity and consciousness. I want to stress the potential magnitude of that reexamination. This process may offer challenges to our self conception unparalleled since secular philosophers declared that we would have to learn to live with a god shaped hole at the center of the universe." Let's dive into our conversation with Jamie Boyle.
Sep 13, 2024 • 57min

Shannon Vallor: The AI Mirror

We're excited to welcome to the podcast Shannon Vallor, professor of ethics and technology at the University of Edinburgh, and the author of The AI Mirror. In her book, Shannon invites us to rethink AI—not as a futuristic force propelling us forward, but as a reflection of our past, capturing both our human triumphs and flaws in ways that shape our present reality. In The AI Mirror, Shannon uses the powerful metaphor of a mirror to illustrate the nature of AI. She argues that AI doesn’t represent a new intelligence; rather, it reflects human cognition in all its complexity, limitations, and distortions. Like a mirror, AI is backward-looking, constrained by the data we’ve already provided it. It amplifies our biases and misunderstandings, giving us back a shallow, albeit impressive, reflection of our intelligence. We think this is one of the best books on AI for a general audience that has been published this year. Shannon’s mirror metaphor does more than just critique AI—it reassures. By casting AI as a reflection rather than an independent force, she validates a crucial distinction: AI may be an impressive tool, but it’s still just that—a mirror of our past. Humanity, Shannon suggests, remains something separate, capable of innovation and growth beyond the confines of what these systems can reflect. This insight offers a refreshing confidence amidst the usual AI anxieties: the real power, and responsibility, remains with us. Let’s dive into our conversation with Shannon Vallor.
