
Artificiality: Minds Meeting Machines
Artificiality was founded in 2019 to help people make sense of artificial intelligence. We are artificial philosophers and meta-researchers. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We publish essays, podcasts, and research on AI, including a Pro membership that provides leaders with advanced research, actionable intelligence, and insights for applying AI. Learn more at www.artificiality.world.
Latest episodes

Jan 25, 2025 • 31min
How AI Affects Critical Thinking and Cognitive Offloading
Briefing: How AI Affects Critical Thinking and Cognitive Offloading
What This Paper Highlights
- The study explores the growing reliance on AI tools and its effects on critical thinking, specifically through cognitive offloading—delegating mental tasks to AI systems.
- Key finding: Frequent AI tool use is strongly associated with reduced critical thinking abilities, especially among younger users, as they increasingly rely on AI for decision-making and problem-solving.
- Cognitive offloading acts as a mediating factor, reducing opportunities for deep, reflective thinking.
Why This Is Important
- Shaping Minds: Critical thinking is central to decision-making, problem-solving, and navigating misinformation. If AI reliance erodes these skills, it has profound implications for education, work, and citizenship.
- Generational Divide: Younger users show higher dependence on AI, suggesting that future generations may grow less capable of independent thought unless deliberate interventions are made.
- Education and Policy: There’s an urgent need for strategies to balance AI integration with fostering cognitive skills, ensuring users remain active participants rather than passive consumers.
What’s Curious and Interesting
- Cognitive Shortcuts: Participants increasingly trust AI to make decisions, yet this trust fosters "cognitive laziness," with many users skipping steps like verifying or analyzing information.
- AI’s Double-Edged Sword: While AI improves efficiency and provides tailored solutions, it also reduces engagement in activities that develop critical thinking, like analyzing arguments or synthesizing diverse viewpoints.
- Education as a Buffer: People with higher educational attainment are better at critically engaging with AI outputs, suggesting that education plays a key role in mitigating these risks.
What This Tells Us About the Future
- Critical Thinking at Risk: AI tools will only grow more pervasive. Without proactive efforts to maintain cognitive engagement, critical thinking could erode further, leaving society more vulnerable to misinformation and manipulation.
- Educational Reforms Needed: Active learning strategies and media literacy are essential to counterbalance AI’s convenience, teaching people how to engage critically even when AI offers "easy answers."
- Shifting Cognitive Norms: As AI takes over more routine tasks, we may need to redefine what skills are critical for thriving in an AI-driven world, focusing more on judgment, creativity, and ethical reasoning.
AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking by Michael Gerlich: https://www.mdpi.com/2075-4698/15/1/6

Jan 19, 2025 • 51min
J. Craig Wheeler: The Path to Singularity
We’re excited to welcome Craig Wheeler to the podcast. Craig is an astrophysicist and Professor at the University of Texas at Austin. Over his career, he has made significant contributions to our understanding of supernovae, black holes, and the nature of the universe itself.
Craig’s new book, The Path to Singularity: How Technology Will Challenge the Future of Humanity, offers an exploration of how exponential technological change could upend life as we know it. Drawing on his background as an astrophysicist, Craig examines how humanity’s current trajectory is shaped by forces like AI, robotics, neuroscience, and space exploration—all of which are advancing at speeds that may outpace our ability to adapt.
The book is an extension of a course Craig taught at UT Austin, where he challenged students to project humanity’s future over the next 100, 1,000, and even 100,000 years. His students explored ideas about AI, consciousness, and human evolution, ultimately shaping the themes that inspired the book. As Craig notes in the interview, we found it fascinating that the majority of the futures his students projected were not positive for humanity.
We wonder: Who wants to live in a dystopian future? And, for those of us who don’t: What can we do about it? This led to our interest in talking with Craig.
We hope you enjoy our conversation with Craig Wheeler.
---------------
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Jan 17, 2025 • 27min
AI Agents & the Future of Human Experience + Always On AI Wearables + Artificiality Updates for 2025
Science Briefing: What AI Agents Tell Us About the Future of Human Experience
* What These Papers Highlight
- AI agents are improving but remain far from capable of taking over human tasks. Even the best models fail at simple things humans find intuitive, like handling social interactions or navigating pop-ups.
- One paper benchmarks agent performance on workplace-like tasks, finding just 24% success even on simple ones. The other argues that agents alone aren’t enough—we need a broader system to make them useful.
* Why This Matters
- Human Compatibility: Agents don’t just need to complete tasks—they need to work in ways that humans trust and find relatable.
- New Ecosystems: Instead of relying on better agents alone, we might need personalized digital “Sims” that act as go-betweens, understanding us and adapting to our preferences.
- Humor in Failure: From renaming a coworker to "solve" a problem to endlessly struggling with pop-ups, these failures highlight how far AI still is from grasping human context.
* What’s Interesting
- Humans vs. Machines: AI performs better on coding than on “easier” tasks like scheduling or teamwork. Why? It’s great at structure, bad at messiness.
- Sims as a Bridge: The idea of digital versions of ourselves (Sims) managing agents for us could change how we relate to technology, making it feel less like a tool and more like a collaborator.
- Impact on Trust: The future of agents will hinge on whether they can align with human values, privacy, and quirks—not just perform better technically.
* What’s Next for Agents
- Can agents learn to navigate our complexity, like social norms or context-sensitive decisions?
- Will ecosystems with Sims and Assistants make AI feel more human—and less robotic?
- How will trust and personalization shape whether people actually adopt these systems?
Product Briefing: Always On AI Wearables
* What’s New
- A wave of AI wearables that listen continuously launched at CES 2025. From earbuds (HumanPods) to wristbands (Bee Pioneer) to stick-it-to-your-head pods (Omi), these cheap hardware devices aim to be your always-listening assistants.
* Why This Matters
- From Wake Words to Always-On: These devices listen continuously, with no activation required, so users must opt out by muting rather than opt in.
- Privacy? Pfft: These devices are small enough to hide and record without anyone knowing, and the Omi only turns on a light when it is not recording.
- Razor-Razorblade Model: With hardware prices below $100, these devices are priced to allow for easy experimentation—the value is in the software subscription.
* What’s Interesting
- Mind-reading?: Omi claims to detect brain signals, allowing users to think their commands instead of speaking them.
- It’s About Apps: The app store is back as a business model. But are these startups ready for the challenge?
- Memory Prosthetics: These devices record, transcribe, and summarize everything—generating to-do lists and more.
* The Human Experience
- AI as a Second Self?: These devices don’t just assist; they remember, organize, and anticipate—how will that reshape how we interact with and recall our own experiences?
- Can We Still Forget?: If everything in our lives is logged and searchable, do we lose the ability to let go?
- Context Collapse: AI may summarize what it hears, but can it understand the complexity of human relationships, emotions, and social cues?

Dec 12, 2024 • 56min
Doyne Farmer: Making Sense of Chaos
We’re excited to welcome Doyne Farmer to the podcast. Doyne is a pioneering complexity scientist and a leading thinker on economic systems, technological change, and the future of society. Doyne is a Professor of Complex Systems at the University of Oxford, an external professor at the Santa Fe Institute, and Chief Scientist at Macrocosm.
Doyne’s work spans an extraordinary range of topics, from agent-based modeling of financial markets to exploring how innovation shapes the long-term trajectory of human progress. At the heart of Doyne’s thinking is a focus on prediction—not in the narrow sense of forecasting next week’s market trends, but in understanding the deep, generative forces that shape the evolution of technology and society.
His new book, Making Sense of Chaos: A Better Economics for a Better World, is a reflection on the limitations of traditional economics and a call to embrace the tools of complexity science. In it, Doyne argues that today’s economic models often fall short because they assume simplicity where there is none. What’s especially compelling about Doyne’s perspective is how he uses complexity science to challenge conventional economic assumptions. While traditional economics often treats markets as rational and efficient, Doyne reveals the messy, adaptive, and unpredictable nature of real-world economies. His ideas offer a powerful framework for rethinking how we approach systemic risk, innovation policy, and the role of AI-driven technologies in shaping our future.
We believe Doyne’s ideas are essential for anyone trying to understand the uncertainties we face today. He doesn’t just highlight the complexity—he shows how to navigate it. By tracking the hidden currents that drive change, he helps us see the bigger picture of where we might be headed.
We hope you enjoy our conversation with Doyne Farmer.
------------------------------
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Sep 28, 2024 • 58min
James Boyle: The Line—AI and the Future of Personhood
We're excited to welcome Jamie Boyle to the podcast. Jamie is a law professor and author of the thought-provoking book The Line: AI and the Future of Personhood.
In The Line, Jamie challenges our assumptions about personhood and humanity, arguing that these boundaries are more fluid than traditionally believed. He explores diverse contexts like animal rights, corporate personhood, and AI development to illustrate how debates around personhood permeate philosophy, law, art, and morality.
Jamie uses fascinating examples from science fiction, legal history, and philosophy to illustrate the challenges we face in defining the rights and moral status of artificial entities. He argues that grappling with these questions may lead to a profound re-examination of human identity and consciousness.
What's particularly compelling about Jamie’s approach is how he frames this as a journey of moral expansion, drawing parallels to how we've expanded our circle of empathy in the past. He also offers surprising insights into legal history, revealing how corporate personhood emerged more by accident than design—a cautionary tale as we consider AI rights.
We believe this book is both ahead of its time and right on time. It sharpens our understanding of difficult concepts—namely, that the boundaries between organic and synthetic are blurring, creating profound existential challenges we need to prepare for now.
To quote Jamie from The Line: "Grappling with the question of synthetic others may bring about a reexamination of the nature of human identity and consciousness. I want to stress the potential magnitude of that reexamination. This process may offer challenges to our self-conception unparalleled since secular philosophers declared that we would have to learn to live with a god-shaped hole at the center of the universe."
Let's dive into our conversation with Jamie Boyle.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Sep 13, 2024 • 57min
Shannon Vallor: The AI Mirror
We're excited to welcome to the podcast Shannon Vallor, professor of ethics and technology at the University of Edinburgh, and the author of The AI Mirror.
In her book, Shannon invites us to rethink AI—not as a futuristic force propelling us forward, but as a reflection of our past, capturing both our human triumphs and flaws in ways that shape our present reality.
In The AI Mirror, Shannon uses the powerful metaphor of a mirror to illustrate the nature of AI. She argues that AI doesn’t represent a new intelligence; rather, it reflects human cognition in all its complexity, limitations, and distortions. Like a mirror, AI is backward-looking, constrained by the data we’ve already provided it. It amplifies our biases and misunderstandings, giving us back a shallow, albeit impressive, reflection of our intelligence.
We think this is one of the best books on AI for a general audience that has been published this year. Shannon’s mirror metaphor does more than just critique AI—it reassures. By casting AI as a reflection rather than an independent force, she validates a crucial distinction: AI may be an impressive tool, but it’s still just that—a mirror of our past. Humanity, Shannon suggests, remains something separate, capable of innovation and growth beyond the confines of what these systems can reflect. This insight offers a refreshing confidence amidst the usual AI anxieties: the real power, and responsibility, remains with us.
Let’s dive into our conversation with Shannon Vallor.
-----------------
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Aug 30, 2024 • 56min
Matt Beane: The Skill Code
We're excited to welcome to the podcast Matt Beane, Assistant Professor at UC Santa Barbara and the author of the book "The Skill Code: How to Save Human Ability in an Age of Intelligent Machines."
Matt’s research investigates how AI is changing the traditional apprenticeship model, creating a tension between short-term performance gains and long-term skill development. His work has particularly focused on the relationship between junior and senior surgeons in the operating theater. As he told us, "In robotic surgery, I was seeing that the way technology was being handled in the operating room was assassinating this relationship." He observed that junior surgeons now often just set up the robot and watch the senior surgeon operate for hours, epitomizing a broader trend where AI and advanced technologies are reshaping how we transfer skills from experts to novices.
In "The Skill Code," Matt argues that three key elements are essential for developing expertise: challenge, complexity, and connection. He points out that real learning often involves discomfort, saying, "Everyone intuitively knows when you really learned something in your life. It was not exactly a pleasant experience, right?"
Matt's research shows that while AI can significantly boost productivity, it may be undermining critical aspects of skill development. He warns that the traditional model of "See one, do one, teach one" is becoming "See one, and if-you're-lucky do one, and not-on-your-life teach one." In our conversation, we explore these insights and discuss how we might preserve human ability in an age of intelligent machines.
Let’s dive into our conversation with Matt Beane on the future of human skill in an AI-driven world.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Aug 2, 2024 • 57min
Emily M. Bender: AI, Linguistics, Parrots, and more!
We're excited to welcome to the podcast Emily M. Bender, professor of computational linguistics at the University of Washington.
As our listeners know, we enjoy tapping expertise in fields adjacent to the intersection of humans and AI. We find Emily’s expertise in linguistics to be particularly important when understanding the capabilities and limitations of large language models—and that’s why we were eager to talk with her.
Emily is perhaps best known in the AI community for coining the term "stochastic parrots" to describe these models, highlighting their ability to mimic human language without true understanding. In her paper "On the Dangers of Stochastic Parrots," Emily and her co-authors raised crucial questions about the environmental, financial, and social costs of developing ever-larger language models. Emily has been a vocal critic of AI hype and her work has been pivotal in sparking critical discussions about the direction of AI research and development.
In this conversation, we explore the issues of current AI systems with a particular focus on Emily’s view as a computational linguist. We also discuss Emily's recent research on the challenges of using AI in search engines and information retrieval systems, and her description of large language models as synthetic text extruding machines.
Let's dive into our conversation with Emily Bender.
----------------------
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Jul 13, 2024 • 1h 2min
John Havens: Heartificial Intelligence
We're excited to welcome to the podcast John Havens, a multifaceted thinker at the intersection of technology, ethics, and sustainability. John's journey has taken him from professional acting to becoming a thought leader in AI ethics and human wellbeing.
In his 2016 book, "Heartificial Intelligence: Embracing Our Humanity to Maximize Machines," John presents a thought-provoking examination of humanity's relationship with AI. He introduces the concept of "codifying our values" - our crucial need as a species to define and understand our own ethics before we entrust machines to make decisions for us.
Through an interplay of fictional vignettes and real-world examples, the book illuminates the fundamental interplay between human values and machine intelligence, arguing that while AI can measure and improve wellbeing, it cannot automate it. John advocates for greater investment in understanding our own values and ethics to better navigate our relationship with increasingly sophisticated AI systems.
In this conversation, we dive into the key ideas from "Heartificial Intelligence" and their profound implications for the future of both human and artificial intelligence. We explore questions like:
What are the core components of human values that AI systems need to understand?
How can we design AI systems to augment rather than replace human decision-making?
Why has the field of AI ethics lagged behind technological development, and what role can positive psychology play in bridging this gap?
Should we be concerned about AI systems usurping our ability to define our own values, or are there inherent limits to what machines can understand about human ethics?
Let's dive into our conversation with John Havens.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

Jun 22, 2024 • 57min
Leslie Valiant: Educability
We’re excited to welcome to the podcast Leslie Valiant, a pioneering computer scientist and Turing Award winner renowned for his groundbreaking work in machine learning and computational learning theory. In his seminal 1984 paper, Leslie introduced the concept of Probably Approximately Correct, or PAC, learning, kick-starting a new era of research into what machines can learn.
Now, in his latest book, The Importance of Being Educable: A New Theory of Human Uniqueness, Leslie builds upon his previous work to present a thought-provoking examination of what truly sets human intelligence apart. He introduces the concept of "educability" - our unparalleled ability as a species to absorb, apply, and share knowledge.
Through an interplay of abstract learning algorithms and relatable examples, the book illuminates the fundamental differences between human and machine learning, arguing that while learning is computable, today's AI is still a far cry from human-level educability. Leslie advocates for greater investment in the science of learning and education to better understand and cultivate our species' unique intellectual gifts.
In this conversation, we dive deep into the key ideas from The Importance of Being Educable and their profound implications for the future of both human and artificial intelligence. We explore questions like:
What are the core components of educability that make human intelligence special?
How can we design AI systems to augment rather than replace human learning?
Why has the science of education lagged behind other fields, and what role can AI play in accelerating pedagogical research and practice?
Should we be concerned about a potential "intelligence explosion" as machines grow more sophisticated, or are there limits to the power of AI?
Let’s dive into our conversation with Leslie Valiant.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music