

Women Talkin' 'Bout AI
Kimberly Becker & Jessica Parker
We’re Jessica and Kimberly – two non-computer scientists who are just as curious (and skeptical) about generative AI as you are. Each episode, we chat with people from different backgrounds to hear how they’re making sense of AI. We keep it real, skip the jargon, and explore it with the curiosity of researchers and the openness of learners. Subscribe to our channel if you’re also interested in understanding AI behind the headlines.
Episodes

Dec 31, 2025 • 39min
Best of 2025: AI, Work, Resistance, and What We Learned
Best of 2025 brings together some of the most impactful conversations from this year on Women Talkin’ ’Bout AI. In this episode, we revisit our top 5 episodes of the year:

Beyond Work: Post-Labor Economics with David Shapiro – A conversation about automation, empathy, and what remains uniquely human as AI reshapes work.
Refusing the Drumbeat with Melanie Dusseau and Miriam Reynoldson – A discussion on resistance in higher education and their open letter refusing the push to adopt generative AI in the classroom.
Once You See It, You Can’t Unsee It: The Enshittification of Tech Platforms – Jessica and Kimberly unpack enshittification and why so many tech platforms feel like they get worse over time.
Maternal AI and the Myth of Women Saving Tech with Michelle Morkert – A critical examination of “maternal AI” and what gendered narratives reveal about power and responsibility in tech.
Competing with Free: Why We Closed Moxie – A candid reflection on what it was like to build, and ultimately shut down, an AI startup in this moment.

We’re heading into 2026 with some incredible guests and conversations we can’t wait to share. Thank you for listening, for thinking with us, and for staying curious alongside us.

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/

Dec 24, 2025 • 1h 21min
The Trojan Horse of AI
In this final guest episode of the year, we explore AI as a kind of Trojan horse: a technology that promises one thing while carrying hidden costs inside it. Those costs show up in data centers, energy and water systems, local economies, and the communities asked to host the infrastructure that makes AI possible.

We’re joined by Jon Ippolito and Joline Blais from the University of Maine for a conversation that starts with AI’s environmental footprint and expands into questions of extraction, power, education, and ethics.

In this episode, we discuss:
Why AI can function as a Trojan horse for data extraction and profit
What data centers actually do, and why they matter
The environmental costs hidden inside “innovation” narratives
The difference between individual AI use and industrial-scale impact
Why most data center activity isn’t actually AI
How communities are pitched data centers – and what’s often left out
The role of gender in ethical decision-making in tech
What AI is forcing educators to rethink about learning and work
Why asking “Who benefits?” still cuts through the hype
And how dissonance can be a form of clarity

Resources mentioned:
IMPACT Risk framework: https://ai-impact-risk.com
What Uses More: https://what-uses-more.com

Guests:
Jon Ippolito – artist, writer, and curator who teaches New Media and Digital Curation at the University of Maine.
Joline Blais – researches regenerative design, teaches digital storytelling and permaculture, and advises the Terrell House Permaculture Center at the University of Maine.

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/

Dec 17, 2025 • 46min
Easy for Humans, Hard for Machines: The Paradox Nobody Talks About
Why can AI crush law exams and chess grandmasters, yet still struggle with word games? In this episode, Kimberly and Jessica use Moravec's Paradox to unpack why machines and humans are "smart" in such different ways – and what that means for how we use AI at work and in daily life.

They start with a practical fact-check on agentic AI: what actually happens to your data when you let tools like ChatGPT or Gemini access your email, calendar, or billing systems, and which privacy toggles are worth changing. From there, they dive into why AI fails at the New York Times' Connections game, how sci-fi anticipated current concerns about AI psychology decades ago, and what brain-computer interfaces like Neuralink tell us about embodiment and intelligence.

Along the way: sycophantic bias, personality tests for language models, why edtech needs more friction, and a lighter "pit and peach" segment with unexpected life hacks.

Resources by Topic

Privacy & Security (ChatGPT)
OpenAI Memory & Controls (Official Guide)
OpenAI Data Controls & Privacy FAQ
OpenAI Blog: Using ChatGPT with Agents

Moravec's Paradox & Cognitive Science
Moravec's Paradox (Wikipedia)
"The Moravec Paradox" – Research Paper

Sycophancy & LLM Behavior
"Sycophancy in Large Language Models: Causes and Mitigations" (arXiv)
"Personality Testing of Large Language Models: Limited Temporal Stability, but Highlighted Prosociality"

Brain-Computer Interfaces & Embodied AI
Neuralink: "A Year of Telepathy" Update

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/

Dec 10, 2025 • 38min
AI Agents Shift, Not SAVE, Your Time (Don't Be Fooled by Marketing Hype)
What happens when you automate away a six-hour task? You don't get more free time ... you just do more work. In this impromptu conversation, Kimberly and Jessica break down what agentic AI actually does, why the "time savings" narrative misses the point entirely, and how to figure out which workflows are worth automating.

WHAT WE COVER:
What agentic AI actually is (and how it's different from ChatGPT)
Jessica's real invoice automation workflow: how she turned 6 hours of manual work into an AI agent task
The framework for identifying automatable workflows (repetitive, skill-free, multi-step tasks)
Why this beats creative AI work: no judgment calls, just execution
The Blackboard experiment: what happens when an agent does something you didn't ask it to do
Security & trust: passwords, login credentials, and where your data actually goes
Enterprise-level agent solutions (and why they're not quite ready yet)
The uncomfortable truth: freed-up time doesn't mean fewer hours – it means more output
How detailed instruction manuals prepared Jessica for prompt engineering
The human bottleneck: why your whole organization has to move at the same speed
Why marketing and research are next on the chopping block

TOOLS MENTIONED:
ChatGPT Pro with Agents — https://openai.com/chatgpt/
Perplexity Comet (agentic browser) — https://www.perplexity.ai/comet
Zoho Billing — https://www.zoho.com/billing/
Constant Contact — https://www.constantcontact.com
Zapier — https://zapier.com
Elicit (systematic reviews & literature analysis) — https://elicit.com
Corpus of Contemporary American English — https://www.english-corpora.org/coca/
Descript — https://www.descript.com
Canva — https://www.canva.com
Riverside.fm — https://riverside.fm

TIMESTAMPS:
0:00 — Opening & guest cancellation
1:18 — Podcast website & jingle development (and why music taste is complicated)
6:34 — What is agentic AI? Jessica's invoice automation example
10:33 — Why this use case actually works
14:15 — The Blackboard incident (when the agent went off-script)
16:21 — Security concerns: passwords, login credentials, and trust
18:35 — Why speed doesn't matter (as long as it's faster than the human bottleneck)
19:27 — Enterprise solutions on the horizon
20:57 — United Airlines cease-and-desist letters for replica training sites
22:27 — Why Kimberly can't use agents in her CCRC work
25:21 — How to identify your automatable workflows (the practical framework)
27:57 — Research automation with Elicit & corpus linguistics
30:45 — The core insight: AI shifts time, it doesn't save it
34:10 — Organizational bottlenecks & human capacity limits
35:08 — Pit & Peach (staying in your own canoe)

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/

Nov 26, 2025 • 58min
Once You See It, You Can't Unsee It: The Enshittification of Tech Platforms
In this conversation, Kimberly Becker and Jessica Parker explore the concept of 'enshittification' – as articulated by Cory Doctorow in his book Enshittification: Why Everything Suddenly Got Worse and What To Do About It – as it relates to generative AI and tech platforms. They discuss the stages of platform development, the shift from individual users to business customers, and the implications of algorithmic changes for user experience.

The conversation also explores the work of AI researchers Emily M. Bender and Timnit Gebru, whose paper "On the Dangers of Stochastic Parrots" raised critical questions about the limitations and risks of large language models. The hosts examine the role of data privacy, the impact of AI on labor, the need for regulation, and the dangers of market consolidation, using case studies like Amazon's acquisition and eventual shutdown of Diapers.com and Google's Project Maven controversy.

Key Takeaways
Enshittification refers to the degradation of tech platforms over time
The shift from individual users to business customers can lead to worse outcomes for end users
Data privacy is a critical concern as companies monetize user interactions
AI is predicted to significantly displace workers in coming years
Regulation is necessary to protect consumers from unchecked corporate power
Market consolidation can stifle competition and innovation
Recognizing these patterns is essential for navigating the tech landscape

Further Reading & Resources
Cory Doctorow's Pluralistic blog
Cory Doctorow on Enshittification
Enshittification: Why Everything Suddenly Got Worse and What To Do About It
The Internet Con: How to Seize the Means of Computation
"On the Dangers of Stochastic Parrots" by Bender & Gebru
Amazon/Diapers.com case study
Google Project Maven controversy
AI job displacement tracker
2024 Tech Layoffs Tracker

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/

Nov 19, 2025 • 1h 1min
Maternal AI and the Myth of Women Saving Tech
In this conversation, we sit down with Dr. Michelle Morkert, a global gender scholar, leadership expert, and founder of the Women’s Leadership Collective, to unpack the forces shaping women’s relationship with AI.

We begin with research indicating that women are 20–25% less likely to use AI than men, but quickly move beyond the statistics to explore the deeper social, historical, and structural reasons why.

Dr. Morkert brings her feminist and intersectional perspective to these questions, offering frameworks that help us see beyond the surface-level narratives of gender and AI use. This conversation is less about “women using AI” and more about power, history, social norms, and the systems we’re all navigating.

If you’ve ever wondered why AI feels different for women – or what a more ethical, community-driven approach to AI might look like – this episode is for you.

💬 Guest: Dr. Michelle Morkert – https://www.michellemorkert.com

📚 Books & Scholarly Works Mentioned
Global Evidence on Gender Gaps and Generative AI: https://www.hbs.edu/ris/Publication%20Files/25023_52957d6c-0378-4796-99fa-aab684b3b2f8.pdf
Pink Pilled: Women and the Far Right (Lois Shearing): https://www.barnesandnoble.com/w/pink-pilled-lois-shearing/1144991652
Scary Smart (Mo Gawdat – maternal AI concept): https://www.mogawdat.com/scary-smart

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/

Nov 5, 2025 • 53min
The Containment Problem: Why AI and Synthetic Biology Can't Be Contained
In this episode, Jessica teaches Kimberly about the "containment problem," a concept that explores whether we can actually control advanced technologies like AI and synthetic biology. Inspired by Mustafa Suleyman's book The Coming Wave, Jessica and Kimberly discuss why containment might be impossible, the democratization of powerful technologies, and the surprising world of DIY genetic engineering (yes, you can buy a frog modification kit for your garage).

What We Cover:
What the containment problem is and why it matters
The difference between AGI, ASI, and ACI
Why AI is fundamentally different from nuclear weapons when it comes to containment
Synthetic biology: from AlphaFold to $1,099 frog gene editing kits
The geopolitical arms race and why profit motives complicate containment
How technology democratization gives individuals unprecedented power
Whether complete AI containment is even possible (spoiler: probably not)
The modern Turing test and why perception might be reality

Books & Resources Mentioned:
Empire of AI by Karen Hao
DeepMind documentary

Key Themes:
Technology inevitability vs. choice
The challenges of regulating rapidly evolving technologies
Who benefits from AI advancement?
The tension between innovation and safety

Follow Women Talkin' 'Bout AI for more conversations exploring the implications, opportunities, and challenges of artificial intelligence.

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/

Oct 18, 2025 • 1h 13min
Refusing the Drumbeat
On saying no to “inevitable” AI – and what we say yes to instead.

Kimberly and Jessica recently sat down with Melanie Dusseau and Miriam Reynoldson for an episode of Women Talkin’ ’Bout AI. We were especially looking forward to this conversation because Melanie and Miriam are our first guests who openly identify as “AI Resisters.” The timing also felt right. Both of us have been reexamining our own stance on AI in education – how it intersects with learning, writing, and creativity – and the more distance we’ve had from running a tech company, the more critical and curious we’ve become.

This episode digs into big, thorny questions:
What Melanie calls “the drumbeat of inevitability” that pressures educators to adopt AI
Miriam’s post-digital view of what it means to live in a world completely entangled with technology
Our shared inquiry into who actually benefits when AI tools promise to make everything faster and more efficient

We also talk about data ethics, creative integrity, and the growing movement of educators saying no to automation – not out of fear, but out of care for human learning and connection. It’s a thoughtful, challenging, and hopeful conversation, and we hope you enjoy it as much as we did.

About our guests:
Melanie is an Associate Professor of English at the University of Findlay and a writer whose work spans poetry, plays, and fiction.
Miriam is a Melbourne-based digital learning designer, educator, and PhD candidate at RMIT University whose research explores the value of learning in times of digital ubiquity.

Melanie and Miriam are co-authors of the Open Letter from Educators Who Refuse the Call to Adopt GenAI in Education, which has collected over 1,000 signatures and was featured in an article by Forbes. Melanie is also the author of the essay Burn It Down, which advocates for AI resistance in the academy. We highly recommend reading both before diving into the episode.

Links and mentions:
Melanie's personal website and University of Findlay profile
Miriam’s personal website and blog “Care Doesn’t Scale”
Signs Preceding the End of the World by Yuri Herrera
Asimov’s Science Fiction
Ursula K. Le Guin
Ray Bradbury

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/

Oct 11, 2025 • 50min
Hallucinations, Hype, and Hope: Rebecca Fordon on AI in Legal Research
In this episode of Women Talkin’ ’Bout AI, we sit down with Rebecca Fordon – law librarian, professor, and board member of the Free Law Project – to talk about how generative AI is transforming legal research, education, and the meaning of “expertise.”

Rebecca helps us cut through the hype and ask harder questions: What problem are we really trying to solve with AI? Why are we using certain tools, and do we even know what data they’re built on?

We talk about:
🔹 How AI is reshaping the practice of legal research and what it means for the next generation of lawyers.
🔹 Why hallucinated case law and “certainty amplification” reveal deeper problems of trust and transparency.
🔹 The tension between speed and substance, and how “saving time” can actually shift where thinking happens.
🔹 The expert pipeline problem: what happens when AI replaces the messy, formative parts of learning?
🔹 How law librarians (and educators everywhere) are taking on the role of translators, bridging human judgment and machine outputs.
🔹 The open-access movement in law and how the Free Law Project is democratizing legal data.

At its heart, this episode is about reclaiming curiosity, caution, and critical thinking in a field that depends on precision, and remembering that faster isn’t always smarter.

Learn more:
🔗 Free Law Project: https://free.law
🔗 AI Law Librarians: https://ailawlibrarians.com
🔗 Aaron Tay's musings about librarianship: https://musingsaboutlibrarianship.blogspot.com/
🔗 Refusing GenAI in Writing Studies: A Quickstart Guide: https://refusinggenai.wordpress.com/

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/

Sep 2, 2025 • 51min
The Gender Gap in GenAI: Usage, Power, and Whose Voices Count
In this episode of Women Talkin’ ’Bout AI, we start by discussing the findings of a 2024 study, "Global Evidence on Gender Gaps and Generative AI" (🔗 below). One overall finding is that women are 20–25% less likely than men to use generative AI, which unspools into something bigger: a story about power, voice, and who gets to shape the future.

We also discuss our own experiences in tech, noticing how the gender gap in AI isn’t just about access to tools. It’s about what counts as legitimate work, whose voices are amplified, and how cultural scripts around “cheating,” confidence, and authority get absorbed into the most influential technologies of our time.

We talk about:
🔹 Why women’s hesitation around AI isn’t simply resistance, but often a reflection of ethics and identity.
🔹 How underrepresentation today could mean future AI systems are trained on a distorted mirror of humanity.
🔹 What it means to think of AI as both a child we’re raising and a cultural intermediary that’s already reshaping our sense of normal.
🔹 The WEIRD AI framework: WEIRD is a term from psychology that stands for Western, Educated, Industrialized, Rich, and Democratic. Most AI systems, generative models especially, are trained on corpora that overrepresent WEIRD voices and underrepresent everyone else.
🔹 Practical ways women can experiment, reclaim, and band together in communities of practice.
🔹 If AI is the new baseline for productivity and creativity, then the absence of women’s voices isn’t just a gap – it’s a risk of silence becoming the default.

Learn more:
🔗 Gender gap study: https://www.hbs.edu/faculty/Pages/item.aspx?num=66548
🔗 Mo Gawdat's book Scary Smart: https://www.mogawdat.com/scary-smart
🔗 Geoffrey Hinton Says AI Needs Maternal Instincts: https://www.forbes.com/sites/pialauritzen/2025/08/14/geoffrey-hinton-says-ai-needs-maternal-instincts-heres-what-it-takes/

💙 Follow us on our Substack: Women Writin' 'Bout AI: https://substack.com/@womenwritinboutai

Leave us a comment or a suggestion! Support the show
Contact us: https://www.womentalkinboutai.com/


