

Women talkin' 'bout AI
Kimberly Becker & Jessica Parker
We’re Jessica and Kimberly – two non-computer scientists who are just as curious (and skeptical) about generative AI as you are. Each episode, we chat with people from different backgrounds to hear how they’re making sense of AI. We keep it real, skip the jargon, and explore it with the curiosity of researchers and the openness of learners. Subscribe to our channel if you’re also interested in understanding AI behind the headlines.
Episodes

Jan 21, 2026 • 57min
Vibe Coding and Building AI for Kids: Inside Tobey's Tutor with Arlyn Gajilan
In this episode of Women Talkin’ ’Bout AI, Jessica sits down with Arlyn Gajilan, founder of Tobey’s Tutor, an AI-powered learning support platform she originally built for her son, who has ADHD and dyslexia.

This conversation is a deep dive into what it actually looks like to build an AI product as a non-technical, bootstrapped founder, from vibe coding and early prototypes to onboarding, safety systems, and pricing decisions.

Jessica fully geeks out with Arlyn as they unpack:
- Building AI to solve a deeply personal problem
- What “vibe coding” can (and can’t) do
- Designing responsibly for children and learning differences
- UX vs. UI decisions that matter
- Bootstrapping, pricing, and intentionally staying small
- Why “AI wrapper” criticism misses the point
- The reality of building while parenting and working full-time

Mentioned in the Episode:
- Tobey’s Tutor: https://tobeystutor.com/
- Scientific American (article mentioning Tobey’s Tutor): https://www.scientificamerican.com/article/how-one-mom-used-vibe-coding-to-build-an-ai-tutor-for-her-dyslexic-son/
- Mobbin (UX/UI inspiration library): https://mobbin.com/
- Empire of AI by Karen Hao: https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

Jan 14, 2026 • 1h 10min
When Everyone Uses AI, What’s Real Anymore?
As AI shows up everywhere, something shifts, and it becomes harder to tell what’s human and what’s generated.

In this episode, Jessica and Kimberly unpack how AI-driven convenience is reshaping education, relationships, identity, and even big systems (like markets and healthcare). They explore signaling, semiotics, and why “perfect” content can feel thin or unreal, and end with small ways to choose more human signals in a noisy world.

Bonus: If you want to see how this episode ended, tune in on YouTube for a few unfiltered bloopers at the end: https://www.youtube.com/@womentalkinboutai

Topics we cover in this episode:
- AI as an invisible intermediary
- Finding the signal in the noise
- Higher ed reality check
- Why AI feels “safer” than people
- Semiotics
- The “uncanny valley” of social media
- AI for therapy + parenting support
- Cultural swing back

Not-a-Sponsor Bloopers (YouTube only): Stick around on YouTube for our end-of-episode bloopers, featuring our favorite products that are definitely not sponsoring this show (yet). https://www.youtube.com/@womentalkinboutai

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

Jan 7, 2026 • 1h 3min
Rest, Resistance, and the Protestant Work Ethic (in the Age of AI)
We’re kicking off 2026 with our most personal episode yet.

This conversation wasn’t planned. We sat down intending to talk about what comes next for the show, and instead found ourselves in a deeper discussion about work, burnout, ambition, and what it means to live in a moment where AI is rapidly reshaping labor, identity, and trust.

In this episode:
- Why “work is sacred” feels harder to believe and harder to let go of
- Burnout, hustle culture, and the cognitive dissonance of automation
- Labor zero, post-labor economics, and the fear beneath productivity
- Status, money, degrees, and inherited stories about worth
- Rest as resistance and nervous system regulation
- AI, trust erosion, and the danger of slow confusion
- Dopamine, addiction, and withdrawal at a societal scale
- Why connection may be the real antidote

Sources:
- David Shapiro's Substack on Labor Zero: https://daveshap.substack.com/p/im-starting-a-movement
- He, She and It by Marge Piercy: https://en.wikipedia.org/wiki/He,_She_and_It
- Ethan Mollick's Substack on the temptation of The Button: https://www.oneusefulthing.org/p/setting-time-on-fire-and-the-temptation
- Rest Is Resistance by Tricia Hersey: https://blackgarnetbooks.com/item/oR7uwsLR1Xu2xerrvdfsqA
- The Last Invention (AI podcast): https://podcasts.apple.com/us/podcast/the-last-invention/id1839942885

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

Dec 31, 2025 • 39min
Best of 2025: AI, Work, Resistance, and What We Learned
Best of 2025 brings together some of the most impactful conversations from this year on Women Talkin’ Bout AI.

In this episode, we revisit our top 5 episodes of the year:
- Beyond Work: Post-Labor Economics with David Shapiro: a conversation about automation, empathy, and what remains uniquely human as AI reshapes work.
- Refusing the Drumbeat with Melanie Dusseau and Miriam Reynoldson: a discussion on resistance in higher education and their open letter refusing the push to adopt generative AI in the classroom.
- Once You See It, You Can’t Unsee It: The Enshittification of Tech Platforms: Jessica and Kimberly unpack enshittification and why so many tech platforms feel like they get worse over time.
- Maternal AI and the Myth of Women Saving Tech with Michelle Morkert: a critical examination of “maternal AI” and what gendered narratives reveal about power and responsibility in tech.
- Competing with Free: Why We Closed Moxie: a candid reflection on what it was like to build, and ultimately shut down, an AI startup in this moment.

We’re heading into 2026 with some incredible guests and conversations we can’t wait to share. Thank you for listening, for thinking with us, and for staying curious alongside us.

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

Dec 24, 2025 • 1h 21min
The Trojan Horse of AI
In this final guest episode of the year, we explore AI as a kind of Trojan horse: a technology that promises one thing while carrying hidden costs inside it. Those costs show up in data centers, energy and water systems, local economies, and the communities asked to host the infrastructure that makes AI possible.

We’re joined by Jon Ippolito and Joline Blais from the University of Maine for a conversation that starts with AI’s environmental footprint and expands into questions of extraction, power, education, and ethics.

In this episode, we discuss:
- Why AI can function as a Trojan horse for data extraction and profit
- What data centers actually do, and why they matter
- The environmental costs hidden inside “innovation” narratives
- The difference between individual AI use and industrial-scale impact
- Why most data center activity isn’t actually AI
- How communities are pitched data centers, and what’s often left out
- The role of gender in ethical decision-making in tech
- What AI is forcing educators to rethink about learning and work
- Why asking “Who benefits?” still cuts through the hype
- How dissonance can be a form of clarity

Resources mentioned:
- IMPACT Risk framework: https://ai-impact-risk.com
- What Uses More: https://what-uses-more.com

Guests:
- Jon Ippolito: artist, writer, and curator who teaches New Media and Digital Curation at the University of Maine.
- Joline Blais: researches regenerative design, teaches digital storytelling and permaculture, and advises the Terrell House Permaculture Center at the University of Maine.

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

Dec 17, 2025 • 46min
Easy for Humans, Hard for Machines: The Paradox Nobody Talks About
Why can AI crush law exams and chess grandmasters, yet still struggle with word games? In this episode, Kimberly and Jessica use Moravec's Paradox to unpack why machines and humans are "smart" in such different ways, and what that means for how we use AI at work and in daily life.

They start with a practical fact-check on agentic AI: what actually happens to your data when you let tools like ChatGPT or Gemini access your email, calendar, or billing systems, and which privacy toggles are worth changing. From there, they dive into why AI fails at the New York Times' Connections game, how sci-fi anticipated current concerns about AI psychology decades ago, and what brain-computer interfaces like Neuralink tell us about embodiment and intelligence.

Along the way: sycophantic bias, personality tests for language models, why edtech needs more friction, and a lighter "pit and peach" segment with unexpected life hacks.

Resources by Topic

Privacy & Security (ChatGPT)
- OpenAI Memory & Controls (Official Guide)
- OpenAI Data Controls & Privacy FAQ
- OpenAI Blog: Using ChatGPT with Agents

Moravec's Paradox & Cognitive Science
- Moravec's Paradox (Wikipedia)
- "The Moravec Paradox" (research paper)

Sycophancy & LLM Behavior
- "Sycophancy in Large Language Models: Causes and Mitigations" (arXiv)
- "Personality Testing of Large Language Models: Limited Temporal Stability, but Highlighted Prosociality"

Brain-Computer Interfaces & Embodied AI
- Neuralink: "A Year of Telepathy" Update

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

Dec 10, 2025 • 38min
AI Agents Shift, Not SAVE, Your Time (Don't Be Fooled by Marketing Hype)
What happens when you automate away a six-hour task? You don't get more free time ... you just do more work. In this impromptu conversation, Kimberly and Jessica break down what agentic AI actually does, why the "time savings" narrative misses the point entirely, and how to figure out which workflows are worth automating.

WHAT WE COVER:
- What agentic AI actually is (and how it's different from ChatGPT)
- Jessica's real invoice automation workflow: how she turned 6 hours of manual work into an AI agent task
- The framework for identifying automatable workflows (repetitive, skill-free, multi-step tasks)
- Why this beats creative AI work: no judgment calls, just execution
- The Blackboard experiment: what happens when an agent does something you didn't ask it to do
- Security & trust: passwords, login credentials, and where your data actually goes
- Enterprise-level agent solutions (and why they're not quite ready yet)
- The uncomfortable truth: freed-up time doesn't mean fewer hours, it means more output
- How detailed instruction manuals prepared Jessica for prompt engineering
- The human bottleneck: why your whole organization has to move at the same speed
- Why marketing and research are next on the chopping block

TOOLS MENTIONED:
- ChatGPT Pro with Agents: https://openai.com/chatgpt/
- Perplexity Comet (agentic browser): https://www.perplexity.ai/comet
- Zoho Billing: https://www.zoho.com/billing/
- Constant Contact: https://www.constantcontact.com
- Zapier: https://zapier.com
- Elicit (systematic reviews & literature analysis): https://elicit.com
- Corpus of Contemporary American English: https://www.english-corpora.org/coca/
- Descript: https://www.descript.com
- Canva: https://www.canva.com
- Riverside.fm: https://riverside.fm

TIMESTAMPS:
0:00 - Opening & guest cancellation
1:18 - Podcast website & jingle development (and why music taste is complicated)
6:34 - What is agentic AI? Jessica's invoice automation example
10:33 - Why this use case actually works
14:15 - The Blackboard incident (when the agent went off-script)
16:21 - Security concerns: passwords, login credentials, and trust
18:35 - Why speed doesn't matter (as long as it's faster than the human bottleneck)
19:27 - Enterprise solutions on the horizon
20:57 - United Airlines cease-and-desist letters for replica training sites
22:27 - Why Kimberly can't use agents in her CCRC work
25:21 - How to identify your automatable workflows (the practical framework)
27:57 - Research automation with Elicit & corpus linguistics
30:45 - The core insight: AI shifts time, it doesn't save it
34:10 - Organizational bottlenecks & human capacity limits
35:08 - Pit & Peach (staying in your own canoe)

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

Nov 26, 2025 • 58min
Once You See It, You Can't Unsee It: The Enshittification of Tech Platforms
In this conversation, Kimberly Becker and Jessica Parker explore the concept of "enshittification," as articulated by Cory Doctorow in his book Enshittification: Why Everything Suddenly Got Worse and What To Do About It, and how it applies to generative AI and tech platforms. They discuss the stages of platform development, the shift from serving individual users to serving business customers, and the implications of algorithmic changes for user experience.

The conversation also explores the work of AI researchers Emily M. Bender and Timnit Gebru, whose paper "On the Dangers of Stochastic Parrots" raised critical questions about the limitations and risks of large language models. The hosts examine the role of data privacy, the impact of AI on labor, the need for regulation, and the dangers of market consolidation, using case studies like Amazon's acquisition and eventual shutdown of Diapers.com and Google's Project Maven controversy.

Key Takeaways
- Enshittification refers to the degradation of tech platforms over time
- The shift from individual users to business customers can lead to worse outcomes for end users
- Data privacy is a critical concern as companies monetize user interactions
- AI is predicted to significantly displace workers in coming years
- Regulation is necessary to protect consumers from unchecked corporate power
- Market consolidation can stifle competition and innovation
- Recognizing these patterns is essential for navigating the tech landscape

Further Reading & Resources
- Cory Doctorow's Pluralistic blog
- Cory Doctorow on enshittification
- Enshittification: Why Everything Suddenly Got Worse and What To Do About It
- The Internet Con: How to Seize the Means of Computation
- "On the Dangers of Stochastic Parrots" by Bender & Gebru
- Amazon/Diapers.com case study
- Google Project Maven controversy
- AI job displacement tracker
- 2024 Tech Layoffs Tracker

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

Nov 19, 2025 • 1h 1min
Maternal AI and the Myth of Women Saving Tech
In this conversation, we sit down with Dr. Michelle Morkert, a global gender scholar, leadership expert, and founder of the Women’s Leadership Collective, to unpack the forces shaping women’s relationship with AI.

We begin with research indicating that women are 20–25% less likely to use AI than men, but quickly move beyond the statistics to explore the deeper social, historical, and structural reasons why.

Dr. Morkert brings her feminist and intersectional perspective to these questions, offering frameworks that help us see beyond the surface-level narratives of gender and AI use. This conversation is less about “women using AI” and more about power, history, social norms, and the systems we’re all navigating.

If you’ve ever wondered why AI feels different for women, or what a more ethical, community-driven approach to AI might look like, this episode is for you.

💬 Guest: Dr. Michelle Morkert – https://www.michellemorkert.com

📚 Books & Scholarly Works Mentioned
- Global Evidence on Gender Gaps and Generative AI: https://www.hbs.edu/ris/Publication%20Files/25023_52957d6c-0378-4796-99fa-aab684b3b2f8.pdf
- Pink Pilled: Women and the Far Right (Lois Shearing): https://www.barnesandnoble.com/w/pink-pilled-lois-shearing/1144991652
- Scary Smart (Mo Gawdat – maternal AI concept): https://www.mogawdat.com/scary-smart

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

Nov 5, 2025 • 53min
The Containment Problem: Why AI and Synthetic Biology Can't Be Contained
In this episode, Jessica teaches Kimberly about the "containment problem," a concept that explores whether we can actually control advanced technologies like AI and synthetic biology. Inspired by Mustafa Suleyman's book The Coming Wave, Jessica and Kimberly discuss why containment might be impossible, the democratization of powerful technologies, and the surprising world of DIY genetic engineering (yes, you can buy a frog modification kit for your garage).

What We Cover:
- What the containment problem is and why it matters
- The difference between AGI, ASI, and ACI
- Why AI is fundamentally different from nuclear weapons when it comes to containment
- Synthetic biology: from AlphaFold to $1,099 frog gene editing kits
- The geopolitical arms race and why profit motives complicate containment
- How technology democratization gives individuals unprecedented power
- Whether complete AI containment is even possible (spoiler: probably not)
- The modern Turing test and why perception might be reality

Books & Resources Mentioned:
- The Coming Wave by Mustafa Suleyman
- Empire of AI by Karen Hao
- DeepMind documentary

Key Themes:
- Technology inevitability vs. choice
- The challenges of regulating rapidly evolving technologies
- Who benefits from AI advancement?
- The tension between innovation and safety

Follow Women Talkin' 'Bout AI for more conversations exploring the implications, opportunities, and challenges of artificial intelligence.

Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/


