
Mystery AI Hype Theater 3000
Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separating fact from fiction and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction, from political economy to art made by machines.
Latest episodes

Jan 17, 2024 • 1h
Episode 24: AI Won't Solve Structural Inequality (feat. Kerry McInerney & Eleanor Drage), January 8 2024
New year, same Bullshit Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions.

Dr. Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. Dr. Kerry McInerney is a Research Fellow at the Leverhulme Centre for the Future of Intelligence and a Research Fellow at the AI Now Institute. Together they host The Good Robot, a podcast about gender, feminism, and whether technology can be "good" in either outcomes or processes.

Watch the video version of this episode on PeerTube.

References:
HireVue promo: How Innovative Hiring Technology Nurtures Diversity, Equity, and Inclusion
Algorithm Watch: The [German Federal Asylum Agency]'s controversial dialect recognition software: new languages and an EU pilot project
Want to see how AI might be processing video of your face during a job interview? Play with React App, a tool that Eleanor helped develop to critique AI-powered video interview tools and the 'personality insights' they offer.
Philosophy & Technology: Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference" (Drage & McInerney, 2022)
Communication and Critical/Cultural Studies: Copies without an original: the performativity of biometric bordering technologies (Drage & Frabetti, 2023)

Fresh AI Hell:
Internet of Shit 2.0: a "smart" bidet
Fake AI "students" enrolled at Michigan University
Synthetic images destroy online crochet groups
"AI" for teacher performance feedback
Palette cleanser: "Stochastic parrot" is the American Dialect Society's AI-related word of the year for 2023!

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' comes out in May! Pre-order now.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily: Bluesky: emilymbender.bsky.social / Mastodon: dair-community.social/@EmilyMBender
Alex: Bluesky: alexhanna.bsky.social / Mastodon: dair-community.social/@alex / Twitter: @alexhanna
Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Jan 10, 2024 • 1h 5min
Episode 23: AI Hell Freezes Over, December 22 2023
Topics include the Pentagon's move toward allowing AI weapons to autonomously kill humans, conflicts of interest and legal troubles for Tesla, the paradoxical nature of generative AI, concerns and opportunities in AI partnerships, AI-generated images, self-driving cars, mistreatment of workers, the testing of language models, and the use of sequences of life events to predict human lives.

Jan 3, 2024 • 58min
Episode 22: Congressional 'AI' Hearings Say More about Lawmakers (feat. Justin Hendrix), December 18 2023
Congress spent 2023 holding hearings to investigate the capabilities, risks, and potential uses of large language models and other 'artificial intelligence' systems. Alex and Emily, plus journalist Justin Hendrix, talk about the limitations of these hearings, the alarmist fixation on so-called 'p(doom)', and overdue laws on data privacy.

Justin Hendrix is editor of the Tech Policy Press.

References:
TPP tracker for the US Senate 'AI Insight Forum' hearings
Balancing Knowledge and Governance: Foundations for Effective Risk Management of AI (featuring Emily)
Hearing charter
Emily's opening remarks at virtual roundtable on AI
Senate hearing addressing national security implications of AI
Video: Rep. Nancy Mace opens hearing with ChatGPT-generated statement
Brennan Center report on Department of Homeland Security: Overdue Scrutiny for Watch Listing and Risk Prediction
TPP: Senate Homeland Security Committee Considers Philosophy of AI
Alex & Emily's appearance on the Tech Policy Press Podcast

Fresh AI Hell:
Asylum seekers vs AI-powered translation apps
UK officials use AI to decide on issues from benefits to marriage licenses
Prior guest Dr. Sarah Myers West testifying on AI concentration

Nov 30, 2023 • 1h 4min
Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld), November 20 2023
Sarah West and Andreas Liesenfeld join hosts Alex and Emily to discuss what 'open source' really means when companies apply the label to AI systems. They explore the need for transparency in AI systems, the challenges of finding and evaluating open source alternatives, the limitations of AI capability indexes, the importance of regulating technology, and the debate over fair use of copyrighted material.

Nov 21, 2023 • 1h 5min
Episode 20: Let's Do the Time Warp! (to the "Founding" of "Artificial Intelligence"), November 6 2023
The hosts time travel back to the founding of artificial intelligence at Dartmouth College in 1956, examining the original grant proposal and debunking the AI hype it set in motion. They also discuss machine learning and imagination, the funding of self-driving cars, understanding complex systems, and biases in machine translation, before touching on hate speech, the closure of an AI smoothie shop, a failed AI-driven restaurant, and a strange AI-developed Coke.

Nov 8, 2023 • 1h 1min
Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6 2023
AI researchers Emma Strubell and Sasha Luccioni discuss the environmental impact of large language models, addressing carbon emissions, water and energy consumption. They emphasize the need for education, transparency, and awareness within the AI community. The podcast also covers AI's negative effects on dating apps, ethical concerns in relationship advice, debunking misconceptions about AI capabilities, and the potential negative impact of large language models in generating hateful content.

Oct 31, 2023 • 1h
Episode 18: Rumors of Artificial General Intelligence Have Been Greatly Exaggerated, October 23 2023
The hosts debunk the claim that artificial general intelligence is already here, discussing the flaws of advanced language models and why they fall short of general intelligence. They explore the concept of zero-shot learning and its relation to general intelligence, the controversy surrounding AI sentience, and the dangers of these systems, along with the potential of AI therapy in mental health and the impact of synthetic media on plagiarism.

Oct 4, 2023 • 1h 2min
Episode 17: Back to School with AI Hype in Education (feat. Haley Lepp), September 22 2023
Stanford PhD student Haley Lepp joins Emily and Alex to discuss the hype around LLMs in education. They talk about promises of reducing teacher workloads, increasing accessibility, and 'democratizing learning and knowing,' as well as the devaluation of educator expertise and fatalism about LLMs in the classroom. Other topics include the University of Michigan's AI tools, the blend of technical and socio-emotional skills, ChatGPT in education, Microsoft's AI-generated article mishap, the ethics of AI-generated content, and tech power in San Francisco.

Sep 28, 2023 • 1h 2min
Episode 16: Med-PaLM or Facepalm? A Second Opinion On LLMs In Healthcare (feat. Roxana Daneshjou), August 28, 2023
Guest Roxana Daneshjou, incoming assistant professor of dermatology and biomedical data science at Stanford, joins the hosts to discuss the use of large language models in healthcare. They evaluate the performance of these models, explore the challenges of evaluating them in clinical settings, and highlight the importance of multimodal processing in medicine. They also touch on the controversial use of AI in school libraries and transportation, and discuss the issue of fake books on Amazon.

Sep 20, 2023 • 1h 4min
Episode 15: The White House And Big Tech Dance The Self-Regulation Tango, August 11 2023
Emily and Alex tackle the White House hype around the 'voluntary commitments' companies have made to limit the harms of their large language models -- but only some large language models, and only some, over-hyped kinds of harms. Plus a full portion of Fresh AI Hell... and a little bit of good news.

References:
White House press release on voluntary commitments
Emily's blog post critiquing the "voluntary commitments"
An "AI safety" infused take on regulation
AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype
"AI" Hurts Consumers and Workers -- and Isn't Intelligent

Fresh AI Hell:
Future of Life Institute hijacks SEO for EU's AI Act
LLMs for denying health insurance claims
NHS using "AI" as receptionist
Automated robots in reception
Can AI language models replace human research participants?
A recipe chatbot taught users how to make chlorine gas
Using a chatbot to pretend to interview Harriet Tubman
Worldcoin Orbs & iris scans
Martin Shkreli's AI for health start up
Authors impersonated with fraudulent books on Amazon/Goodreads

Good News:
Zoom rest