

Mystery AI Hype Theater 3000
Emily M. Bender and Alex Hanna
Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separating fact from fiction and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction, political economy, and art made by machines.
Episodes

Nov 30, 2023 • 1h 4min
Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld), November 20 2023
Sarah West and Andreas Liesenfeld join hosts Alex and Emily to discuss the true meaning of 'open source' in software companies. They explore the need for transparency in AI systems, challenges in finding and evaluating open source alternatives, limitations of AI capability indexes, the importance of regulating technology, and the debate on fair use of copyrighted material.

Nov 21, 2023 • 1h 5min
Episode 20: Let's Do the Time Warp! (to the "Founding" of "Artificial Intelligence"), November 6 2023
The hosts time travel back to the founding of artificial intelligence at Dartmouth College in 1956. They explore the grant proposal and debunk AI hype. They discuss machine learning, imagination, and the funding of self-driving cars. They also talk about understanding complex systems and biases in machine translation. Additionally, they touch on hate speech and the closure of an AI smoothie shop. Finally, they mention a failed AI-driven restaurant and a strange AI-developed Coke.

Nov 8, 2023 • 1h 1min
Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6 2023
AI researchers Emma Strubell and Sasha Luccioni discuss the environmental impact of large language models, addressing carbon emissions, water and energy consumption. They emphasize the need for education, transparency, and awareness within the AI community. The podcast also covers AI's negative effects on dating apps, ethical concerns in relationship advice, debunking misconceptions about AI capabilities, and the potential negative impact of large language models in generating hateful content.

Oct 31, 2023 • 1h
Episode 18: Rumors of Artificial General Intelligence Have Been Greatly Exaggerated, October 23 2023
The hosts debunk the claim that artificial general intelligence is already here. They discuss the flaws of advanced AI language models and the claims of general intelligence attached to them. The concept of zero-shot learning and its relation to general intelligence is explored. The controversy surrounding AI sentience and the dangers of AI systems are discussed. The potential of AI therapy in mental health and the impact of synthetic media on plagiarism are also explored.

Oct 4, 2023 • 1h 2min
Episode 17: Back to School with AI Hype in Education (feat. Haley Lepp), September 22 2023
Stanford PhD student Haley Lepp joins Emily and Alex to discuss the hype around LLMs in education. They talk about reducing teacher workloads, increasing accessibility, and 'democratizing learning and knowing'. They also explore the devaluation of educator expertise and fatalism about LLMs in the classroom. Other topics include the University of Michigan's AI tools, the blend of technical and socio-emotional skills, ChatGPT in education, Microsoft's AI-generated article mishap, the ethics of AI-generated content, and tech power in San Francisco.

Sep 28, 2023 • 1h 2min
Episode 16: Med-PaLM or Facepalm? A Second Opinion On LLMs In Healthcare (feat. Roxana Daneshjou), August 28 2023
Guest Roxana Daneshjou, incoming assistant professor of dermatology and biomedical data science at Stanford, joins the hosts to discuss the use of large language models in healthcare. They evaluate the performance of these models, explore the challenges of evaluating them in clinical settings, and highlight the importance of multimodal processing in medicine. They also touch on the controversial use of AI in school libraries and transportation, and discuss the issue of fake books on Amazon.

Sep 20, 2023 • 1h 4min
Episode 15: The White House And Big Tech Dance The Self-Regulation Tango, August 11 2023
Emily and Alex tackle the White House hype about the 'voluntary commitments' of companies to limit the harms of their large language models: but only some large language models, and only some over-hyped kinds of harms. Plus a full portion of Fresh Hell... and a little bit of good news.

References:
- White House press release on voluntary commitments
- Emily’s blog post critiquing the “voluntary commitments”
- An “AI safety” infused take on regulation
- AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype
- “AI” Hurts Consumers and Workers — and Isn’t Intelligent

Fresh AI Hell:
- Future of Life Institute hijacks SEO for EU's AI Act
- LLMs for denying health insurance claims
- NHS using “AI” as receptionist
- Automated robots in reception
- Can AI language models replace human research participants?
- A recipe chatbot taught users how to make chlorine gas
- Using a chatbot to pretend to interview Harriet Tubman
- Worldcoin Orbs & iris scans
- Martin Shkreli’s AI for health start-up
- Authors impersonated with fraudulent books on Amazon/Goodreads

Good News:
- Zoom restores terms

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see. Our book, 'The AI Con,' is out now! Get your copy now. Subscribe to our newsletter via Buttondown.

Follow us!
Emily: Bluesky emilymbender.bsky.social | Mastodon dair-community.social/@EmilyMBender
Alex: Bluesky alexhanna.bsky.social | Mastodon dair-community.social/@alex | Twitter @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Sep 13, 2023 • 1h 1min
Episode 14: Henry Kissinger, Machines of War, and the Age of Military AI Hype (feat. Lucy Suchman), July 21 2023
Emily and Alex are joined by technology scholar Dr. Lucy Suchman to scrutinize a new book from Henry Kissinger and coauthors Eric Schmidt and Daniel Huttenlocher that declares a new 'Age of AI,' with abundant hype about the capacity of large language models for warmaking. Plus close scrutiny of Palantir's debut of an artificial intelligence platform for combat, and why the company is promising more than the mathy-maths can provide.

Dr. Lucy Suchman is a professor emerita of sociology at Lancaster University in the UK. She works at the intersections of anthropology and the field of feminist science and technology studies, focused on cultural imaginaries and material practices of technology design. Her current research extends her longstanding critical engagement with the fields of artificial intelligence and human-computer interaction to the domain of contemporary militarism. She is concerned with the question of whose bodies are incorporated into military systems, how, and with what consequences for social justice and the possibility of a less violent world.

This episode was recorded on July 21, 2023. Watch the video on PeerTube.

References:
- Wall Street Journal: OpEd derived from 'The Age of AI' (Kissinger, Schmidt & Huttenlocher)
- American Prospect: Meredith Whittaker & Lucy Suchman’s review of Kissinger et al's book
- VICE: Palantir Demos AI To Fight Wars But Says It Will Be Totally Ethical About It Don't Worry About It

Fresh AI Hell:
- American Psychological Association: how to cite ChatGPT: https://apastyle.apa.org/blog/how-to-cite-chatgpt
- Spam reviews & children’s books: https://twitter.com/millbot/status/1671008061173952512?s=20
- An analysis we like, comparing AI to the fossil fuel industry: https://hachyderm.io/@dalias/110528154854288688
- AI Heaven from Dolly Parton: https://consequence.net/2023/07/dolly-parton-ai-hologram-comments/

Sep 7, 2023 • 1h 1min
Episode 13: Beware The Robo-Therapist (feat. Hannah Zeavin), June 8 2023
UC Berkeley scholar Hannah Zeavin discusses the National Eating Disorders Association's decision to replace their helpline with a chatbot, the history and significance of suicide hotlines, the importance of training for crisis support volunteers, workplace toxicity and the threat of job replacement, ethical concerns of sharing data with a for-profit, and the hype around AI services and concerns about relying on chatbots for financial advice.

Aug 29, 2023 • 1h 1min
Episode 12: It's All Hell, May 5 2023
In this episode, Alex and Emily discuss the benefits and risks of GPT-4 as an AI chatbot for medicine, highlight concerns about an AI therapy service, question the ethical use of AI to simulate conversations with deceased figures, and explore the use of ChatGPT in courts. They also delve into the potential of mind-reading machines, discuss limitations of language models, and emphasize the importance of consent.