
Mystery AI Hype Theater 3000

Latest episodes

Feb 29, 2024 • 1h 5min

Episode 27: Asimov's Laws vs. 'AI' Death-Making (w/ Annalee Newitz & Charlie Jane Anders), February 19 2024

Science fiction authors and all-around tech thinkers Annalee Newitz and Charlie Jane Anders join this week to talk about Isaac Asimov's oft-cited and equally often misunderstood laws of robotics, as debuted in his short story collection, 'I, Robot.' Meanwhile, both global and US military institutions are declaring interest in 'ethical' frameworks for autonomous weaponry.

Plus, in AI Hell, a ballsy scientific diagram heard 'round the world -- and a proposal for the end of books as we know them, from someone who clearly hates reading.

Charlie Jane Anders is a science fiction author. Her recent and forthcoming books include Promises Stronger Than Darkness in the 'Unstoppable' trilogy, the graphic novel New Mutants: Lethal Legion, and the forthcoming adult novel Prodigal Mother.

Annalee Newitz is a science journalist who also writes science fiction. Their most recent novel is The Terraformers, and in June you can look forward to their nonfiction book, Stories Are Weapons: Psychological Warfare and the American Mind.

They both co-host the podcast 'Our Opinions Are Correct,' which explores how science fiction is relevant to real life and our present society.

Also, some fun news: Emily and Alex are writing a book! Look forward (in spring 2025) to The AI Con, a narrative takedown of the AI bubble and its megaphone-wielding boosters that exposes how tech's greedy prophets aim to reap windfall profits from the promise of replacing workers with machines.

Watch the video of this episode on PeerTube.

References:
International declaration on "Responsible Military Use of Artificial Intelligence and Autonomy" provides "a normative framework addressing the use of these capabilities in the military domain."
DARPA's 'ASIMOV' program to "objectively and quantitatively measure the ethical difficulty of future autonomy use-cases...within the context of military operational values." (Short version; long version, PDF download)

Fresh AI Hell:
"I think we will stop publishing books, but instead publish 'thunks', which are nuggets of thought that can interact with the 'reader' in a dynamic and multimedia way."
AI-generated illustrations in a scientific paper -- rat balls edition.
Per Retraction Watch: the paper with illustrations of a rat with enormous "testtomcels" has been retracted.

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' comes out in May! Pre-order now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily -- Bluesky: emilymbender.bsky.social / Mastodon: dair-community.social/@EmilyMBender
Alex -- Bluesky: alexhanna.bsky.social / Mastodon: dair-community.social/@alex / Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Feb 15, 2024 • 60min

Episode 26: Universities Anxiously Buy in to the Hype (feat. Chris Gilliard), February 5 2024

Chris Gilliard, a Just Tech Fellow at the Social Science Research Council, discusses the lack of student protections in the AI-driven educational technologies that universities are rushing to adopt. Topics include the wave of universities buying into AI, the limitations of ChatGPT, surveillance concerns, the consequences of AI in higher education, privacy concerns with enterprise chatbots, the impact of AI on journalism, misconceptions in public statements about AI, and the retirement of a subway robot.
Feb 1, 2024 • 56min

Episode 25: An LLM Says LLMs Can Do Your Job, January 22 2024

The hosts debunk claims that GPTs can replace human workers and critique papers on GPTs as general purpose technologies. They express skepticism about the potential impact of AI on economic growth and discuss the correlation between AI mentions and corporate expenditure. The value of AI in performing administrative tasks is explored, with humorous examples. The use of AI in ebooks and translation services is discussed, including the negative impact of AI-generated voices on language learning platforms.
Jan 17, 2024 • 1h

Episode 24: AI Won't Solve Structural Inequality (feat. Kerry McInerney & Eleanor Drage), January 8 2024

New year, same Bullshit Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions.

Dr. Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. Dr. Kerry McInerney is a Research Fellow at the Leverhulme Centre for the Future of Intelligence and a Research Fellow at the AI Now Institute. Together they host The Good Robot, a podcast about gender, feminism, and whether technology can be "good" in either outcomes or processes.

Watch the video version of this episode on PeerTube.

References:
HireVue promo: How Innovative Hiring Technology Nurtures Diversity, Equity, and Inclusion
Algorithm Watch: The [German Federal Asylum Agency]'s controversial dialect recognition software: new languages and an EU pilot project
Want to see how AI might be processing video of your face during a job interview? Play with React App, a tool that Eleanor helped develop to critique AI-powered video interview tools and the 'personality insights' they offer.
Philosophy & Technology: Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference" (Drage & McInerney, 2022)
Communication and Critical/Cultural Studies: Copies without an original: the performativity of biometric bordering technologies (Drage & Frabetti, 2023)

Fresh AI Hell:
Internet of Shit 2.0: a "smart" bidet
Fake AI "students" enrolled at Michigan University
Synthetic images destroy online crochet groups
"AI" for teacher performance feedback
Palate cleanser: "Stochastic parrot" is the American Dialect Society's AI-related word of the year for 2023!

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' comes out in May! Pre-order now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily -- Bluesky: emilymbender.bsky.social / Mastodon: dair-community.social/@EmilyMBender
Alex -- Bluesky: alexhanna.bsky.social / Mastodon: dair-community.social/@alex / Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Jan 10, 2024 • 1h 5min

Episode 23: AI Hell Freezes Over, December 22 2023

Topics include the Pentagon's move toward allowing AI weapons to autonomously kill humans, conflicts of interest and legal troubles for Tesla, the paradoxical nature of generative AI, concerns and opportunities in AI partnerships, AI-generated images, self-driving cars, the mistreatment of workers, testing language models, and using sequences of life events to predict human lives.
Jan 3, 2024 • 58min

Episode 22: Congressional 'AI' Hearings Say More about Lawmakers (feat. Justin Hendrix), December 18 2023

Congress spent 2023 holding hearings to investigate the capabilities, risks, and potential uses of large language models and other 'artificial intelligence' systems. Alex and Emily, plus journalist Justin Hendrix, talk about the limitations of these hearings, the alarmist fixation on so-called 'p(doom)', and overdue laws on data privacy.

Justin Hendrix is editor of Tech Policy Press.

References:
TPP tracker for the US Senate 'AI Insight Forum' hearings
Balancing Knowledge and Governance: Foundations for Effective Risk Management of AI (featuring Emily)
Hearing charter
Emily's opening remarks at virtual roundtable on AI
Senate hearing addressing national security implications of AI
Video: Rep. Nancy Mace opens hearing with ChatGPT-generated statement
Brennan Center report on Department of Homeland Security: Overdue Scrutiny for Watch Listing and Risk Prediction
TPP: Senate Homeland Security Committee Considers Philosophy of AI
Alex & Emily's appearance on the Tech Policy Press Podcast

Fresh AI Hell:
Asylum seekers vs AI-powered translation apps
UK officials use AI to decide on issues from benefits to marriage licenses
Prior guest Dr. Sarah Myers West testifying on AI concentration

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' comes out in May! Pre-order now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily -- Bluesky: emilymbender.bsky.social / Mastodon: dair-community.social/@EmilyMBender
Alex -- Bluesky: alexhanna.bsky.social / Mastodon: dair-community.social/@alex / Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Nov 30, 2023 • 1h 4min

Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld), November 20 2023

Sarah West and Andreas Liesenfeld join hosts Alex and Emily to discuss what 'open source' really means when applied to AI systems. They explore the need for transparency in AI systems, the challenges of finding and evaluating open source alternatives, the limitations of AI capability indexes, the importance of regulating technology, and the debate over fair use of copyrighted material.
Nov 21, 2023 • 1h 5min

Episode 20: Let's Do the Time Warp! (to the "Founding" of "Artificial Intelligence"), November 6 2023

The hosts time travel back to the founding of artificial intelligence at Dartmouth College in 1956. They explore the grant proposal and debunk AI hype. They discuss machine learning, imagination, and the funding of self-driving cars. They also talk about understanding complex systems and biases in machine translation. Additionally, they touch on hate speech and the closure of an AI smoothie shop. Finally, they mention a failed AI-driven restaurant and a strange AI-developed Coke.
Nov 8, 2023 • 1h 1min

Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6 2023

AI researchers Emma Strubell and Sasha Luccioni discuss the environmental impact of large language models, including carbon emissions, water use, and energy consumption. They emphasize the need for education, transparency, and awareness within the AI community. The episode also covers AI's negative effects on dating apps, ethical concerns in relationship advice, debunking misconceptions about AI capabilities, and the potential for large language models to generate hateful content.
Oct 31, 2023 • 1h

Episode 18: Rumors of Artificial General Intelligence Have Been Greatly Exaggerated, October 23 2023

The hosts debunk the claim that artificial general intelligence is already here. They discuss the flaws of advanced AI language models and why those flaws undercut claims of general intelligence. They explore the concept of zero-shot learning and its relation to general intelligence, the controversy surrounding AI sentience, and the dangers of AI systems. The potential of AI therapy in mental health and the impact of synthetic media on plagiarism are also discussed.
