
Emily M. Bender

Professor of linguistics at the University of Washington. Co-host of Mystery AI Hype Theater 3000.

Top 5 podcasts with Emily M. Bender

Ranked by the Snipd community
104 snips
Apr 13, 2023 • 1h 4min

ChatGPT Is Not Intelligent w/ Emily M. Bender

Paris Marx is joined by Emily M. Bender to discuss what it means to say that ChatGPT is a “stochastic parrot,” why Elon Musk is calling to pause AI development, and how the tech industry uses language to trick us into buying its narratives about technology. Emily M. Bender is a professor in the Department of Linguistics at the University of Washington and the Faculty Director of the Computational Linguistics Master’s Program. She’s also the director of the Computational Linguistics Laboratory. Follow Emily on Twitter at @emilymbender or on Mastodon at @emilymbender@dair-community.social.

Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and is part of the Harbinger Media Network.

Also mentioned in this episode:
Emily was one of the co-authors of the “On the Dangers of Stochastic Parrots” paper and co-wrote the “Octopus Paper” with Alexander Koller. She was also recently profiled in New York Magazine and has written about why policymakers shouldn’t fall for the AI hype.
The Future of Life Institute put out the “Pause Giant AI Experiments” letter, and the authors of the “Stochastic Parrots” paper responded through the DAIR Institute.
Zachary Loeb has written about Joseph Weizenbaum and the ELIZA chatbot.
Leslie Kay Jones has researched how Black women use and experience social media.
As generative AI is rolled out, many tech companies are firing their AI ethics teams.
Emily points to the Algorithmic Justice League and the AI Incident Database.
Deborah Raji wrote about data and systemic racism for MIT Tech Review.

Books mentioned: Weapons of Math Destruction by Cathy O'Neil, Algorithms of Oppression by Safiya Noble, The Age of Surveillance Capitalism by Shoshana Zuboff, Race After Technology by Ruha Benjamin, Ghost Work by Mary L. Gray & Siddharth Suri, Artificial Unintelligence by Meredith Broussard, Design Justice by Sasha Costanza-Chock, and Data Conscience: Algorithmic S1ege on our Hum4n1ty by Brandeis Marshall.

Support the show
17 snips
May 10, 2023 • 44min

The Great A.I. Hallucination

Tech futurists have been saying for decades that artificial intelligence will transform the way we live. In some ways, it already has: think autocorrect, Siri, facial recognition. But ChatGPT and other generative A.I. models are also prone to getting things wrong—and whether the programs will improve with time is not altogether clear. So what purpose, exactly, does this iteration of A.I. actually serve, how is it likely to be adopted, and who stands to benefit (or suffer) from it? On episode 67 of The Politics of Everything, hosts Laura Marsh and Alex Pareene talk with Washington Post reporter Will Oremus about a troubling tale of A.I. fabulism; with science fiction author Ted Chiang about the ramifications of an A.I.-polluted internet; and with linguist Emily M. Bender about what large language models can and cannot do—and whether we’re asking the right questions about this technology. Learn more about your ad choices. Visit megaphone.fm/adchoices
9 snips
May 4, 2023 • 54min

How worried—or excited—should we be about AI?

AI is amazing… or terrifying, depending on who you ask. This is a technology that elicits strong, almost existential reactions. So in the final episode of our special series about AI, we dig into the giant ambitions and enormous concerns people have about the very same tech.

Featuring: New York Times tech columnist Kevin Roose (@kevinroose), who tells me why his viral conversation with Bing’s AI chatbot changed the way he thought about the new tech.

Then: Google has everything to lose here, so I speak with James Manyika, Google’s Senior Vice President of Technology and Society, about the company’s ambitions for AI. [9:23]

Plus: I talk to Professor Emily M. Bender (@emilymbender), one of the people behind a now-famous paper on AI’s limits. Her “stochastic parrot” seems to have hit a nerve with some of AI’s biggest proponents. So maybe she’s onto something. [29:30]

Learn more about your ad choices. Visit podcastchoices.com/adchoices
8 snips
Jun 29, 2023 • 28min

AI and human extinction

In the headlines this week, eminent tech experts and public figures signed an open letter that read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

One of the signatories was Geoffrey Hinton, the so-called ‘godfather of AI’. He’s become so concerned about the risks associated with artificial intelligence that he recently decided to quit his job at Google, where he had worked for more than a decade. But are these concerns justified, or is it overblown scaremongering? And should we start prepping for a Terminator-style takeover? To get the answers, presenter Gareth Mitchell is joined by computational linguist Prof Emily M. Bender from the University of Washington along with Dr Stephen Cave, Director of the Leverhulme Centre for the Future of Intelligence (CFI).

Next up, we hear from Prof Carl Sayer at UCL, along with Dr Cicely Marshall and Dr Matthew Wilkinson from the University of Cambridge, to dig into the science behind wildflower meadows and whether they can boost biodiversity and even help ease climate change.

Finally, have you heard about Balto the sled dog? He was part of a life-saving mission in the 1920s, and now he has the chance to be a hero once more. His DNA has been studied by the Zoonomia project, which is using databases of genomes from hundreds of mammals to build a better picture of evolution. This data could then be used to help identify the animals at the greatest risk of extinction.

Presenter: Gareth Mitchell
Producer: Harrison Lewis
Content Producers: Ella Hubber and Alice Lipscombe-Southwell
Editor: Richard Collings
6 snips
Sep 9, 2021 • 1h 13min

Emily M. Bender — Language Models and Linguistics

In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.

Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender

---

Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.

---

Timestamps:
0:00 Sneak peek, intro
1:03 Stochastic Parrots
9:57 The societal impact of big language models
16:49 How language models can be harmful
26:00 The important difference between linguistic form and meaning
34:40 The octopus thought experiment
42:11 Language acquisition and the future of language models
49:47 Why benchmarks are limited
54:38 Ways of complementing benchmarks
1:01:20 The #BenderRule
1:03:50 Language diversity and linguistics
1:12:49 Outro