
Timnit Gebru

Postdoctoral researcher at Microsoft Research and PhD graduate of the Stanford AI Lab. Her research uses AI and computer vision to analyze societal trends.

Top 10 podcasts with Timnit Gebru

Ranked by the Snipd community
29 snips
Dec 20, 2017 • 31min

Ep. 44: Forget Polls, Here's What Street View, and AI, Can Tell You About How People Will Vote

Timnit Gebru, a postdoctoral researcher at Microsoft and a Stanford AI Lab PhD graduate, dives into the intriguing intersection of AI and voting behaviors. She discusses how Google Street View data can predict demographic trends by analyzing vehicle types in neighborhoods. With a focus on the challenges of fine-grained image recognition and the ethics of AI, she emphasizes the need for fairness and accountability in algorithms. Gebru also shares fascinating insights from her research, underscoring the biases that can influence societal outcomes.
28 snips
Jan 19, 2023 • 1h 4min

Don’t Fall for the AI Hype w/ Timnit Gebru

Paris Marx is joined by Timnit Gebru to discuss the misleading framings of artificial intelligence, her experience of being fired by Google in a very public way, and why we need to avoid getting distracted by all the hype around ChatGPT and AI image tools.
Timnit Gebru is the founder and executive director of the Distributed AI Research Institute and former co-lead of the Ethical AI research team at Google. You can follow her on Twitter at @timnitGebru.
Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and is part of the Harbinger Media Network.
Also mentioned in this episode:
Please participate in our listener survey this month to give us a better idea of what you think of the show: https://forms.gle/xayiT7DQJn56p62x7
Timnit wrote about the exploited labor behind AI tools and how effective altruism is pushing a harmful idea of AI ethics.
Karen Hao broke down the details of the paper that got Timnit fired from Google.
Emily Tucker wrote an article called “Artifice and Intelligence.”
In 2016, ProPublica published an article about technology being used to “predict” future criminals that was biased against black people.
In 2015, Google Photos classified black women as “gorillas.” In 2018, it still hadn’t really been fixed.
Artists have been protesting AI image generators that are trained on their work and threaten their livelihoods.
OpenAI used Kenyan workers paid less than $2 an hour to try to make ChatGPT less toxic.
Zachary Loeb described ELIZA in his article about Joseph Weizenbaum’s work and legacy.
Support the show
16 snips
Apr 18, 2022 • 52min

Daring to DAIR: Distributed AI Research with Timnit Gebru - #568

Today we’re joined by friend of the show Timnit Gebru, the founder and executive director of DAIR, the Distributed Artificial Intelligence Research Institute. In our conversation with Timnit, we discuss her journey to create DAIR, its goals, and some of the challenges she’s faced along the way. We start in the obvious place: Timnit being “resignated” from Google after writing and publishing a paper detailing the dangers of large language models, the fallout from that paper and her firing, and the eventual founding of DAIR. We discuss the importance of the “distributed” nature of the institute, how they’re figuring out what is in scope and out of scope for the institute’s research charter, and what building an institution means to her. We also explore the importance of independent alternatives to traditional research structures, whether we should be pessimistic about the impact of internal ethics and responsible AI teams in industry given the overwhelming power the companies that house them wield, examples she looks to of what not to do when building out the institute, and much, much more!
The complete show notes for this episode can be found at twimlai.com/go/568
11 snips
Jan 9, 2023 • 48min

Is ethical AI possible?

Sean Illing talks with Timnit Gebru, the founder of the Distributed AI Research Institute. She studies the ethics of artificial intelligence and is an outspoken critic of companies developing new AI systems. Sean and Timnit discuss the power dynamics in the world of AI, the discriminatory outcomes that these technologies can cause, and the need for accountability and transparency in the field.
Host: Sean Illing (@seanilling), host, The Gray Area
Guest: Timnit Gebru (@timnitGebru), founder, Distributed AI Research Institute
References:
“The Exploited Labor Behind Artificial Intelligence” by Adrienne Williams, Milagros Miceli, and Timnit Gebru (Noema; Oct. 13, 2022)
“Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’” by Timnit Gebru (Wired; Nov. 30, 2022)
“Datasheets for Datasets” by Timnit Gebru, et al. (CACM; Dec. 2021)
“In Emergencies, Should You Trust a Robot?” by John Toon (Georgia Tech; Feb. 29, 2016)
“We read the paper that forced Timnit Gebru out of Google. Here’s what it says” by Karen Hao (MIT Technology Review; Dec. 4, 2020)
“On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by Timnit Gebru, et al. (Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency; March 2021)
Enjoyed this episode? Rate The Gray Area ⭐⭐⭐⭐⭐ and leave a review on Apple Podcasts.
Subscribe for free. Be the first to hear the next episode of The Gray Area. Subscribe in your favorite podcast app.
Support The Gray Area by making a financial contribution to Vox! bit.ly/givepodcasts
This episode was made by:
Producer: Erikk Geannikis
Editor: Amy Drozdowska
Engineer: Patrick Boyd
Editorial Director, Vox Talk: A.M. Hall
Learn more about your ad choices. Visit podcastchoices.com/adchoices
10 snips
Mar 23, 2021 • 49min

Googlers vs. Google

On December 2nd, 2020, Dr. Timnit Gebru, co-lead of Google’s Ethical AI team, got an email saying Google had accepted her resignation. A resignation she didn’t think she had made. Her exit is just the latest sign of the crisis unfolding within Google: a loss of trust between many of its employees and leadership. This week, what led to Gebru’s exit and what it means for us, Google’s users. Because when enough people who work inside Google don't even trust each other, how can we?
Hosts: Shirin Ghaffary (@shiringhaffary) and Alex Kantrowitz (@kantrowitz)
Enjoyed this episode? Rate us ⭐⭐⭐⭐⭐ and leave a review on Apple Podcasts.
Want to get in touch? Tweet @recode.
Subscribe for free. Be the first to hear next week's episode by subscribing in your favorite podcast app.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
6 snips
Apr 19, 2024 • 1h 1min

Episode 30: Marc's Miserable Manifesto, April 1 2024

Dr. Timnit Gebru analyzes Marc Andreessen's AI manifesto, critiquing techno-optimism and discussing colonization, anarcho-capitalism in tech, and safety concerns. They delve into topics like DrugGPT for medicine, wearable AI devices, and the influence of Silicon Valley ideals in the AI realm.
5 snips
Jan 18, 2024 • 1h 1min

AI Hype Distracted Us From Real Problems w/ Timnit Gebru

Timnit Gebru, founder of the Distributed AI Research Institute, discusses AI hype, the influence of tech companies on regulation, and tech's connection to Israel's military campaign in Gaza. They cover topics such as discriminatory AI systems, labor exploitation, OpenAI drama, and the involvement of tech companies in military contracts.
5 snips
Sep 9, 2022 • 30min

40,000 Recipes for Murder

Two scientists realize that the very same AI technology they developed to discover medicines for rare diseases can also discover the most potent chemical weapons known to humankind, inadvertently opening the Pandora’s box of WMDs. What should they do now?
Special thanks to Xander Davies, Timnit Gebru, Jessica Fjeld, Bert Gambini, and Charlotte Hsu.
Episode Credits:
Reported by Latif Nasser
Produced by Matt Kielty
Original music and sound design contributed by Matt Kielty
Mixing help from Arianne Wack
Fact-checking by Emily Krieger
CITATIONS:
Articles:
Read Sean and Fabio’s paper here.
Get Yan Liu’s book Healing with Poisons: Potent Medicines in Medieval China here. Yan is now Assistant Professor of History at the University at Buffalo.
Our newsletter comes out every Wednesday. It includes short essays, recommendations, and details about other ways to interact with the show. Sign up (https://radiolab.org/newsletter)!
Radiolab is supported by listeners like you. Support Radiolab by becoming a member of The Lab (https://members.radiolab.org/) today.
Follow our show on Instagram, Twitter and Facebook @radiolab, and share your thoughts with us by emailing radiolab@wnyc.org.
Leadership support for Radiolab’s science programming is provided by the Gordon and Betty Moore Foundation, Science Sandbox, a Simons Foundation Initiative, and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.
4 snips
Jan 6, 2020 • 50min

Trends in Fairness and AI Ethics with Timnit Gebru - #336

Research scientist Timnit Gebru discusses trends in fairness and AI ethics, highlighting the diversification of NeurIPS with groups like Black in AI. They explore the evolution of ethics and fairness in AI, balancing democratization and complexity in AI tools, and the debate on whether fairness work should intersect with activism and diversity efforts.
Dec 15, 2020 • 24min

Was This Google Ethicist Fired for Doing Her Job?

Timnit Gebru, renowned AI ethics researcher and co-founder of Black in AI, discusses the controversy surrounding her departure from Google, focusing on topics like ethics, AI, and racism in the tech industry. She also delves into the significance of representation in AI research and expresses frustration with Google's lack of action and diversity initiatives.