The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Can Language Models Be Too Big? 🦜 with Emily Bender and Margaret Mitchell - #467

Mar 24, 2021
Join linguist Emily M. Bender and AI researcher Margaret Mitchell as they unravel the complex implications of large language models. They discuss the environmental cost of training these models and the biases they perpetuate, highlighting the need for ethical AI practices. The duo emphasizes the importance of addressing language's impact on identity and the risks of misconceptions in AI interactions. With a focus on transparency and documentation, Bender and Mitchell advocate for a thoughtful approach to building responsible AI systems.
INSIGHT

LM Evolution and Risks

  • Language models (LMs) evolved from re-ranking tools in speech recognition to standalone task performers.
  • This shift, driven by neural nets and transformers, expanded LM applications but introduced new risks.
INSIGHT

Environmental Costs of LLMs

  • Training large language models (LLMs) incurs significant environmental costs, raising ethical concerns.
  • These costs disproportionately affect marginalized communities who don't benefit from LLMs.
ADVICE

Documenting LLM Costs

  • Document the environmental costs of LLM experiments, including energy consumption and carbon footprint (see the tracking sketch after this snip).
  • Transparency is crucial for responsible AI development and informed decision-making.
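
As one way to put this advice into practice, here is a minimal sketch of per-run energy and emissions logging. It assumes the open-source codecarbon package and a placeholder train_model() function, neither of which is mentioned in the episode; any comparable tracker would serve the same purpose.

```python
# Minimal sketch: log energy use and estimated CO2-equivalent for one training run.
# Assumes the open-source `codecarbon` package (pip install codecarbon), which is
# not mentioned in the episode.
from codecarbon import EmissionsTracker


def train_model():
    """Placeholder for the actual LLM training loop (hypothetical)."""
    pass


tracker = EmissionsTracker(project_name="llm-experiment")
tracker.start()
try:
    train_model()
finally:
    # stop() returns the estimated emissions for the tracked interval in kg CO2eq
    # and writes a row to emissions.csv for later reporting.
    kg_co2eq = tracker.stop()
    print(f"Estimated emissions for this run: {kg_co2eq:.3f} kg CO2eq")
```

Keeping these per-run records alongside model results is one concrete way to make the documentation and transparency the guests call for routine rather than an afterthought.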