NLP Highlights

Are LLMs safe?

Feb 29, 2024
This episode explores the safety of large language models (LLMs), covering model optimization, customization challenges, quality filters, an analysis of student newspaper content, biases in data curation, adaptive pre-training, inefficiencies in model merging, and decentralized training frameworks for improved performance.
INSIGHT

Current LLM Issues

  • Current large language models (LLMs) are impressive but raise important concerns.
  • These include legal risks stemming from training data, reproduction of harmful behavior, and the high cost of scaling.
ANECDOTE

Quality Filter Bias

  • Suchin analyzed high school newspapers to study how quality filters affect which data is included.
  • The filters favored well-resourced schools, effectively equating "quality" with similarity to mainstream publications.
INSIGHT

No General-Purpose Model

  • Quality filters in LLMs inadvertently suppress valid, non-mainstream voices.
  • There's no truly "general-purpose" model due to inherent biases in data curation.