

Are LLMs safe?
Feb 29, 2024
Exploring the safety of large language models (LLMs): model optimization, customization challenges, quality filters, an analysis of student newspaper content, biases in data curation, adaptive pre-training, inefficiencies in model merging, and decentralized training frameworks for improved performance.
Current LLM Issues
- Current large language models (LLMs) are impressive but raise important issues.
- These include legal risks from training data, reproduction of harmful behaviors, and the high cost of scaling.
Quality Filter Bias
- Suchin analyzed high school newspapers to study how quality filters affect which data is included.
- Quality filters favored well-resourced schools, equating quality with similarity to mainstream publications.
No General-Purpose Model
- Quality filters in LLMs inadvertently suppress valid, non-mainstream voices.
- There's no truly "general-purpose" model due to inherent biases in data curation.