Explore the concept of selective language modeling introduced in the Rho-1 research paper: training only on useful tokens that align with the desired distribution, for faster model optimization. The episode challenges conventional training methods by proposing selective training based on each token's learning progress.
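The selective-training idea discussed here can be sketched roughly as follows. This is an illustrative toy, not code from the paper: the core move is to compute per-token losses, rank tokens by their excess loss relative to a reference model, and average the training loss over only the top fraction (function names and numbers are made up for illustration).

```python
def selective_loss(train_losses, ref_losses, keep_ratio=0.5):
    """Average training loss over the top `keep_ratio` fraction of tokens,
    ranked by excess loss (training loss minus reference-model loss)."""
    excess = [t - r for t, r in zip(train_losses, ref_losses)]
    k = max(1, int(len(excess) * keep_ratio))
    # Indices of the k tokens with the largest excess loss: these are the
    # tokens the model still has the most to learn from.
    ranked = sorted(range(len(excess)), key=lambda i: excess[i], reverse=True)
    selected = ranked[:k]
    return sum(train_losses[i] for i in selected) / k

# Illustrative per-token cross-entropy losses from the model being trained
# and from a high-quality reference model.
train = [2.0, 0.5, 3.0, 0.4]
ref   = [0.5, 0.4, 0.5, 0.5]
print(selective_loss(train, ref, keep_ratio=0.5))  # -> 2.5 (tokens 0 and 2)
```

With `keep_ratio=0.5`, only the two tokens with the largest gap over the reference model contribute to the loss, so gradient updates focus on tokens the model has not yet learned.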
Our 163rd episode with a summary and discussion of last week's big AI news!
Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai
Timestamps + links:
- Intro / Banter
- Tools & Apps
- Applications & Business
- Projects & Open Source
- Research & Advancements
- Policy & Safety
- Synthetic Media & Art