Emily: We have a mixture of experts meets instruction tuning, a winning combination for large language models. MoE is a standard kind of technique that adds learnable parameters to any type of model, really, without increasing inference cost. And yeah, this is an empirical study that shows that mixture of experts combined with instruction tuning outperforms dense models. This is huge because we want our LLMs to be as powerful as possible but don't want them to be as expensive as possible.
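To make the quote concrete, here is a minimal sketch of a sparse mixture-of-experts layer in PyTorch: the expert parameters grow with the number of experts, but each token is routed to only a few of them, so per-token compute stays roughly constant. This is an illustrative sketch, not the implementation from the paper discussed in the episode; the names (SparseMoE, num_experts, top_k) and dimensions are assumptions for the example.

```python
# Sketch of a sparse Mixture-of-Experts layer: many experts add parameters,
# but each token only runs through its top_k experts at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is a small feed-forward block; total parameters scale
        # with num_experts, but only top_k experts execute per token.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden),
                          nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):  # x: (batch, seq, d_model)
        logits = self.router(x)                        # (batch, seq, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1) # pick top_k experts per token
        weights = F.softmax(weights, dim=-1)           # normalize routing weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: tokens = torch.randn(2, 10, 64); y = SparseMoE()(tokens)
```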
Our 130th episode with a summary and discussion of last week's big AI news!
Co-hosted this week by Jon Krohn of the Super Data Science Podcast.
Correction: Elon Musk's company is named xAI, not x.AI.
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai
Timestamps + links:
- (00:00) Intro / Banter
- (07:30) Response to listener comments / corrections
- Tools & Apps
- Applications & Business
- Projects & Open Source
- Research & Advancements
- Policy & Safety
- Synthetic Media & Art
- (01:44:20) Outro