Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next

Gradient Dissent: Conversations on AI

NOTE

**Discussion about model size**

Mostaque predicts a reversal of the current trend toward ever-bigger models, arguing that smaller models will become more prevalent because much of today's model capacity is unnecessary. He questions why text models are so much larger than image models, and suggests that smaller, well-optimized models trained on a focused diet of curated data can be more effective: the amount and quality of training data, together with optimization, matter more for performance than simply increasing parameter count.
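To ground the claim that data can substitute for parameters, here is a minimal sketch (not from the episode) plugging two training budgets into the parametric scaling law from Hoffmann et al. (2022). The constants are that paper's approximate fitted values, and the GPT-3-like vs. Chinchilla-like comparison is an illustration of the general point, not something Mostaque cites directly.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022):
#     L(N, D) = E + A / N**alpha + B / D**beta
# where N is parameter count and D is training tokens.
# Constants below are the paper's approximate fits (assumption: quoted
# here for illustration, not tuned to any specific model).

E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fit coefficients
ALPHA, BETA = 0.34, 0.28       # scaling exponents for params and tokens

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# A 175B-parameter model trained on 300B tokens (GPT-3-like budget)...
big = predicted_loss(175e9, 300e9)
# ...versus a 70B-parameter model trained on 1.4T tokens
# (Chinchilla-like budget).
small = predicted_loss(70e9, 1.4e12)

print(f"175B params / 300B tokens: loss ~ {big:.3f}")    # ~2.00
print(f" 70B params / 1.4T tokens: loss ~ {small:.3f}")  # ~1.94
```

Under these fitted constants the smaller, data-heavy model reaches a lower predicted loss than the model 2.5x its size, which is the quantitative version of the argument that a focused training-data diet beats raw scale.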
