
801: Merged LLMs Are Smaller And More Capable, with Arcee AI's Mark McQuade and Charles Goddard
Super Data Science: ML & AI Podcast with Jon Krohn
00:00
Enhancing Language Models Through Model Merging and Innovative Training Approaches
This chapter delves into model merging with Arcee AI's open-source MergeKit, which combines the strengths of multiple LLMs without inflating the parameter count. The guests introduce the Spectrum project, which trains an LLM at a significantly reduced cost by strategically freezing specific modules, and they discuss techniques such as mixture of agents, mixture of experts, and sparse upcycling for boosting the effectiveness and performance of language models.
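As a rough illustration of the merging workflow discussed here, MergeKit is driven by a YAML configuration. The sketch below shows a simple linear (weighted-average) merge of two hypothetical 7B checkpoints; the model names are placeholders, and the exact fields available depend on the merge method chosen:

```yaml
# Hedged sketch of a MergeKit config: a linear merge of two
# hypothetical models (names are illustrative, not from the episode).
models:
  - model: org-a/model-7b        # placeholder checkpoint
    parameters:
      weight: 0.5                # contribution to the averaged weights
  - model: org-b/model-7b        # placeholder checkpoint
    parameters:
      weight: 0.5
merge_method: linear             # other methods include slerp, ties, dare_ties
dtype: bfloat16
```

A config like this is typically run with MergeKit's command-line entry point, e.g. `mergekit-yaml config.yml ./merged-model`, producing a single merged checkpoint whose parameter count matches the inputs rather than their sum.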