
Lowering the Cost of Intelligence With NVIDIA's Ian Buck - Ep. 284
NVIDIA AI Podcast
What mixture-of-experts (MoE) means (01:04)
Ian Buck defines MoE as splitting a model into multiple expert subnetworks so that only the experts needed for a given token are activated, lowering the compute cost per token.
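
For context, here is a minimal sketch (not code from the episode) of the top-k routing idea behind MoE: a small gate scores all experts per token, but only the top-k experts actually run. The TopKMoE class, its layer sizes, and the use of PyTorch are all assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only -- not from the episode. All sizes
# (d_model=64, num_experts=8, k=2) are hypothetical.
class TopKMoE(nn.Module):
    def __init__(self, d_model=64, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, num_experts)   # scores every expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                             # x: (tokens, d_model)
        weights, idx = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):                    # only the selected experts run,
            for e in idx[:, slot].unique().tolist():  # so compute per token stays
                mask = idx[:, slot] == e              # roughly constant as the total
                out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

tokens = torch.randn(5, 64)                           # 5 dummy tokens
print(TopKMoE()(tokens).shape)                        # torch.Size([5, 64])

The point of the per-token gate is that total parameter count can grow with the number of experts while the activated parameters per token stay fixed at k experts, which is what lowers the cost per token.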