
Lowering the Cost of Intelligence With NVIDIA's Ian Buck - Ep. 284

NVIDIA AI Podcast


What mixture-of-experts (MoE) means

Ian Buck defines MoE as splitting a model into specialized experts so that only the subset of neurons needed for a given token is activated, lowering the compute cost per token.
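The routing idea described above can be sketched in a few lines of NumPy. This is a minimal illustration, not NVIDIA's implementation: the names (`router`, `experts`, `top_k`) and the use of random matrices as expert layers are assumptions chosen to show how a gate selects and runs only `top_k` of the available experts.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Hypothetical parameters: one router matrix plus a small weight
# matrix per expert (random stand-ins for trained FFN experts).
router = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """Route a token vector to its top_k experts; only those run."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]   # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only top_k of the n_experts matrices are multiplied —
    # the other experts' parameters stay idle for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
out = moe_layer(token)
```

With `top_k = 2` of 4 experts, each token pays for roughly half the expert compute of an equivalent dense layer, which is the cost reduction the episode refers to.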

Segment begins at 01:04 in the episode.
