1min snip

ARCHIVE: Open Models (with Arthur Mensch) and Video Models (with Stefano Ermon)

AI + a16z

NOTE

Efficient Models with Sparse Mixture of Experts

Mistral introduced a sparse mixture-of-experts architecture, in which the dense feed-forward layers of a transformer are duplicated into a set of experts and each token is routed to only a few of them for processing. The result is far fewer active parameters per execution: roughly 12 billion parameters run per token out of 46 billion total. This improves performance, latency, throughput, and efficiency, outperforming even a highly compressed 12-billion-parameter dense transformer. Sparse mixture of experts proves more efficient during both training and inference.
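Below is a minimal sketch of the routing idea described above, written in PyTorch. It is not Mistral's implementation; the dimensions, expert count, and top-k value are illustrative placeholders chosen to show how only the selected experts' parameters execute for each token.

```python
# Minimal sparse mixture-of-experts layer (illustrative sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        # Each expert is a copy of the transformer's dense feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x):                      # x: (batch, seq, d_model)
        tokens = x.reshape(-1, x.size(-1))     # flatten to (num_tokens, d_model)
        logits = self.router(tokens)           # (num_tokens, num_experts)
        weights, chosen = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over chosen experts

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            # Run expert e only on the tokens routed to it; the other
            # experts' parameters are never executed for those tokens.
            token_idx, slot = (chosen == e).nonzero(as_tuple=True)
            if token_idx.numel():
                out[token_idx] += (weights[token_idx, slot].unsqueeze(-1)
                                   * expert(tokens[token_idx]))
        return out.reshape_as(x)
```

With 8 experts and top-2 routing, each token activates only a quarter of the expert parameters, which is the same kind of active-versus-total ratio as the roughly 12-billion-of-46-billion figure described in the note.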
