5min chapter

Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0

LLMs Everywhere: Running 70B models in browsers and iPhones using MLC — with Tianqi Chen of CMU / OctoML


CHAPTER

The Future of Machine Learning Compilation

Machine learning compilation is still kind of a nascent field. We are not restricted by, you know, the libraries that vendors have to offer. That's why we're able to run on Apple M2 or WebGPU, where there's no library available: because we are automatically generating the libraries. It makes it easier to support less well-supported hardware from a runtime perspective. AMD, I think before their ROCm driver, was not very well supported. Recently, they are getting good. But even before that, we were able to support AMD through the GPU graphics backend, Vulkan, which is not as performant, but gives you better portability.
