FrugalGPT: Better Quality and Lower Cost for LLM Applications // Lingjiao Chen // MLOps Podcast #172

Approximating Performance with Cache Layer

Chen and the host discuss approximating language-model performance with a cache layer: responses to past queries are stored and served again for sufficiently similar new queries, reducing how often the model itself has to be called.
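The idea described above can be sketched in a few lines. This is a minimal illustration, not FrugalGPT's actual implementation: all names are hypothetical, and word-overlap (Jaccard) similarity stands in for the embedding-based similarity a real semantic cache would use.

```python
# Sketch of the cache layer discussed: before calling the LLM, check
# whether a sufficiently similar query was already answered. Jaccard
# word overlap is a toy stand-in for embedding similarity.

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries: list[tuple[str, str]] = []  # (query, answer) pairs

    def get(self, query: str):
        # Return the cached answer of the most similar past query,
        # if its similarity clears the threshold; otherwise None.
        best = max(self.entries, key=lambda e: jaccard(query, e[0]), default=None)
        if best and jaccard(query, best[0]) >= self.threshold:
            return best[1]  # cache hit: no model call needed
        return None

    def put(self, query: str, answer: str):
        self.entries.append((query, answer))

def answer(query: str, cache: SemanticCache, call_model) -> str:
    cached = cache.get(query)
    if cached is not None:
        return cached
    result = call_model(query)  # the expensive LLM call, only on a miss
    cache.put(query, result)
    return result
```

The threshold trades cost savings against answer quality: a lower threshold serves more queries from the cache but risks returning answers to questions that are merely similar, not equivalent.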
