
Cost/Performance Optimization with LLMs [Panel]

MLOps.community


How to Improve the Latency of Your OctoML Plugins

I'm really happy to hear what Daniel and the others are talking about, because if you're in our world, where you're just talking to some external API, your hands are so tied. For example, for GPT-4 on the shared pool of capacity, the latency I'm seeing is something like 100 milliseconds per token. So this is something we can really see from the panel: if you want to be really fast, you need to move it in house. But let's say you're at this first stage where you still need to understand the cost and the speed. There's a small set of tricks, like semantic caching and what Mario mentioned, that you can do. Maybe Jared, you also have
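The snippet cites roughly 100 milliseconds per token for GPT-4 on the shared capacity pool; at that rate a 500-token completion takes on the order of 50 seconds, which is why avoiding repeat calls matters. Below is a minimal sketch of the semantic-caching trick mentioned in the panel: embed each prompt, compare it against previously answered prompts by cosine similarity, and only call the LLM on a miss. The `embed` and `call_llm` callables and the 0.9 similarity threshold are hypothetical placeholders, not details from the panel.

```python
# Minimal semantic-caching sketch, assuming you supply an embedding function
# and an LLM call. A linear scan is used for clarity; a vector index would
# replace it at scale.
from typing import Callable, List, Tuple
import numpy as np


class SemanticCache:
    def __init__(self,
                 embed: Callable[[str], np.ndarray],    # hypothetical embedding model
                 call_llm: Callable[[str], str],        # hypothetical LLM API call
                 threshold: float = 0.9):               # assumed similarity cutoff, tune per use case
        self.embed = embed
        self.call_llm = call_llm
        self.threshold = threshold
        self.entries: List[Tuple[np.ndarray, str]] = []  # (normalized embedding, cached response)

    def query(self, prompt: str) -> str:
        q = self.embed(prompt)
        q = q / np.linalg.norm(q)
        # Cache hit: a previously seen prompt is close enough, so skip the API call.
        for emb, response in self.entries:
            if float(np.dot(q, emb)) >= self.threshold:
                return response
        # Cache miss: pay the full API latency once, then store the result.
        response = self.call_llm(prompt)
        self.entries.append((q, response))
        return response
```

The main design choice is the threshold: set it too low and unrelated prompts return stale answers, too high and near-duplicate prompts still hit the API.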

