What comes after OpenAI? Logan Kilpatrick on how you should prepare for the future of LLMs

High Agency: The Podcast for AI Builders

CHAPTER

Implications of 2.5 Million Token Context Length in AI Models

A discussion of latency and cost in models with very long context windows, highlighting context caching as a way to improve efficiency and cut expenses. Also covers strategies for shortening context to speed up interactions, and introduces the Flash model as a more cost-effective alternative to larger frontier models.
