

Ep 71: CEO of TurboPuffer Simon Eskildsen on Building Smarter Retrieval, AI App Must-Have Features & Current State of Vector DBs
Jul 22, 2025
In this discussion, Simon Eskildsen, co-founder and CEO of TurboPuffer, shares his insights on the challenges and advances in AI infrastructure. Drawing on his decade at Shopify, he explains why traditional databases fall short for AI applications. Simon introduces the SCRAP framework, covering scale, cost, recall, access control, and performance. He also discusses the rise of object storage-native databases and the complexities of vector search, advocating for smarter retrieval systems that improve data efficiency and performance.
AI Snips
SCRAP Framework for AI Retrieval
- The SCRAP framework outlines Scale, Cost, Recall, Access Control, and Performance as key challenges in AI-native data retrieval.
- Large context windows alone cannot satisfy these requirements at massive scale or enforce per-user data permissions.
Why Object Storage Now?
- Advances in NVMe SSDs, S3's strong consistency, and compare-and-swap (conditional writes) make object storage-native databases practical (sketched below).
- This architecture offers cost efficiency and scale in exchange for higher write latency, a trade-off well suited to search workloads.
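A minimal sketch of the compare-and-swap primitive mentioned above, assuming a boto3 version recent enough to expose S3's conditional-write parameters. The bucket name, manifest layout, and versioning scheme are illustrative assumptions, not TurboPuffer's actual commit protocol: each commit tries to create the next numbered manifest with `If-None-Match: *`, so only one concurrent writer can claim a given version.

```python
# Hypothetical sketch: optimistic commit of an index manifest on S3 using
# conditional writes (If-None-Match), i.e. compare-and-swap on object storage.
import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-index-bucket"  # hypothetical bucket name


def latest_manifest_version(prefix: str = "manifest/") -> int:
    """Find the highest committed manifest version under the prefix."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)  # sketch: ignores pagination
    versions = [
        int(obj["Key"].removeprefix(prefix).removesuffix(".json"))
        for obj in resp.get("Contents", [])
    ]
    return max(versions, default=0)


def try_commit(manifest: dict) -> bool:
    """Attempt to commit the next manifest version exactly once.

    If-None-Match: '*' makes the PUT succeed only if no object with that key
    exists yet, so two concurrent writers cannot both claim the same version.
    """
    next_version = latest_manifest_version() + 1
    key = f"manifest/{next_version:010d}.json"
    try:
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=json.dumps(manifest).encode(),
            IfNoneMatch="*",  # conditional write: fail if the key already exists
        )
        return True
    except ClientError as err:
        # S3 rejects the write when another writer created the key first
        # (412 PreconditionFailed, or a conflict during a concurrent upload);
        # the caller should re-read the latest manifest and retry.
        if err.response["Error"]["Code"] in ("PreconditionFailed", "ConditionalRequestConflict"):
            return False
        raise
```

A losing writer simply rebases on the newly committed manifest and retries, which is what gives this architecture its higher but tolerable write latency.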
Database Choice By Scale
- Use relational databases for transactional, permissioned data, with vector extensions at small scale (see the sketch after this list).
- For hundreds of millions to billions of vectors, consider specialized databases like TurboPuffer for cost and performance.
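A minimal sketch of the small-scale path, assuming Postgres with the pgvector extension and the psycopg2 driver; the table layout, embedding dimension, and connection string are illustrative assumptions, not details from the episode. The point is that embeddings, transactional data, and access control can live in the same rows until the vector count outgrows a single relational database.

```python
# Hypothetical sketch: vector search inside Postgres via pgvector,
# with permissions enforced in the same query.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # hypothetical connection string

with conn, conn.cursor() as cur:
    # One-time setup: enable pgvector and store embeddings next to the
    # transactional, permissioned data they belong to.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS documents (
            id        bigserial PRIMARY KEY,
            owner_id  bigint NOT NULL,   -- access control lives in the same row
            body      text NOT NULL,
            embedding vector(1536)       -- assumed embedding dimension
        )
    """)

    # Nearest-neighbour query filtered by permissions: <-> is pgvector's
    # L2 distance operator; the string literal is cast to the vector type.
    query_embedding = "[" + ",".join("0.0" for _ in range(1536)) + "]"  # placeholder vector
    cur.execute(
        """
        SELECT id, body
        FROM documents
        WHERE owner_id = %s
        ORDER BY embedding <-> %s::vector
        LIMIT 10
        """,
        (42, query_embedding),
    )
    results = cur.fetchall()
```

Once the corpus grows to hundreds of millions or billions of vectors, this single-node approach becomes the bottleneck, which is where the episode points to specialized, object storage-backed databases like TurboPuffer.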