
Ep 33: CTO and Co-Founder of Sourcegraph on Current Landscape and Future of Software Development, How to Make RAG Better, and Building Towards the Agentic Future
Unsupervised Learning
Exploring the Shift Towards Local Model Inference and Developer Sensitivity to Latency
This chapter explores the benefits of running model inference locally for privacy, cost, and speed, emphasizing how fast inference times and quick response rates in AI systems, aided by local computation, enhance the coding experience.