

927: Automating Code Review with AI, feat. CodeRabbit’s David Loker
Sep 30, 2025
David Loker, Director of AI at CodeRabbit, discusses automating code reviews with AI, emphasizing its role in improving developer workflows. He explains how CodeRabbit offers real-time feedback, tackling the challenges of agentic AI and context engineering. Loker shares insights on 'vibe coding', the future of AI creativity, and the importance of privacy with zero-data-retention policies. He also contrasts traditional and modern coding productivity measures, advocating for a holistic approach to developer satisfaction beyond just lines of code.
AI Snips
AI Speeds Reviews While Preserving Quality
- CodeRabbit shortens PR-to-merge time by automating repetitive review tasks while preserving quality when AI generates code.
- The tool reduces manual effort so engineers can focus on building rather than tedious line-by-line checks.
Pipeline Versus Agentic AI Balance
- Agentic AI gives LLMs tools and a looped planning-action cycle, while pipeline AI prescribes a fixed, ordered sequence of steps.
- Hybrid systems combine both, gaining flexibility while keeping controllable guardrails for reliability; a minimal sketch follows below.
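The contrast can be made concrete in a few lines. The sketch below is illustrative only, not CodeRabbit's architecture; `call_llm`, `read_file`, and `search_repo` are hypothetical stand-ins. A pipeline runs a fixed sequence of prompts, the agentic loop lets the model choose whitelisted tools inside a bounded plan-act cycle, and the hybrid runs one after the other.

```python
# Illustrative sketch only; not CodeRabbit's implementation. `call_llm` and the
# tools below are hypothetical stand-ins, not a real API.
from typing import Callable

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM client call.
    return "FINISH"

# Pipeline AI: a fixed, ordered sequence of steps.
def pipeline_review(diff: str) -> str:
    summary = call_llm(f"Summarize this diff:\n{diff}")
    issues = call_llm(f"List potential bugs in this diff:\n{diff}")
    return call_llm(f"Write a review given summary:\n{summary}\nand issues:\n{issues}")

# Agentic AI: a looped plan-act cycle where the model picks tools,
# bounded by guardrails (step limit, tool whitelist).
TOOLS: dict[str, Callable[[str], str]] = {
    "read_file": lambda path: open(path).read(),            # assumed tool
    "search_repo": lambda query: f"matches for {query!r}",  # assumed tool
}

def agentic_review(diff: str, max_steps: int = 5) -> str:
    context = diff
    for _ in range(max_steps):                       # guardrail: bounded loop
        plan = call_llm(f"Context:\n{context}\nReply 'tool:arg' or 'FINISH'.")
        if plan.strip().startswith("FINISH"):
            break
        tool, _, arg = plan.partition(":")
        if tool.strip() in TOOLS:                    # guardrail: tool whitelist
            context += "\n" + TOOLS[tool.strip()](arg.strip())
    return call_llm(f"Write the final review from:\n{context}")

# Hybrid: predictable fixed steps first, then a bounded agentic pass.
def hybrid_review(diff: str) -> str:
    draft = pipeline_review(diff)
    return agentic_review(draft, max_steps=3)
```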
Engineer Context; Prune Irrelevant Data
- Engineer context deliberately: feed LLMs intent, dependencies, linked issues, and related docs to reduce guesswork.
- Prune irrelevant data, because larger context windows can degrade LLM focus and accuracy; a pruning sketch follows below.
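As a rough illustration of that kind of context engineering, one might rank candidate sources, drop low-relevance ones, and stop before a token budget is exceeded. The `ContextItem` fields, relevance scores, and token heuristic below are invented for this sketch and are not CodeRabbit's method.

```python
# Illustrative sketch of context pruning; not CodeRabbit's implementation.
from dataclasses import dataclass

@dataclass
class ContextItem:
    label: str        # e.g. "PR description", "linked issue", "dependency doc"
    text: str
    relevance: float  # assumed score from a retriever or heuristic, 0..1

def build_review_context(items: list[ContextItem],
                         token_budget: int = 8000,
                         min_relevance: float = 0.4) -> str:
    """Keep the most relevant items and stop before the budget is exceeded:
    prune low-relevance data rather than filling the whole context window."""
    kept, used = [], 0
    for item in sorted(items, key=lambda i: i.relevance, reverse=True):
        cost = len(item.text) // 4          # rough token estimate
        if item.relevance < min_relevance or used + cost > token_budget:
            continue                        # prune: irrelevant or over budget
        kept.append(f"## {item.label}\n{item.text}")
        used += cost
    return "\n\n".join(kept)

# Example usage with hypothetical inputs.
context = build_review_context([
    ContextItem("PR description (intent)", "Refactor auth middleware...", 0.9),
    ContextItem("Linked issue #123", "Sessions expire too early...", 0.8),
    ContextItem("Unrelated changelog", "v1.2 release notes...", 0.1),
])
```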