Kieran Klaassen, founder of Cora and an entrepreneur in residence at Every, shares how he works as a solo engineer managing AI agents. He discusses treating AI like junior developers and emphasizes 'compound engineering': turning every interaction into reusable coding patterns. The conversation covers strategies for managing pull requests, using Git worktrees to run multiple AI agents in parallel, and the importance of effective user feedback. Kieran's approach shifts the focus from raw code generation to optimizing workflows and collaboration in software development.
Episode length: 01:12:24
ADVICE
Manage Agents Like Junior Devs
Manage AI agents like junior developers: define taste, systems, and guardrails rather than micromanaging every step.
Focus on systems thinking, research, and structured review skills to scale with agents.
ADVICE
Automate And Tier Pull Request Reviews
Treat pull request review as the real bottleneck and invest in tiered checklists and automation.
Use AI to run initial review passes and static scanners, and enforce stricter checks for high-risk changes.
ADVICE
Extract Review Taste Into Commands
Record pair review sessions and extract your taste into reusable prompts and slash-commands.
Feed transcriptions to Claude to turn review habits into automated review commands.
Most AI coding conversations focus on which model to use. This one focuses on workflow - the specific commands, git strategies, and review processes that let one engineer ship production code with AI agents doing 80% of the work.
Today I have the chance to talk to Kieran Klaassen, who built Cora (an AI email management tool) almost entirely solo using AI agents.
His approach: treat AI agents like junior developers you manage, not tools you operate.
The key insight centers on "compound engineering" - extracting reusable systems from every code review and interaction. Instead of just reviewing pull requests, Kieran records his review sessions with his colleague, transcribes them, and feeds the transcriptions to Claude to extract coding patterns and philosophical approaches into custom slash commands.
In the podcast, we also touch on:
Git worktrees for running multiple AI agents simultaneously
The evolution from Cursor Composer to Claude Code and Friday
Why pull request review is the real bottleneck, not code generation
How to structure research phases to avoid AI going off the rails
and more
💡 Core Concepts
Compound Engineering: Extracting reusable systems, SOPs, and taste from every AI interaction - treating each code review or feature build as an opportunity to teach the AI your standards and decision-making patterns.
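One concrete way to persist that extracted taste is a project slash command. Claude Code picks up markdown files in a project's `.claude/commands/` directory as custom slash commands; the sketch below is an illustrative distillation of a review command, not Kieran's actual prompt, and the specific review criteria are assumptions.

```shell
# Turn extracted review taste into a reusable Claude Code project command.
# The command body below is illustrative, not the actual prompt from the episode.
mkdir -p .claude/commands
cat > .claude/commands/review.md <<'EOF'
Review the current diff the way our team does:
- Flag N+1 queries and missing database indexes.
- Prefer small, well-named service objects over fat controllers.
- For payments or migrations code, apply the strict checklist before approving.
Summarize findings as a draft pull request comment.
EOF
```

Inside a Claude Code session this then runs as `/review`, so the taste captured from one review session applies to every future one.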
Git Worktrees for AI Agents: Running multiple AI coding agents simultaneously by checking out different branches in separate file system directories, allowing parallel feature development without conflicts.
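The worktree setup can be sketched in a few commands. This creates a scratch repo for demonstration (in real use it would be your project repo); the branch and directory names are illustrative.

```shell
set -e
# Scratch repo standing in for the real project checkout
git init -q cora && cd cora
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree per agent task: each gets its own branch and directory,
# so agents can edit files in parallel without clobbering each other
git worktree add ../cora-feature-search -b feature/search
git worktree add ../cora-fix-threading -b fix/threading

# Each directory is a full checkout; launch one agent per terminal, e.g.:
#   (cd ../cora-feature-search && claude)
#   (cd ../cora-fix-threading && claude)
git worktree list   # main checkout plus the two agent worktrees
```

When a branch merges, `git worktree remove <dir>` cleans up that agent's directory.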
Research-First AI Development: Starting every feature with a dedicated research phase where AI gathers context, explores multiple approaches, and creates detailed GitHub issues before any code is written.
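A minimal way to enforce the research-first phase is to hand the agent a written kickoff prompt that forbids code changes. The wording below is a reconstruction of the idea, not Kieran's exact prompt.

```shell
# Research-first kickoff: the agent must produce a plan before touching code.
# Prompt wording is illustrative, not taken from the episode.
cat > research-prompt.md <<'EOF'
Before writing any code, research this feature:
1. Explore the codebase and list every file the change will touch.
2. Propose at least two implementation approaches with trade-offs.
3. Draft a GitHub issue with the chosen approach and acceptance criteria.
Do not modify any files yet.
EOF
# Then feed it to the agent, and file the resulting plan, e.g.:
#   claude "$(cat research-prompt.md)"
#   gh issue create --title "Feature: ..." --body-file research-plan.md
```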
Tiered Code Review Systems: Implementing different review checklists and standards based on risk level (payments, migrations, etc.) with AI assistants handling initial passes before human review.
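The tiering step itself can be a small script that routes a pull request to a stricter checklist when it touches high-risk paths. The path patterns and tier names below are assumptions for illustration, not from the episode.

```shell
# Pick a review tier from the changed paths; patterns are illustrative.
# In CI you would derive the list with: git diff --name-only origin/main...HEAD
changed_files="app/services/drafts.rb db/migrate/20240101_add_index.rb"

tier=standard
for f in $changed_files; do
  case "$f" in
    app/payments/*|db/migrate/*) tier=high-risk ;;   # stricter checklist
  esac
done

echo "review tier: $tier"   # prints: review tier: high-risk
```

The resulting tier can then select which checklist the AI reviewer's prompt includes before any human looks at the diff.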
📌 Key Moments
The Sonnet 3.5 Breakthrough Moment: [09:30] Kieran describes vibe-coding a Swift app in one evening, realizing AI could support solo entrepreneurship for the first time.
Building Cora's First Prototype: [12:45] One night to build a prototype that drafts email responses - the moment they knew there was something special about AI handling email.
The Nice, France Experiment: [13:40] Testing automatic email archiving while walking around town, discovering the "calm feeling" that became Cora's core value proposition.
Git Worktrees Discovery: [50:50] How Kieran discovered worktrees by asking AI for a solution to run multiple agents simultaneously, leading to his current parallel development workflow.
Cursor 3.7 Breaking Point: [19:57] The moment Cursor became unusable after shipping too many changes at once, forcing the search for better agentic tools.
Friday vs Claude Code Comparison: [22:23] Why Friday's "YOLO mode" and end-to-end pull request creation felt more like having a colleague than using a tool.
Compound Engineering Philosophy: [33:18] Recording code review sessions and extracting engineering taste into reusable Claude commands for future development.
The Research Phase Strategy: [04:48] Why starting with comprehensive GitHub issue research prevents AI agents from going off-rails during implementation.
Pull Request Review Bottleneck: [28:44] How reviewing AI-generated code, not writing it, becomes the main constraint when scaling with agents.
Multiple Agent Management: [48:14] Running Claude Code in parallel terminals, one per git worktree, treating each agent as a separate team member with distinct tasks.