Nicolay here,
Most AI coding tools obsess over automating everything. This conversation focuses on the right
balance between human skill and AI assistance - where manual context beats web search every time.
Today I have the chance to talk to Ben Holmes, a software engineer at Warp, where they're building the
AI-first terminal.
Manual context engineering trumps automated web search for getting accurate results from
coding assistants.
Key Insight Expansion
The breakthrough insight is brutally practical: manual context construction consistently outperforms
automated web search when working with AI coding assistants. Instead of letting your AI tool search
for documentation, find the right pages yourself and feed them directly into the model's context
window.
Ben demonstrated this with OpenAI's Realtime API documentation - after an hour of back-and-forth with web search, he manually found the correct API signatures and saved them as a reference file. When building new features, he attached this curated documentation directly, resulting in immediate success rather than repeated failures from outdated or incorrect search results.
This approach works because you can verify documentation accuracy before feeding it to the AI, while
web search often returns the first result regardless of quality or recency.
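As a rough illustration of the workflow described above - a sketch only, with a hypothetical file path, placeholder model name, and example prompt, not Ben's or Warp's actual setup - hand-curated reference documentation can be fed straight into the model's context with the standard OpenAI Python SDK instead of relying on web search:

```python
from pathlib import Path
from openai import OpenAI

# Hand-curated reference file with the verified API signatures
# (hypothetical path - whatever you saved after checking the docs yourself).
reference_docs = Path("docs/realtime-api-notes.md").read_text()

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # Curated docs go in first, so the model works from verified material
        # rather than whatever a web search happens to return.
        {"role": "system", "content": f"Reference documentation:\n\n{reference_docs}"},
        {"role": "user", "content": "Add a realtime voice session to the client, following the attached API signatures."},
    ],
)
print(response.choices[0].message.content)
```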
In the podcast, we also touch on:
Why React Native might become irrelevant as AI translation between native languages improves
Model-specific strengths: Gemini excels at debugging while Claude dominates function calling
The skill of working without AI assistance - "raw dogging" code for deep learning
Warp's architecture using different models for planning (O1/O3) vs. coding (Claude/Gemini)
💡 Core Concepts
Manual Context Engineering: Curating documentation, diagrams, and reference materials directly
rather than relying on automated web search.
Model-Specific Workflows: Matching AI models to their strengths - O1 for planning, Claude for function calling, Gemini for debugging.
Raw Dog Programming: Coding without AI assistance to build fundamental skills in codebase navigation and problem-solving.
Agent Mode Architecture: Multi-model system where Claude orchestrates task distribution to specialized agents through function calls - a minimal sketch of this pattern follows below.
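The general pattern here - one orchestrating model that receives tool definitions while the host routes each call to a specialized handler - can be sketched with the Anthropic Python SDK's tool-use interface. The tool names, schemas, and model alias below are hypothetical illustrations, not Warp's actual implementation:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical specialized "agents" the orchestrator can hand work to.
TOOLS = [
    {
        "name": "plan_task",
        "description": "Produce a step-by-step plan for a coding task.",
        "input_schema": {"type": "object", "properties": {"task": {"type": "string"}}, "required": ["task"]},
    },
    {
        "name": "debug_error",
        "description": "Investigate a failing command and suggest a fix.",
        "input_schema": {"type": "object", "properties": {"stderr": {"type": "string"}}, "required": ["stderr"]},
    },
]

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=1024,
    tools=TOOLS,
    messages=[{"role": "user", "content": "My build fails with a linker error - figure out why."}],
)

# Route each tool call to whatever specialized model or agent backs it.
for block in response.content:
    if block.type == "tool_use":
        print(f"Dispatch {block.name} with input {block.input}")
```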
📶 Connect with Ben:
Twitter/X, YouTube, Discord (Warp Community), Website
📶 Connect with Nicolay:
LinkedIn, X/Twitter, Bluesky, Website, nicolay.gerold@gmail.com
⏱ Important Moments
React Native's Potential Obsolescence: [08:42] AI translation between native languages could eliminate cross-platform frameworks
Manual vs Automated Context: [51:42] Why manually curating documentation beats AI web search
Raw Dog Programming Benefits: [12:00] Value of coding without AI assistance during Ben's first week at Warp
Model-Specific Strengths: [26:00] Gemini's superior debugging vs Claude's speculative code fixes
OpenAI Desktop App Advantage: [13:44] Outperforms Cursor for reading long files
Warp's Multi-Model Architecture: [31:00] How Warp uses O1/O3 for planning, Claude for orchestration
Function Calling Accuracy: [28:30] Claude outperforms other models at chaining function calls
AI as Improv Partner: [56:06] Current AI says "yes and" to everything rather than pushing back
🛠 Tools & Tech Mentioned
Warp Terminal, OpenAI Desktop App, Cursor, Cline, Go by Example, OpenAI Realtime API, MCP
📚 Recommended Resources
Warp Discord Community, Ben's YouTube Channel, Go Programming Documentation
🔮 What's Next
Next week, we continue exploring production AI implementations with more insights into getting
generative AI systems deployed effectively.
💬 Join The Conversation
Follow How AI Is Built on YouTube, Bluesky, or Spotify. Discord coming soon!
♻ Building the platform for engineers to share production experience. Pay it forward by sharing with
one engineer facing similar challenges.