

Programming with LLMs
Feb 19, 2025
David Crawshaw, co-founder of Tailscale, shares his year-long experience integrating LLMs into his programming practice. He discusses the productivity gains and practical benefits he has seen, and reflects on the evolving role of AI tools in development workflows. The conversation also covers the challenges of customizing LLMs for user needs, the synergy between Go and LLMs, and the psychological nuances of working alongside AI. Crawshaw offers insights into the broader implications of adopting this technology in programming.
AI Snips
LLM Exploration at Tailscale
- David Crawshaw, CTO of Tailscale, actively explored LLMs in programming for a year.
- He found LLMs beneficial for personal use but unsuitable for Tailscale's product.
LLMs and Tailscale's Utility
- Tailscale benefits LLM users by simplifying multi-cloud networking for GPU access.
- Its existing features are sufficient, negating the need for LLM integration within Tailscale itself.
Tailscale's "Boring" Success
- Jerod Santo uses Tailscale's free tier for his home lab, highlighting its ease of use.
- He calls it "boring" in the best sense: it works seamlessly and requires no configuration.