The Ruby AI Podcast

Contracts and Code: The Realities of AI Development

Sep 23, 2025
Valentino and Joe dive into the reality behind AI salaries, discussing the gap between hype and actual compensation. They debate whether any company can truly dominate the LLM market, arguing that improvements are incremental rather than producing clear winners. The hosts explore the complexities of benchmarking AI models and the need for customized evaluation tools, discuss new OpenAI features that enhance prompt engineering, and weigh playful experimentation against standardization in Ruby, highlighting the language's role in AI development.
INSIGHT

Hyped Salaries Reflect Positioning

  • AI salary disparities are driven more by market positioning, scarcity, and equity than by absolute individual worth.
  • No clear winner of the AI race is emerging, because improvements are incremental and competitors quickly catch up.
ADVICE

Benchmark Small, High-Value Targets

  • Start benchmark testing on targeted high-risk areas, not entire large codebases.
  • Focus on files with high complexity, many dependencies, and high Flog scores to save cost and time (see the sketch after this list).
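A minimal Ruby sketch of that triage, assuming the seattlerb flog gem's Ruby API; the glob pattern and the top-10 cutoff are illustrative assumptions, not the hosts' exact workflow:

    require "flog"

    # Score each file's complexity with Flog, then rank descending so only
    # the worst offenders become benchmark targets.
    scores = Dir.glob("app/**/*.rb").map do |path|
      flog = Flog.new
      flog.flog(path)            # parse and score one file
      [path, flog.total_score]   # total complexity for that file
    end

    # Benchmark only the highest-complexity files, not the whole codebase.
    scores.sort_by { |_, score| -score }.first(10).each do |path, score|
      printf "%8.1f  %s\n", score, path
    end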
INSIGHT

Prompts Behave Like Compiled Code

  • Every LLM has different prompting quirks, so identical prompts yield different outcomes across models.
  • Treat prompts like compiled source that may need reworking per model for best results, as sketched below.
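A minimal Ruby sketch of keeping one prompt "build" per model, the way compiled artifacts differ per target; the model names and template text are invented for illustration:

    # One tuned prompt variant per model; identical intent, different wording.
    PROMPTS = {
      "gpt-4o"          => "Summarize the diff in three bullets:\n%{diff}",
      "claude-sonnet-4" => "You are a terse reviewer. Reply with exactly " \
                           "three bullet points summarizing:\n%{diff}"
    }.freeze

    # Render the prompt "build" for a given model, failing loudly if no
    # variant has been tuned for it yet.
    def prompt_for(model, diff:)
      template = PROMPTS.fetch(model) do
        raise ArgumentError, "no prompt tuned for #{model}"
      end
      format(template, diff: diff)
    end

    puts prompt_for("gpt-4o", diff: "- old\n+ new")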