Master the art of building AI agents and powerful AI teammates with Diamond Bishop, Director of Engineering and AI at Datadog. In this deep dive, we explore crucial strategies for creating self-improving agent systems, from establishing robust evaluations (evals) and observability to designing effective human-in-the-loop escape hatches. Learn how to build user trust, decide between prompt engineering and fine-tuning, and manage datasets for peak performance. Diamond shares his expert insights on architecting agents, using an LLM as a judge for quality control, and the future of ambient AI in DevSecOps. If you're looking to build your own AI assistant, this episode provides the essential principles and practical advice you need to get started and create systems that learn and improve over time.
Guest: Diamond Bishop, Director of Engineering and AI at Datadog 
Learn more about Bits AI SRE: https://www.datadoghq.com/blog/bits-ai-sre/
Datadog MCP Server for Agents: https://www.datadoghq.com/blog/datadog-remote-mcp-server/
Sign up for AI coaching for professionals at: https://www.anetic.co
Get FREE AI tools:
pip install tool-use-ai
Connect with us:
https://x.com/ToolUseAI
https://x.com/MikeBirdTech
https://x.com/diamondbishop
00:00:00 - Intro  
00:03:55 - When To Use an Agent vs a Script  
00:05:44 - How to Architect an AI Agent  
00:08:07 - Prompt Engineering vs Fine-Tuning  
00:11:29 - Building Your First Eval Suite  
00:26:06 - The Unsolved Problem in Agent Building  
00:31:10 - The Future of Local AI Models & Privacy
Subscribe for more insights on AI tools, productivity, and agents.
Tool Use is a weekly conversation with AI experts brought to you by Anetic.