
SE Radio 610: Phillip Carter on Observability for Large Language Models
Software Engineering Radio - the podcast for professional software developers
Enhancing Model Behavior through Prompt Engineering and Observability
This chapter explores prompt engineering for large language models: programmatically generating prompts, parameterizing examples, and steering model behavior. It highlights the role of observability in replaying requests, editing prompts systematically, and analyzing the effect of each change on prompt quality and overall system behavior. The discussion also covers implementing observability for LLMs, tracking errors, and the improved tooling and practices that observability engineering needs in order to deliver better system performance.
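To make these ideas concrete, here is a minimal sketch of a programmatically generated, parameterized prompt instrumented with OpenTelemetry, in the spirit of the approach discussed. The template, attribute names, and the `call_model` function are illustrative assumptions, not the episode's or any library's actual API; `call_model` stands in for whatever LLM client is in use.

```python
# Minimal sketch: a parameterized prompt built programmatically and wrapped
# in an OpenTelemetry span so each request can be replayed and analyzed.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Hypothetical template; real systems would assemble this from app context.
PROMPT_TEMPLATE = (
    "You are a query assistant.\n"
    "Schema: {schema}\n"
    "Examples:\n{examples}\n"
    "User request: {user_input}\n"
)


def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call (e.g. an SDK's
    chat-completion method). Returns a canned response so the sketch runs."""
    return "stubbed model response"


def run_prompt(schema: str, examples: list[str], user_input: str) -> str:
    # Build the prompt programmatically from its parameterized pieces.
    prompt = PROMPT_TEMPLATE.format(
        schema=schema,
        examples="\n".join(examples),
        user_input=user_input,
    )
    with tracer.start_as_current_span("llm.completion") as span:
        # Capture enough context to replay this exact request later.
        span.set_attribute("llm.prompt", prompt)
        span.set_attribute("llm.example_count", len(examples))
        try:
            response = call_model(prompt)
            span.set_attribute("llm.response", response)
            return response
        except Exception as exc:
            # Track errors alongside the prompt that produced them.
            span.record_exception(exc)
            span.set_status(trace.Status(trace.StatusCode.ERROR))
            raise
```

With the prompt, its parameters, and the response recorded as span attributes, a problematic request can be replayed with an edited prompt and the two traces compared, which is the systematic edit-and-analyze loop the chapter describes.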