Enforcing LLM Safety with the Guardrails Project
This chapter discusses the role of the Guardrails project in ensuring safety for production LLM applications. It explains how the project enforces correctness and quality criteria on model outputs, providing a catalog of built-in validators along with the ability to write custom checks and rules.
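As an illustration of the custom-check workflow mentioned above, the sketch below defines a small validator and attaches it to a guard using the open-source guardrails-ai library. The import paths, the `register_validator` decorator, and the `no-banned-terms` check itself are assumptions about a recent release of the library; exact names and signatures vary between versions, so treat this as a minimal sketch rather than a verbatim recipe.

```python
from guardrails import Guard
from guardrails.validators import (
    FailResult,
    PassResult,
    Validator,
    register_validator,
)


# A hypothetical custom check: fail if the model output leaks banned terms.
# The validator name and the banned-term list are illustrative, not part of
# the built-in Guardrails catalog.
@register_validator(name="no-banned-terms", data_type="string")
class NoBannedTerms(Validator):
    BANNED = ("password", "api key")

    def validate(self, value, metadata):
        found = [term for term in self.BANNED if term in value.lower()]
        if found:
            return FailResult(
                error_message=f"Output mentions banned terms: {found}"
            )
        return PassResult()


# Wrap the custom check in a Guard and validate a raw LLM output string.
guard = Guard.from_string(validators=[NoBannedTerms(on_fail="exception")])
guard.parse("Here is a summary of the quarterly report.")   # passes
# guard.parse("The admin password is hunter2")              # would raise
```

The `on_fail` policy controls what happens when a check fails; Guardrails offers strategies such as raising an exception, filtering the offending value, or re-asking the model, which is how the correctness and quality criteria described in the chapter get enforced at runtime.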