Minimizing Attack Surface and Using Guardrails for Model Output
Restricting the model's responses to a small set of options, and requiring that it reply in a fixed format, reduces the attack surface because the model's output is constrained. Guardrails, although restrictive, keep the output within a tight box. Another approach is to maintain a large database of answers to likely user queries and have the system select from it rather than generate freely: on each query, it returns the closest matching answer available in the database, which ensures safety because only pre-approved responses can ever be emitted.
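A minimal sketch of the database-lookup approach, combined with a format guardrail, might look like the following. The vetted answer database, the similarity cutoff, and the one-sentence output format are all hypothetical choices for illustration; a production system would use a proper semantic-similarity search rather than string matching.

```python
import difflib
import re

# Hypothetical database of vetted answers, keyed by canonical user queries.
# Only these responses can ever be returned, bounding the attack surface.
VETTED_ANSWERS = {
    "how do i reset my password": "Visit the account settings page and choose 'Reset password'.",
    "what are your support hours": "Support is available 9am-5pm, Monday through Friday.",
    "how do i cancel my subscription": "Open 'Billing' in your account and select 'Cancel subscription'.",
}

# Guardrail: output must be a single plain-text sentence.
ALLOWED_FORMAT = re.compile(r"^[A-Z][\w\s,.'-]*\.$")

REFUSAL = "Sorry, I can only answer questions from my approved knowledge base."

def guarded_answer(user_query: str) -> str:
    """Answer only from the vetted database instead of generating freely."""
    # Find the closest known query; refuse if nothing is similar enough.
    matches = difflib.get_close_matches(
        user_query.lower().strip(), VETTED_ANSWERS.keys(), n=1, cutoff=0.6
    )
    if not matches:
        return REFUSAL
    answer = VETTED_ANSWERS[matches[0]]
    # Belt-and-suspenders check: reject anything escaping the fixed format.
    if not ALLOWED_FORMAT.match(answer):
        return REFUSAL
    return answer
```

Because generation is replaced by retrieval, even a successful prompt injection cannot make the system say anything outside the vetted set; the trade-off is that queries without a close match get a refusal.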