AF - Your LLM Judge may be biased by Rachel Freedman

The Nonlinear Library

Exploring Bias and Mitigation Techniques in Language Models

This episode explores the benefits of few-shot prompting for reducing biases in language models, including methods such as prompt permutation and validation against human judgments. It highlights biases found when testing Llama 2 as a judge, notably a preference for responses labeled 'B', and advises caution when using LLMs for qualitative assessments.
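Prompt permutation means querying the judge under every ordering of the candidate answers and checking whether the winner changes. The sketch below illustrates the idea; `biased_judge` is a hypothetical stand-in for a real LLM call (not an API from the episode) that always picks option 'B', mimicking the label bias described above.

```python
from itertools import permutations

def build_prompt(question, answers):
    """Format a pairwise-comparison prompt with labeled options."""
    options = "\n".join(f"({lbl}) {ans}" for lbl, ans in zip("AB", answers))
    return f"Question: {question}\nWhich answer is better?\n{options}\nReply with A or B."

def biased_judge(prompt):
    """Hypothetical stand-in for an LLM judge that always prefers label B."""
    return "B"

def judge_with_permutation(question, answer_pair, judge):
    """Query the judge under both orderings and return the winning *answer*
    for each ordering, so position/label bias becomes visible."""
    winners = []
    for ordering in permutations(answer_pair):
        choice = judge(build_prompt(question, ordering))
        winners.append(ordering[0] if choice == "A" else ordering[1])
    return winners

winners = judge_with_permutation("What is 2+2?", ("4", "5"), biased_judge)
# An unbiased judge picks the same answer under both orderings;
# here the winner flips with the ordering, exposing the bias.
consistent = winners[0] == winners[1]
```

When the winner depends on ordering, the comparison should be discarded or re-run, rather than trusting either verdict.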
