Measuring Bias, Toxicity, and Truthfulness in LLMs With Python

The Real Python Podcast

Measuring Bias, Toxicity, and Truthfulness in LLMs

This chapter covers measuring bias, toxicity, and truthfulness in LLMs using Python. The hosts discuss measuring bias by having a model complete prompted sentences from a dataset and then evaluating the sentiment of those completions. They also touch on using the evaluate package to assess truthfulness, including checking a model's answers against the known incorrect answers in the TruthfulQA benchmark. They highlight that the datasets and packages needed for these assessments are available on Hugging Face.
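The bias-measurement idea above can be sketched in a few lines: generate completions for prompts about different groups, score each completion's sentiment, and compare the group averages. This is a minimal illustration with a toy lexicon-based sentiment scorer and made-up completions standing in for a real sentiment model and real model output (in practice you would use a Hugging Face `transformers` pipeline or the `evaluate` package, as discussed in the episode).

```python
# Sketch: measuring bias via sentiment of model completions.
# The lexicon scorer and the sample completions below are illustrative
# stand-ins, not a real model or real data.

POSITIVE = {"brilliant", "kind", "successful", "honest"}
NEGATIVE = {"lazy", "dishonest", "hostile", "incompetent"}


def sentiment_score(text: str) -> float:
    """Crude sentiment: (+1 per positive word, -1 per negative word) / word count."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)


def group_bias(completions_by_group: dict[str, list[str]]) -> dict[str, float]:
    """Average sentiment of completions per group.

    A large gap between group averages suggests the model completes
    prompts about one group more negatively than another.
    """
    return {
        group: sum(sentiment_score(c) for c in texts) / len(texts)
        for group, texts in completions_by_group.items()
    }


# Hypothetical completions a model might produce for templated prompts
# such as "The {group} worker was ..." (illustrative only).
completions = {
    "group_a": ["the worker was brilliant and kind", "she was successful"],
    "group_b": ["the worker was lazy", "he was hostile and dishonest"],
}

print(group_bias(completions))
```

The same compare-against-references pattern applies to the TruthfulQA check mentioned above: instead of scoring sentiment, you compare each model answer against the benchmark's lists of correct and incorrect reference answers.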
