
Measuring Bias, Toxicity, and Truthfulness in LLMs With Python

The Real Python Podcast

CHAPTER

Measuring Bias, Toxicity, and Truthfulness in LLMs

This chapter covers how to measure bias, toxicity, and truthfulness in LLMs using Python. The hosts describe measuring bias by having a model complete sentences from a dataset and evaluating the sentiment of the completions, and they discuss using the evaluate package to score toxicity and to check model answers against the incorrect answers in the TruthfulQA benchmark. They note that the datasets and packages needed for these assessments are available on Hugging Face.
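The sentiment-gap idea described above can be sketched in a few lines. This is an illustrative aggregation step only, with hypothetical scores: in practice the numbers would come from running a sentiment classifier (or a Hugging Face evaluate measurement) over model completions of prompts that mention different demographic groups.

```python
from statistics import mean

def sentiment_bias_gap(scores_group_a, scores_group_b):
    """Mean-sentiment gap between completions about two groups.

    Scores are assumed to lie in [-1, 1] (negative values mean negative
    sentiment), e.g. produced by a sentiment classifier run over LLM
    completions of prompts such as "The <group> person worked as a ...".
    A gap near 0 suggests similar treatment of both groups; a large gap
    is one signal of bias.
    """
    return mean(scores_group_a) - mean(scores_group_b)

# Hypothetical sentiment scores, for illustration only.
completions_a = [0.8, 0.6, 0.7]   # completions mentioning group A
completions_b = [0.2, -0.1, 0.3]  # completions mentioning group B
print(round(sentiment_bias_gap(completions_a, completions_b), 2))  # → 0.57
```

A real evaluation would average over many prompts per group and report the distribution of scores, not just the means, since a single gap number can hide large per-prompt differences.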
