4min chapter

OpenAI GPT-3: Language Models are Few-Shot Learners

Machine Learning Street Talk (MLST)

CHAPTER

The Real Problem Is Biasing

We need to stratify our language models to make sure that blue cars are represented diversely. I've seen a paper recently that makes a credible case that regularization can be one of the drivers of bias in models. We should frankly separate the subfields here, because one says that these models, by how we train them, aren't representing the data as it occurs in the world; they are not representing the cumulative distribution of the data. The other field is saying: here is a state that I want to achieve; how can I make the model achieve it? So in the one case you can legitimately speak about de-biasing or unbiasing; in the other case it's actually biasing.
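The two senses of "bias" described above can be sketched numerically: in the first, the gap of interest is between the model's output distribution and the empirical data distribution; in the second, it is between the model and a chosen target state. A minimal sketch using KL divergence, with entirely hypothetical example distributions:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical distributions over some binary attribute in generated text.
data_dist   = [0.7, 0.3]   # how the attribute actually occurs in the world
target_dist = [0.5, 0.5]   # a state someone wants the model to achieve
model_dist  = [0.6, 0.4]   # what the model currently produces

# "De-biasing" in the first sense: close the gap to the data distribution.
gap_to_data = kl(data_dist, model_dist)

# "Biasing" in the second sense: close the gap to a chosen target instead.
gap_to_target = kl(target_dist, model_dist)

print(gap_to_data, gap_to_target)
```

The point of the sketch is only that these are two different objectives: driving `gap_to_data` to zero and driving `gap_to_target` to zero generally pull the model in different directions whenever the target distribution differs from the data distribution.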

00:00
