2-min snip

John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Dwarkesh Podcast

NOTE

**Pre-training and post-training**

Pre-training trains a model to imitate all the content on the internet, maximizing likelihood by predicting the next token given the previous tokens. The result is a model that can generate content resembling random web pages and assign a probability to any piece of text. Post-training, by contrast, targets a narrower range of behavior, such as a helpful chat-assistant persona: the objective is to produce outputs that humans will like and find useful, rather than merely imitating raw content from the web.
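
In concrete terms, "maximize likelihood by predicting the next token" is ordinary next-token cross-entropy. A minimal sketch in PyTorch, with a toy bigram-style model standing in for a real transformer (the model, sizes, and data here are illustrative assumptions, not from the episode):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "language model": embedding + linear head. This is only a
# bigram-level illustration of the objective; real pre-training
# uses a large transformer over internet-scale text.
vocab_size, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))

tokens = torch.randint(0, vocab_size, (4, 16))  # (batch, seq_len) token ids
logits = model(tokens[:, :-1])                  # predict token t+1 from token t
loss = F.cross_entropy(                         # minimizing NLL == maximizing likelihood
    logits.reshape(-1, vocab_size),             # (batch * (seq_len-1), vocab)
    tokens[:, 1:].reshape(-1),                  # the actual next tokens
)
loss.backward()                                 # gradient step toward higher likelihood
```

Post-training then swaps this imitation objective for one that scores outputs by human preference (e.g. a learned reward model in RLHF) rather than by raw likelihood of web text.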
