1min snip

OpenAI's Noam Brown, Ilge Akkaya and Hunter Lightman on o1 and Teaching LLMs to Reason Better

Training Data

NOTE

Think Longer, Achieve More

Extending the time an AI model spends processing and thinking leads to significant emergent abilities, such as backtracking and self-correction, that improve its performance. The approach is clean and scalable, and it points to a clear path for further advances. This is the core of 'test time compute': giving a model more time to think consistently produces better outcomes, underscoring the value of re-evaluating and maximizing the resources a model can use at inference time.
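The episode does not spell out o1's actual mechanism, so the sketch below is only one illustrative way to spend extra test-time compute: best-of-N sampling with a self-check score. Here `sample_candidate` and its scoring are hypothetical placeholders for a real model rollout and verifier, not anything described in the snip.

```python
import random

def sample_candidate(prompt: str, rng: random.Random) -> tuple[str, float]:
    """Placeholder for one reasoning attempt: returns an answer and a
    self-assessed score. Stands in for a chain-of-thought rollout plus a
    verifier or self-check signal (hypothetical, for illustration only)."""
    answer = f"candidate-{rng.randint(0, 9)}"
    score = rng.random()
    return answer, score

def answer_with_budget(prompt: str, budget: int, seed: int = 0) -> str:
    """Spend more test-time compute by sampling `budget` independent
    reasoning attempts and keeping the best-scoring one (best-of-N)."""
    rng = random.Random(seed)
    best_answer, best_score = None, float("-inf")
    for _ in range(budget):
        answer, score = sample_candidate(prompt, rng)
        if score > best_score:  # keep the attempt the self-check rates highest
            best_answer, best_score = answer, score
    return best_answer

if __name__ == "__main__":
    # Raising the budget changes only how long we "think", not the model itself.
    print(answer_with_budget("Solve the puzzle", budget=4))
    print(answer_with_budget("Solve the puzzle", budget=32))
```

The only knob is `budget`: increasing it lets the system think longer without retraining or changing the model, which is the scaling behavior the note highlights.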


