
Greg Brockman: OpenAI and AGI

Lex Fridman Podcast


Scaling Language Models and Reasoning

Scaling up language models like GPT-2 alone is unlikely to produce full-fledged reasoning, because the type signature of thinking involves spending variable amounts of compute to arrive at better answers, a process GPT-2 does not encode: it spends a fixed amount of compute per generated token. Small tweaks to how the model is run, such as generating a whole sequence of thoughts and keeping only the final bit, may be needed. Reasoning also seems linked to out-of-distribution generalization: refining one's mental models to handle scenarios never experienced before.
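One way to read "generate a whole sequence of thoughts and keep only the final bit" is as a scratchpad pattern: let the model emit arbitrarily many intermediate tokens (variable compute), then discard everything before a final-answer marker. The sketch below is a minimal illustration of that idea, not OpenAI's method; `generate_tokens` and the `"Final answer:"` marker are hypothetical stand-ins for whatever model interface and prompt convention you use.

```python
from typing import Callable

def answer_with_scratchpad(generate_tokens: Callable[[str], str],
                           prompt: str,
                           marker: str = "Final answer:") -> str:
    """Let the model spend variable compute on intermediate reasoning,
    then keep only the text after the final-answer marker."""
    # The model is free to produce an arbitrarily long chain of thoughts;
    # more tokens means more compute spent before committing to an answer.
    full_output = generate_tokens(prompt + "\nThink step by step.\n")
    # The reasoning is treated as scratch space and thrown away.
    _, sep, tail = full_output.rpartition(marker)
    # If the marker never appeared, fall back to the raw output.
    return tail.strip() if sep else full_output.strip()

# Toy stand-in model, purely for illustration:
fake_model = lambda p: "2 plus 2... carry nothing... Final answer: 4"
print(answer_with_scratchpad(fake_model, "What is 2 + 2?"))  # -> "4"
```

The design point is that the amount of compute is no longer fixed per answer: harder questions can consume longer scratchpads before the marker, without changing the underlying model at all.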
