
Greg Brockman: OpenAI and AGI
Lex Fridman Podcast
Scaling Language Models and Reasoning
Scaling up language models like GPT-2 is unlikely, on its own, to produce full-fledged reasoning: the type signature of thinking involves spending variable amounts of compute to arrive at better answers, and that process is not encoded in GPT-2, which spends a fixed amount of compute per token. Small tweaks to the language model's process may be enough, such as generating a whole sequence of thoughts and keeping only the final bit as the answer. Reasoning also seems linked to out-of-distribution generalization, letting a model refine its mental model for scenarios it has never experienced.
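A minimal sketch of the "generate a whole sequence of thoughts, keep only the final bit" idea as variable inference-time compute. This is not Brockman's or OpenAI's actual method; the `sample_chain` and `score` callables are hypothetical stand-ins for a language model interface.

```python
from typing import Callable

def answer_with_thoughts(
    prompt: str,
    sample_chain: Callable[[str], str],  # hypothetical: returns "thoughts... ANSWER: ..."
    score: Callable[[str], float],       # hypothetical: higher = more confident chain
    max_chains: int = 8,
    confidence_threshold: float = 0.9,
) -> str:
    """Spend a variable amount of compute: keep sampling reasoning chains
    until one scores above threshold or the budget runs out, then return
    only the final answer, discarding the intermediate thoughts."""
    best_answer, best_score = "", float("-inf")
    for _ in range(max_chains):
        chain = sample_chain(prompt)
        s = score(chain)
        if s > best_score:
            # Keep only the final bit after the ANSWER: marker.
            best_answer = chain.rsplit("ANSWER:", 1)[-1].strip()
            best_score = s
        if best_score >= confidence_threshold:
            break  # easy question: stop early and spend less compute
    return best_answer
```

The point of the sketch is the loop structure: harder questions naturally consume more sampled chains before a confident answer emerges, whereas a plain forward pass spends the same compute on every question.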