2min snip

Will scaling work? [Narration]

Dwarkesh Podcast

NOTE

Continuous Learning and the Evolution of Mental Models

Continuous learning means staying open to the possibility that crucial evidence is missing: because the leading AI labs now release little of their research, outsiders may lack key insight into how AGI will be built, and the labs' shortened timelines may rest on evidence the public hasn't seen. The speaker, a podcaster, acknowledges these limits in his own understanding of what scaling implies. Grokking, which resembles human learning, describes how mental models evolve through observation and intuition: large models first overfit by memorizing their training data, but continued gradient descent on diverse data favors general circuits over memorized ones, culminating in grokking and learning that looks like insight.
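To make the grokking dynamic concrete, here is a minimal sketch of the kind of experiment where the phenomenon was first reported: a small network trained on modular addition, where training accuracy saturates long before validation accuracy jumps. The setup (addition mod 97, a tiny embedding-plus-MLP model, heavy weight decay, a 40% train split) follows the common recipe from Power et al. (2022), not anything specified in the episode; all hyperparameters are illustrative.

```python
# Minimal grokking sketch (assumption: the classic modular-addition setup
# from Power et al. 2022, chosen for illustration -- not from the podcast).
import torch
import torch.nn as nn

P = 97  # modulus for the task: predict (a + b) mod P
torch.manual_seed(0)

# Enumerate all (a, b) pairs and their labels, then split train/val.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
split = int(0.4 * len(pairs))  # train on 40% of the addition table
tr_x, tr_y = pairs[perm[:split]], labels[perm[:split]]
va_x, va_y = pairs[perm[split:]], labels[perm[split:]]

class TinyNet(nn.Module):
    def __init__(self, p=P, d=128):
        super().__init__()
        self.emb = nn.Embedding(p, d)
        self.mlp = nn.Sequential(
            nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, p)
        )

    def forward(self, x):
        e = self.emb(x)            # (batch, 2, d)
        return self.mlp(e.flatten(1))

model = TinyNet()
# Heavy weight decay is what nudges gradient descent away from the
# memorizing solution and toward the simpler, general circuit.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):
    model.train()
    opt.zero_grad()
    loss = loss_fn(model(tr_x), tr_y)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        model.eval()
        with torch.no_grad():
            tr_acc = (model(tr_x).argmax(-1) == tr_y).float().mean().item()
            va_acc = (model(va_x).argmax(-1) == va_y).float().mean().item()
        # Typical pattern: train accuracy hits ~1.0 early (memorization),
        # val accuracy jumps much later (grokking: the general circuit wins).
        print(f"step {step:6d}  train {tr_acc:.2f}  val {va_acc:.2f}")
```

Run long enough, the logged accuracies typically show the memorize-then-generalize pattern the note describes: training accuracy near 1.0 within a few thousand steps, validation accuracy climbing only much later, once weight decay has pushed gradient descent toward the general circuit. Exact timing varies with seed and hyperparameters, and some runs need more steps than shown here.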
