
Decompiling Dreams: A New Approach to ARC? - Alessandro Palmarini

Machine Learning Street Talk (MLST)


Optimizing Learning: Balancing Data and Computational Efficiency

This chapter contrasts stochastic gradient descent, the workhorse of deep learning, with program search, comparing their computational efficiency and data requirements. It introduces a combinatorial search method that adapts its search order based on feedback, aiming to make the learning process more efficient.
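The episode summary does not specify the search method's details, but the idea of combinatorial program search that adapts based on feedback can be sketched in miniature. The example below is purely illustrative (not Palmarini's actual algorithm): it enumerates programs over a tiny arithmetic DSL, and primitives that appear in successful solutions are promoted so they are tried earlier on later tasks. All names (`PRIMITIVES`, `solve_tasks`, etc.) are invented for this sketch.

```python
import itertools

# Hypothetical tiny DSL of unary integer functions; a "program" is a
# sequence of primitive names applied left to right.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def search(examples, order, max_len=3):
    """Return the first program consistent with all (input, output) pairs,
    enumerating primitives in the given order (shorter programs first)."""
    for length in range(1, max_len + 1):
        for program in itertools.product(order, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return list(program)
    return None

def solve_tasks(tasks):
    # Feedback loop: usage counts of primitives in solved tasks bias the
    # enumeration order, so the search adapts to the tasks seen so far.
    counts = {name: 0 for name in PRIMITIVES}
    solutions = []
    for examples in tasks:
        order = sorted(PRIMITIVES, key=lambda n: -counts[n])
        program = search(examples, order)
        if program:
            for name in program:
                counts[name] += 1
        solutions.append(program)
    return solutions
```

Unlike gradient descent, which needs many samples to adjust continuous weights, a search like this can fit a task from two or three examples, at the cost of combinatorial blow-up as programs grow; the feedback-driven reordering is one way to keep that blow-up in check.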
