
Decompiling Dreams: A New Approach to ARC? - Alessandro Palmarini

Machine Learning Street Talk (MLST)

CHAPTER

Optimizing Learning: Balancing Data and Computational Efficiency

This chapter compares stochastic gradient descent and program search in deep learning, focusing on their computational efficiency and data requirements. It introduces a new combinatorial search method that adapts based on feedback, aiming to make the learning process more efficient.
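To make the contrast concrete, here is a minimal, hypothetical sketch of the two search styles the summary names: gradient descent over continuous parameters versus a combinatorial search over discrete candidates whose proposals adapt to feedback. The toy task, the candidate grid, and the 70/30 proposal rule are illustrative assumptions, not details taken from the episode.

```python
import random

# Toy task: recover f(x) = 2*x + 1 from a handful of examples (assumed for illustration).
examples = [(x, 2 * x + 1) for x in range(-3, 4)]

def loss(predict):
    return sum((predict(x) - y) ** 2 for x, y in examples)

# --- Style 1: stochastic gradient descent over real-valued parameters ---
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    x, y = random.choice(examples)      # one sampled example per step
    err = (w * x + b) - y
    w -= lr * 2 * err * x               # gradient of the squared error w.r.t. w
    b -= lr * 2 * err                   # gradient of the squared error w.r.t. b

# --- Style 2: combinatorial search over discrete candidates, adapting to feedback ---
# Candidates are (slope, intercept) pairs on an integer grid; the sampler
# proposes near the current best more often than uniformly, using the loss
# of previous candidates as feedback.
grid = range(-5, 6)
best = (0, 0)
best_loss = loss(lambda x: best[0] * x + best[1])
for _ in range(200):
    if random.random() < 0.7:
        cand = (best[0] + random.choice([-1, 0, 1]),
                best[1] + random.choice([-1, 0, 1]))
    else:
        cand = (random.choice(grid), random.choice(grid))
    cand_loss = loss(lambda x: cand[0] * x + cand[1])
    if cand_loss < best_loss:
        best, best_loss = cand, cand_loss

print(f"SGD:            w={w:.2f}, b={b:.2f}")
print(f"Program search: w={best[0]}, b={best[1]}")
```

The sketch only highlights the structural difference: gradient descent needs many per-example updates in a continuous space, while the adaptive combinatorial search evaluates whole discrete candidates and biases future proposals toward what has worked so far.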
