EP13: Recurrent-Depth Models and Latent Reasoning with Jonas Geiping

The Information Bottleneck

Determinism, Training Variance, and The Silicon Lottery

The hosts note nondeterminism across hardware and across training runs, but suggest that averaging effects may still make scaling-law predictions useful in practice.
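One concrete source of the hardware- and run-level nondeterminism discussed here is that floating-point addition is not associative, so reductions summed in different orders (e.g. across GPU threads or different hardware) can produce different results. A minimal illustration, with values chosen purely for demonstration and not taken from the episode:

```python
# Floating-point addition is not associative: summing the same
# numbers in a different order can change the result in the last
# bits. This is one root cause of run-to-run nondeterminism when
# parallel reduction order varies across hardware or scheduling.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one reduction order
right = a + (b + c)  # another reduction order

print(left == right)  # prints False: the two orders disagree
print(left, right)
```

Accumulated over billions of operations in a training run, such tiny discrepancies diverge, which is why identical code and seeds can still yield different models on different hardware.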

Segment begins at 01:09:00.