Ethan Caballero–Broken Neural Scaling Laws

The Inside View

The Importance of Recursive Self-Improvement

I disagree about the 100 percent. If you have some self-improving AI that just gets better and better at lying, I don't buy that you'll get really, really fast recursive self-improvement before a sinister stumble has happened. Do you usually look at your code every hour of your training run, when it's a machine learning model and half the training runs are failing? It takes weeks or months to train those kinds of things. If we have a sharp left turn, then maybe it could be minutes or hours. The hardware stuff is where it's more unbounded, but continuous.
