
Episode 20: Hattie Zhou, Mila, on supermasks, iterative learning, and fortuitous forgetting

Generally Intelligent


Is Your Model Not Reasoning?

I would measure reasoning by looking at out-of-distribution generalization or systematic generalization. If the model can do well, then that's strong evidence to suggest that it's doing something. Can you get a model to sort of evaluate the information that it contains and check if things are consistent with each other? Right now there's no such capability as far as I can see; you can make the model say anything. The step-by-step thing works sometimes, but it's not very robust, right?

