
Episode 20: Hattie Zhou, Mila, on supermasks, iterative learning, and fortuitous forgetting


Is Your Model Not Reasoning?

I would measure reasoning by looking at out-of-distribution generalization or systematic generalization. If the model can do well, then that's strong evidence to suggest that it's doing something. Can you get a model to sort of evaluate the information that it contains and check if things are consistent with each other? Right now there's no such capability as far as I can see; you can make the model say anything. The step-by-step thing works sometimes, but it's not very robust, right?
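
As a rough illustration of the kind of evaluation described here (not something from the episode): a minimal Python sketch of a length-generalization split, one common way to probe out-of-distribution or systematic generalization. The toy addition task, the helper names, and the `model` callable are all hypothetical placeholders for illustration.

```python
import random

def make_addition_example(n_digits: int) -> tuple[str, str]:
    """Build one n-digit addition problem as (prompt, answer) strings."""
    a = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    b = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    return f"{a} + {b} =", str(a + b)

# In-distribution: 1-3 digit operands; out-of-distribution: 5-digit
# operands the model never saw during training.
train_set = [make_addition_example(random.randint(1, 3)) for _ in range(1000)]
ood_set = [make_addition_example(5) for _ in range(200)]

def ood_accuracy(model, examples) -> float:
    """Fraction of held-out problems the model answers exactly."""
    correct = sum(model(prompt).strip() == answer for prompt, answer in examples)
    return correct / len(examples)

# `model` is any callable from prompt string to answer string.
# A large gap between in-distribution and OOD accuracy suggests the model
# is pattern-matching the training range rather than reasoning.
```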

