Generalization in Machine Learning
In machine learning, we often talk about inductive biases in terms of the syntax of the architecture. One of the interesting things with large language models is that they let you use inductive biases that are much easier to express in natural language, such as Occam's razor. For example, we ran an experiment where we asked the large language model, in the pairwise causal discovery setting, to give us the best argument it could for why A causes B. In that case, we got obviously lower accuracy than GPT-4, but considering the fact that we know that the benchmark was memorized
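The pairwise prompting idea described here could be sketched roughly as follows. This is a hypothetical reconstruction, not the speakers' actual prompt: the helper name `build_pairwise_prompt`, the variable names, and the exact wording are all assumptions, and the model call itself is omitted since any chat-completion API would work.

```python
# Hypothetical sketch of a pairwise causal-discovery prompt in the spirit
# of the experiment described above (not the actual prompt used).

def build_pairwise_prompt(a: str, b: str) -> str:
    """Ask a language model to argue each causal direction between two
    variables, then pick the more plausible one using Occam's razor as
    a natural-language inductive bias."""
    return (
        f"Consider the variables '{a}' and '{b}'.\n"
        f"Give the best argument you can for why '{a}' causes '{b}', "
        f"and the best argument for why '{b}' causes '{a}'.\n"
        "Then, applying Occam's razor, state which causal direction is "
        "more plausible. Answer with exactly 'A causes B' or 'B causes A'."
    )

# The resulting string would be sent to a chat-completion endpoint;
# the model's final line is then parsed as the predicted direction.
print(build_pairwise_prompt("altitude", "average temperature"))
```

The point of phrasing the bias in natural language, rather than baking it into the architecture, is that the same prompt template works with any sufficiently capable model.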