4min chapter


Are LLMs Good at Causal Reasoning? with Robert Osazuwa Ness - #638

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

CHAPTER

Generalization in Machine Learning

In machine learning, we often talk about inductive biases in terms of the syntax of the architecture. One of the interesting things with a large language model is that it lets you use inductive biases that are much easier to express in natural language, like, for example, Occam's razor. We ran an experiment where, for the pairwise discovery task, we asked the large language model to give us the best argument it could for why A causes B. In that case, we obviously got lower accuracy than GPT-4. But considering the fact that we know the benchmark was memorized...
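To make the setup concrete, here is a minimal sketch of that kind of pairwise prompt with the Occam's razor bias stated in natural language. The prompt wording, the example variable pair, and the query_model helper are illustrative assumptions, not the exact protocol used in the experiment discussed in the episode.

```python
# Minimal sketch of a pairwise causal discovery prompt with a natural-language
# inductive bias (Occam's razor). The wording, example variables, and
# query_model() stub are illustrative stand-ins, not the episode's exact setup.

def build_pairwise_prompt(a: str, b: str) -> str:
    """Ask the model to argue for each causal direction between two variables,
    expressing an Occam's-razor-style preference in plain language."""
    return (
        f"Consider the two variables '{a}' and '{b}'.\n"
        f"Give the best argument you can for why {a} causes {b}, "
        f"and the best argument for why {b} causes {a}.\n"
        "Prefer the simpler explanation (Occam's razor), then answer with "
        f"exactly one of: '{a} -> {b}' or '{b} -> {a}'."
    )


def query_model(prompt: str) -> str:
    """Placeholder for a call to whatever LLM API is being evaluated."""
    raise NotImplementedError


if __name__ == "__main__":
    prompt = build_pairwise_prompt("altitude", "average temperature")
    print(prompt)
    # answer = query_model(prompt)  # expected direction: altitude -> average temperature
```

The point of the sketch is that the bias lives in the prompt text rather than in the model architecture, which is what the speaker contrasts with the usual architectural notion of inductive bias.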
