
David Krueger – Coordination, Alignment, Academia

The Inside View


The Importance of Understanding Causality

There's this argument from my paper on risk extrapolation that if you train your foundation model offline, it's probably going to be causally confused. And so that's an argument for saying it actually is going to understand causality. But we have another project that's sort of about that, asking, "Is it then actually going to use it to reason about things?" So the question here is whether a model can recognize which types of information are good sources and how they should be used.

