
Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology

Generally Intelligent


Observability Matters in Terms of Wik Ar.

I was a little bit surprised by how well it worked, honestly, because we were not really telling the agent how to make use of this information. It wasn't clear to me whether the agent would just learn to predict these explanations and then totally ignore them when it was solving the task. But that turned out not to be true. And in fact, that held even in a setting where the visual input-to-response mapping alone is completely ambiguous about which feature is correct.

Yeah, that's well established. Any ideas why?

I think it comes down to shaping the representations of the agent. The environment with these objects: how easy is it to observe all the objects? And so
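The mechanism described here, predicting explanations as an auxiliary target so that the explanation signal shapes the agent's shared representation, can be sketched with a toy two-head model. Everything below (the linear network, the targets, the loss weight) is an illustrative assumption for the sketch, not the actual agent architecture from the work being discussed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the task head must predict a reward-like target, while an
# auxiliary head must predict an "explanation" target (here, a stand-in
# scalar derived from the relevant feature). Both heads read the same
# shared representation, so the explanation loss shapes that representation
# instead of being predicted and then ignored.
n, d, h = 200, 4, 3
X = rng.standard_normal((n, d))                      # observations
w_true = rng.standard_normal(d)
y_task = X @ w_true + 0.1 * rng.standard_normal(n)   # task target
y_expl = X[:, 0]                                     # stand-in explanation target

W = 0.5 * rng.standard_normal((d, h))   # shared representation weights
a = 0.1 * rng.standard_normal(h)        # task head
b = 0.1 * rng.standard_normal(h)        # explanation head
lam, lr = 1.0, 0.01                     # auxiliary-loss weight, step size

def losses():
    H = X @ W
    return (np.mean((H @ a - y_task) ** 2),
            np.mean((H @ b - y_expl) ** 2))

task0, expl0 = losses()
for _ in range(5000):
    H = X @ W                            # shared representation
    r_task = (H @ a - y_task) / n        # scaled residuals
    r_expl = (H @ b - y_expl) / n
    # gradients of the summed loss: task MSE + lam * explanation MSE
    ga = 2 * H.T @ r_task
    gb = 2 * lam * H.T @ r_expl
    gW = 2 * (np.outer(X.T @ r_task, a) + lam * np.outer(X.T @ r_expl, b))
    a -= lr * ga
    b -= lr * gb
    W -= lr * gW

task1, expl1 = losses()
print(f"task loss {task0:.3f} -> {task1:.3f}, "
      f"explanation loss {expl0:.3f} -> {expl1:.3f}")
```

Because the shared weights `W` receive gradient from both heads, the representation is pulled toward features the explanation target depends on; in the ambiguous-feature setting discussed above, that auxiliary pressure is what can break the tie between confounded features.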

