
Natasha Jaques 2

TalkRL: The Reinforcement Learning Podcast

CHAPTER

Is There a Distractor in Deep Learning?

I think the models that we have right now aren't very good at, like, ignoring distractor stuff. We need more symbolic representations, where we can generalize the representation to understand that, like, a truck with hay on it is still fundamentally a truck. I do think there's something promising about models that integrate language, speaking of why I want to build language agents, that actually put an actual language representation into an RL agent.
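As a rough illustration of the idea described here (not anything shown in the episode), a minimal sketch of a policy that takes an explicit language representation alongside the usual observation might look like the following. All names and dimensions are illustrative assumptions; the language embedding is assumed to come from some upstream language model.

```python
# Minimal sketch: an RL policy conditioned on a language embedding.
# All names/dimensions here are hypothetical, for illustration only.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, obs_dim: int, lang_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        # Encode the raw observation (e.g. a flattened state vector).
        self.obs_encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Project a precomputed language embedding (e.g. from a frozen LM).
        self.lang_encoder = nn.Sequential(nn.Linear(lang_dim, hidden), nn.ReLU())
        # Fuse the two streams and output action logits.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_actions)
        )

    def forward(self, obs: torch.Tensor, lang_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.obs_encoder(obs), self.lang_encoder(lang_emb)], dim=-1)
        return self.head(fused)  # action logits

# Usage: a description like "a truck carrying hay" would be embedded by a
# language model upstream and passed in here as lang_emb.
policy = LanguageConditionedPolicy(obs_dim=64, lang_dim=32, num_actions=4)
logits = policy(torch.randn(1, 64), torch.randn(1, 32))
```

The intuition, per the quote, is that the abstract language channel ("truck", not pixels of hay) gives the agent a representation that is less sensitive to visual distractors.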

