
Natasha Jaques 2
TalkRL: The Reinforcement Learning Podcast
Is There a Distractor in Deep Learning?
I think the models that we have right now aren't very good at, like, ignoring distracting stuff. We need more symbolic representations where we can generalize the representation to understand that, like, a truck with hay on it is still fundamentally a truck. I do think there's something promising about models that integrate language. Speaking of which, that's why I want to build language agents that actually put a language representation into an RL agent.