
Aravind Srinivas 2

TalkRL: The Reinforcement Learning Podcast

Is It Really the Self Supervised Mechanism?

Many seem to use the transformer as a function approximator and then use more conventional algorithms. Is that also a reasonable approach, do you think? Or is it really the self-supervised mechanism that's the important bet here? A lot of people are putting out papers on taking out the CNNs, you know, and using a vision transformer instead. That's definitely going to have the short-term progress, because anytime you replace the backbone architecture with a different stack, it's likely to proliferate across anyone using any CNN anywhere.
