
#044 - Data-efficient Image Transformers (Hugo Touvron)

Machine Learning Street Talk (MLST)

00:00

Exploring Positional Encoding and Data-Driven Models in Transformers

This chapter explores the role of positional and geometric embeddings in transformers, particularly in Vision Transformers. It also touches on work in model-based reinforcement learning and on the difficulty of building fully data-driven models while managing the bias-variance trade-off.
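To make the positional-embedding discussion concrete, below is a minimal sketch (not the DeiT or episode code) of how a Vision Transformer typically injects position information: each image patch is linearly projected to a token, and a learned embedding per position is added before the encoder. The class name, sizes, and hyperparameters are illustrative assumptions.

```python
# Sketch of ViT-style patch embedding with learned positional embeddings.
# All names and dimensions here are illustrative, not from the episode.
import torch
import torch.nn as nn

class PatchEmbedWithPositions(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # Each non-overlapping patch is linearly projected to embed_dim.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        # One learned vector per patch position, plus one for the class token.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, embed_dim))

    def forward(self, x):
        b = x.shape[0]
        x = self.proj(x)                  # (B, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)  # (B, N, D): sequence of patch tokens
        cls = self.cls_token.expand(b, -1, -1)
        x = torch.cat([cls, x], dim=1)    # prepend class token
        # Position information enters only here: without pos_embed the
        # self-attention encoder is permutation-invariant over patches.
        return x + self.pos_embed

tokens = PatchEmbedWithPositions()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 197, 768])
```

The point the chapter circles around is visible in the comment above: attention itself carries no notion of patch order, so any geometric prior has to be supplied by these embeddings or learned from data.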
