
Ep#14 VERTIFORMER: A Data-Efficient Multi-Task Transformer on Vertically Challenging Terrain

RoboPapers


Enhancing Transformer Efficiency Through Unified Representation

This chapter explores a novel approach to improving data efficiency in transformers by focusing on temporal relationships across modalities in a unified representation, rather than on intra-modality relationships alone. It also discusses handling missing modalities, which enables multitasking through input masking to predict actions or poses.
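The idea of selecting the task by masking inputs can be illustrated with a short sketch. This is a minimal, hypothetical example assuming a PyTorch-style model; the class name, token layout, and dimensions are illustrative assumptions, not the VERTIFORMER authors' implementation. Each modality (terrain patch, pose, action) is projected into one shared token space, the modality to be predicted is replaced by a learned mask token, and task-specific heads decode the shared representation.

```python
# Hypothetical sketch: multi-task prediction via input masking (not the authors' code).
import torch
import torch.nn as nn

class MaskedMultiTaskTransformer(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2, horizon=6,
                 pose_dim=6, action_dim=2, patch_dim=64):
        super().__init__()
        # Project each modality into one unified token space.
        self.embed_patch = nn.Linear(patch_dim, d_model)
        self.embed_pose = nn.Linear(pose_dim, d_model)
        self.embed_action = nn.Linear(action_dim, d_model)
        # Learnable mask token substituted for whichever modality is being predicted.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, 3 * horizon, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Task-specific heads decode the shared representation.
        self.pose_head = nn.Linear(d_model, pose_dim)
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, patches, poses, actions, mask_modality="action"):
        # patches: (B, T, patch_dim), poses: (B, T, pose_dim), actions: (B, T, action_dim)
        tokens = {
            "patch": self.embed_patch(patches),
            "pose": self.embed_pose(poses),
            "action": self.embed_action(actions),
        }
        # Masking a modality selects the task: mask actions to predict actions,
        # mask poses to predict poses, with the same shared backbone.
        B, T, _ = tokens[mask_modality].shape
        tokens[mask_modality] = self.mask_token.expand(B, T, -1)
        x = torch.cat([tokens["patch"], tokens["pose"], tokens["action"]], dim=1)
        x = self.encoder(x + self.pos[:, : x.size(1)])
        pose_tokens, action_tokens = x[:, T:2 * T], x[:, 2 * T:]
        return self.pose_head(pose_tokens), self.action_head(action_tokens)
```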

