
Ep#14 VERTIFORMER: A Data-Efficient Multi-Task Transformer on Vertically Challenging Terrain
RoboPapers
00:00
Enhancing Transformer Efficiency Through Unified Representation
This chapter explores a novel approach to improving data efficiency in transformers by focusing on temporal relationships rather than intra-modality relationships. It also discusses inference under missing modalities, enabling multitasking through input masking to predict actions or poses.
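The masking idea mentioned in the summary can be made concrete. Below is a minimal sketch, not the authors' released code: it assumes a PyTorch-style encoder and illustrative token shapes, replaces the target modality's tokens with a learned mask token, and lets a shared encoder predict either actions or poses depending on which slot was masked.

```python
# Minimal sketch of multi-tasking via input masking (illustrative names/shapes only).
import torch
import torch.nn as nn

class MaskedMultiTaskTransformer(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.action_head = nn.Linear(d_model, 6)  # hypothetical action dimension
        self.pose_head = nn.Linear(d_model, 7)    # hypothetical pose dimension

    def forward(self, terrain_tok, action_tok, pose_tok, predict="action"):
        # Replace the tokens of the target modality with a learned mask token,
        # then ask the shared encoder to reconstruct them from the remaining inputs.
        if predict == "action":
            action_tok = self.mask_token.expand_as(action_tok)
        else:
            pose_tok = self.mask_token.expand_as(pose_tok)
        seq = torch.cat([terrain_tok, action_tok, pose_tok], dim=1)
        h = self.encoder(seq)
        n_t, n_a = terrain_tok.shape[1], action_tok.shape[1]
        if predict == "action":
            return self.action_head(h[:, n_t:n_t + n_a])
        return self.pose_head(h[:, n_t + n_a:])
```

The same weights serve both tasks; only the choice of which input slot is masked changes, which is one way a single model can multitask without separate task-specific networks.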