Enriching Spatial Information with Self Attention
Spatial information, in the form of bounding boxes and token IDs, is projected into an embedding space and then enriched with self-attention, producing a disentangled semantic space analogous to the one used for textual information. Applying self-attention to the spatial stream helps the model learn layout semantics, for example identifying headers from the size of their bounding boxes. Both the spatial and textual representations are eventually projected into the same space and combined by addition; because high-dimensional hidden representations tend to be nearly orthogonal, the two signals do not interfere with each other. As a result, adding the spatial and textual representations together does not lose information, and the two sources are integrated effectively.
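
The sketch below illustrates this flow in PyTorch under stated assumptions: bounding boxes are projected into a hidden space, enriched with self-attention, and then added to projected token embeddings. All module and parameter names (SpatialEnricher, FusedEmbedding, bbox_dim, hidden_dim, etc.) are illustrative, not the original model's API.

```python
import torch
import torch.nn as nn

class SpatialEnricher(nn.Module):
    """Projects bounding boxes into an embedding space and enriches them
    with self-attention, so layout semantics (e.g. header-sized boxes)
    can be inferred from box geometry alone."""

    def __init__(self, bbox_dim: int = 4, hidden_dim: int = 256,
                 num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        # Project raw box coordinates (x0, y0, x1, y1) into the hidden space.
        self.bbox_proj = nn.Linear(bbox_dim, hidden_dim)
        # Self-attention over all boxes of one page/document.
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, bboxes: torch.Tensor) -> torch.Tensor:
        # bboxes: (batch, num_tokens, 4) normalized box coordinates.
        spatial = self.bbox_proj(bboxes)
        return self.encoder(spatial)  # (batch, num_tokens, hidden_dim)

class FusedEmbedding(nn.Module):
    """Projects spatial and textual representations into the same space
    and fuses them by addition."""

    def __init__(self, vocab_size: int, text_dim: int = 256,
                 hidden_dim: int = 256):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, text_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.spatial_enc = SpatialEnricher(hidden_dim=hidden_dim)

    def forward(self, token_ids: torch.Tensor,
                bboxes: torch.Tensor) -> torch.Tensor:
        text = self.text_proj(self.token_emb(token_ids))
        spatial = self.spatial_enc(bboxes)
        # In a high-dimensional hidden space the two signals are close to
        # orthogonal, so simple addition keeps both recoverable.
        return text + spatial

# Usage: 8 tokens on one page, each with a normalized bounding box.
model = FusedEmbedding(vocab_size=30522)
ids = torch.randint(0, 30522, (1, 8))
boxes = torch.rand(1, 8, 4)
fused = model(ids, boxes)  # (1, 8, 256)
```

Addition (rather than concatenation) keeps the sequence length and hidden size unchanged, and relies on the near-orthogonality of the two projected representations to keep the spatial and textual signals separable downstream.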