
Reasoning Over Complex Documents with DocLLM with Armineh Nourbakhsh - #672

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

NOTE

Enriching Spatial Information with Self Attention

Spatial information, in the form of bounding boxes and token IDs, is projected into an embedding space and enriched with self-attention, producing a disentangled semantic space analogous to the one occupied by textual information. Self-attention over the spatial inputs helps the model capture layout semantics, such as identifying headers from the size of their bounding boxes. The spatial and textual representations are ultimately projected into the same space and added together; because high-dimensional hidden representations tend to be nearly orthogonal, the model can keep the two signals apart without confusion. As a result, adding the spatial and textual representations loses little information, and the two modalities are integrated effectively.
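The sketch below illustrates the general idea described in the note: bounding boxes are projected into an embedding space, enriched with self-attention, and the result is added to the text embeddings. The module names, dimensions, and the simple additive fusion are assumptions for illustration, not DocLLM's published architecture.

```python
# Hypothetical sketch of spatial enrichment with self-attention (not DocLLM's actual code).
import torch
import torch.nn as nn

class SpatialEnricher(nn.Module):
    def __init__(self, hidden_dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Project 4-dimensional bounding boxes (x0, y0, x1, y1) into the hidden space.
        self.box_proj = nn.Linear(4, hidden_dim)
        # Self-attention over spatial embeddings captures layout semantics,
        # e.g. a token whose box is large relative to its neighbors may be a header.
        self.spatial_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # Final projection into the same space as the text embeddings.
        self.out_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, text_emb: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, seq_len, hidden_dim); boxes: (batch, seq_len, 4)
        spatial = self.box_proj(boxes)
        spatial, _ = self.spatial_attn(spatial, spatial, spatial)
        spatial = self.out_proj(spatial)
        # In a high-dimensional hidden space the two representations are close to
        # orthogonal, so simple addition mixes them with little interference.
        return text_emb + spatial

# Usage: enrich 128 token embeddings with their normalized bounding boxes.
model = SpatialEnricher()
text_emb = torch.randn(1, 128, 768)
boxes = torch.rand(1, 128, 4)
fused = model(text_emb, boxes)  # (1, 128, 768)
```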

