The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Transformers On Large-Scale Graphs with Bayan Bruss - #641

Aug 7, 2023
Bayan Bruss, Vice President of Applied ML Research at Capital One, discusses research on applying machine learning in finance. He covers two papers presented at ICML: one on interpretability in image representations and one on a global graph transformer model for large-scale graphs. Listeners will learn about tackling the computational challenges of scaling transformers to large graphs, the trade-off between model sparsity and performance, and the role of embedding dimensions. The conversation closes with directions for making deep learning techniques more efficient.
INSIGHT

Interpretability Challenges of Embeddings

  • Traditional model features were hand-engineered, providing an intuitive understanding of their meaning.
  • Embedding dimensions lack this interpretability, making it difficult to understand their contribution to model predictions.
INSIGHT

Combining Embedding Dimensions for Interpretability

  • Individual embedding dimensions, taken alone, are often hard to interpret.
  • Combining multiple dimensions into subspaces yields more interpretable concepts, reflecting how neural networks distribute information across dimensions.
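The idea of combining raw dimensions into an interpretable subspace can be sketched as follows. This is an illustrative example, not the paper's actual method: it uses SVD/PCA over the embeddings of examples sharing a concept to find a low-rank basis that mixes many raw dimensions (all names and shapes here are assumptions).

```python
import numpy as np

def concept_subspace(concept_embeddings, rank=2):
    """Fit a low-rank subspace to embeddings of examples that share a concept.

    PCA via SVD is an illustrative choice; each principal direction is a
    linear combination of many raw embedding dimensions.
    """
    centered = concept_embeddings - concept_embeddings.mean(axis=0)
    # rows of vt are orthonormal directions in embedding space
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:rank]                      # shape (rank, d): basis of the subspace

def project(x, basis):
    """Coordinates of a single embedding x within the concept subspace."""
    return basis @ x

# toy example: 50 images, 16-dimensional embeddings
rng = np.random.default_rng(1)
emb = rng.normal(size=(50, 16))
basis = concept_subspace(emb, rank=2)
coords = project(emb[0], basis)           # 2 interpretable coordinates
```

Interpreting the 2-D coordinates of a subspace, rather than 16 raw activations, is what makes the combined view more tractable.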
INSIGHT

Contrastive Concept Extraction

  • The technique identifies the image crops that most strongly activate each embedding dimension.
  • Contrasting these with weakly activating crops sharpens the interpretation of what each dimension encodes.
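The selection step described above can be sketched in a few lines. This is a minimal illustration under assumed names and shapes (a matrix of crop embeddings), not the authors' implementation: for one dimension, it returns the most and least activating crops so the two sets can be compared.

```python
import numpy as np

def top_and_bottom_crops(embeddings, dim, k=5):
    """Return indices of the k most and k least activating image crops
    for one embedding dimension (contrastive selection sketch).

    embeddings: (n_crops, d) array of crop embeddings -- an assumption
    about the data layout, not taken from the episode.
    """
    acts = embeddings[:, dim]             # activation of each crop on this dimension
    order = np.argsort(acts)              # ascending by activation
    highly = order[-k:][::-1]             # strongest activations first
    weakly = order[:k]                    # weakest activations first
    return highly, weakly

# toy example: 100 crops with 16-dimensional embeddings
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
top, bottom = top_and_bottom_crops(emb, dim=3, k=5)
```

Inspecting what the `top` crops share visually, and what the `bottom` crops lack, is the contrastive step that refines the dimension's meaning.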