2min chapter

Transformers On Large-Scale Graphs with Bayan Bruss - #641

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

CHAPTER

The Importance of Interpretability in Neural Architectures

The paper tosses around the term feature, but everything's a feature in these systems to some degree or another. Are they individual dimensions, or are they pairs or tuples of dimensions that carry the meaning here?

That's a great question. We started by looking at individual dimensions and asking exactly that: pick a single element in a given embedding, and what can we learn about what that dimension captures among all the dimensions? And we can talk about how we go about doing that.
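The episode doesn't spell out the probing procedure, but one minimal way to ask what a single embedding dimension captures is to rank the embedded items by their value along that dimension and look for shared properties among the top-ranked ones. A sketch in Python; the `embeddings` array, the `labels` list, and `top_items_for_dimension` are illustrative placeholders, not anything from the paper discussed:

```python
import numpy as np

# Hypothetical setup: an (n_items, d) embedding matrix plus a
# human-readable label per embedded item. Random data stands in
# for real learned embeddings.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))
labels = [f"item_{i}" for i in range(1000)]

def top_items_for_dimension(embeddings, labels, dim, k=10):
    """Return the k items with the largest value in one dimension --
    a simple univariate probe of what that dimension responds to."""
    order = np.argsort(embeddings[:, dim])[::-1]  # descending by dim value
    return [(labels[i], float(embeddings[i, dim])) for i in order[:k]]

# Inspect dimension 7 of the embedding space.
for name, value in top_items_for_dimension(embeddings, labels, dim=7):
    print(f"{name}: {value:.3f}")
```

If the top items share an obvious attribute, that is weak evidence the dimension encodes it; meaning carried jointly by pairs or tuples of dimensions, as the question raises, would need a multivariate probe instead.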
