
76 - Increasing In-Class Similarity by Retrofitting Embeddings with Demographics, with Dirk Hovy

NLP Highlights


How to Improve Linear Separability in Graph Convolutional Networks

In this case, we're giving a neural network, or a classifier in general, a leg up by making the classes more linearly separable, thereby infusing some outside information into the representations. Now, you could do the same thing within a network, in the class of graph convolutional networks: you're essentially learning this retrofitting matrix one step at a time, as part of the training process. But that takes longer and is more costly. So what I wonder about is linear separability, because what you're doing in the end is a linear transformation on the same data space. Well, what it does is increase the in-class similarity, and it should increase the separability…
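The core idea in this excerpt, a simple transformation that pulls same-class embeddings closer together so classes become easier to separate, can be sketched in a few lines. This is not the exact method from the paper (which involves learning a retrofitting matrix); it is a hypothetical, simpler centroid-pulling update in NumPy, with illustrative function names, just to show how such an update raises in-class similarity.

```python
import numpy as np

def retrofit_to_class_centroids(X, y, alpha=0.5):
    """Pull each embedding part-way toward its class centroid.

    A hypothetical illustration of the idea in the episode: an update
    that increases in-class similarity. alpha=1.0 leaves X unchanged;
    smaller alpha pulls vectors closer to their class mean.
    """
    X = np.asarray(X, dtype=float)
    X_new = X.copy()
    for label in np.unique(y):
        mask = (y == label)
        centroid = X[mask].mean(axis=0)
        X_new[mask] = alpha * X[mask] + (1.0 - alpha) * centroid
    return X_new

def mean_in_class_cosine(X, y):
    """Average pairwise cosine similarity within each class."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    per_class = []
    for label in np.unique(y):
        G = Xn[y == label]
        S = G @ G.T                       # pairwise cosine similarities
        iu = np.triu_indices(len(G), k=1) # upper triangle, no diagonal
        per_class.append(S[iu].mean())
    return float(np.mean(per_class))
```

With two synthetic classes whose means differ, `mean_in_class_cosine` is measurably higher after `retrofit_to_class_centroids` than before, which is the effect being discussed: tighter classes, hence (usually) better linear separability.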
