
Alex Tamkin on Self-Supervised Learning and Large Language Models

The Gradient: Perspectives on AI


The Benefits of Self-Supervised Learning

We basically came up with the simplest possible constraint on the difficulty of the perturbation, one that has, you know, been studied a lot in adversarial robustness, incidentally. And that's just an L1 norm constraint, which basically says we can output a perturbation, but overall it has to add up to a certain budget delta. That limits how much it can match exactly. So if you want, you can draw with a Sharpie over part of the image, or you can draw with highlighter over all of the image, but you can't do both. It actually led to some pretty good results. They were able to
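The budget described above is the standard L1 ball constraint: the absolute values of the perturbation entries must sum to at most delta. A minimal sketch of how such a constraint is typically enforced is the sorting-based projection onto the L1 ball (Duchi et al., 2008); this is an illustrative implementation, not necessarily what the speaker's system used.

```python
import numpy as np

def project_l1_ball(v, delta):
    """Project a perturbation vector v onto the L1 ball of radius delta.

    If v is already inside the ball it is returned unchanged; otherwise
    the magnitudes are soft-thresholded by theta so the result's L1 norm
    equals delta (the "sharpie vs. highlighter" tradeoff: a few large
    entries or many small ones, but not both).
    """
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= delta:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]            # magnitudes, descending
    css = np.cumsum(u)                      # running sums of magnitudes
    # Largest index k where the threshold would still be positive
    k = np.nonzero(u * np.arange(1, v.size + 1) > css - delta)[0][-1]
    theta = (css[k] - delta) / (k + 1)      # shrinkage amount
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# A perturbation that exceeds the budget gets shrunk onto the ball:
p = project_l1_ball([3.0, -1.0, 0.5], delta=2.0)
# np.abs(p).sum() is now exactly 2.0
```

Concentrating the budget on a few pixels gives large, localized edits (the sharpie), while spreading it thinly over many pixels gives small, global edits (the highlighter); the projection rules out doing both at once.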

