Alex Tamkin on Self-Supervised Learning and Large Language Models

The Gradient: Perspectives on AI

How to Train a Model to Do Two Domains

The contrastive-learning-based one, e-Mix, does better on continuous domains like audio or images, while the one that's based more on an ELECTRA-style objective, ShED, does better on text. So you see some interesting patterns there. And so we hope to introduce new domains to basically test how general the models that people propose are, and see whether they do generalize to these higher-impact, real-world domains.
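For readers who want a concrete picture of the two kinds of objectives being contrasted, here is a minimal PyTorch sketch: a mixup-based contrastive loss in the spirit of e-Mix, and a shuffled-embedding detection loss in the spirit of ShED (an ELECTRA-style objective). The `encoder` and `head` modules and all hyperparameters are hypothetical stand-ins, and both losses are simplified illustrations rather than the DABS reference implementations.

```python
# Simplified sketch of the two pretraining objective families discussed above.
# `encoder` and `head` are assumed user-supplied modules; all names and
# hyperparameters here are illustrative, not the DABS reference code.
import torch
import torch.nn.functional as F

def emix_style_contrastive_loss(encoder, x, alpha=0.4, temperature=0.1):
    """Contrastive (e-Mix-style) objective for continuous inputs:
    mix each example with another example from the batch, then pull the
    mixed embedding toward the original example's embedding and away
    from the rest of the batch (a standard InfoNCE loss)."""
    perm = torch.randperm(x.size(0))
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = lam * x + (1 - lam) * x[perm]          # continuous-domain mixing
    z_anchor = F.normalize(encoder(mixed), dim=-1)
    z_target = F.normalize(encoder(x), dim=-1)
    logits = z_anchor @ z_target.T / temperature   # (B, B) similarity matrix
    labels = torch.arange(x.size(0))               # positives on the diagonal
    return F.cross_entropy(logits, labels)

def shed_style_detection_loss(encoder, head, tokens, shuffle_frac=0.15):
    """ELECTRA-style (ShED-style) objective for discrete inputs:
    corrupt a fraction of token embeddings by swapping in embeddings from
    other positions, then train a binary head to predict which positions
    were disturbed."""
    emb = encoder(tokens)                          # assumed shape (B, T, D)
    B, T, _ = emb.shape
    mask = torch.rand(B, T) < shuffle_frac         # positions to corrupt
    shuffled = emb[:, torch.randperm(T)]           # one permutation, all rows
    corrupted = torch.where(mask.unsqueeze(-1), shuffled, emb)
    logits = head(corrupted).squeeze(-1)           # (B, T) per-position score
    return F.binary_cross_entropy_with_logits(logits, mask.float())
```

The contrastive loss only needs inputs that can be meaningfully interpolated, which is why it suits images and audio, whereas the detection loss operates on discrete token positions, which is why an ELECTRA-style objective tends to suit text.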
