
22 - Shard Theory with Quintin Pope

AXRP - the AI X-risk Research Podcast


The Power of Self-Supervised Learning

RL fine-tuning is a more efficient way of doing that conditional sampling, where the condition gets incorporated into the generative prior by default. In theory, it's equivalent to sampling from the purely self-supervised generative model conditional on receiving a high reward from the reward circuitry. I think it's basically the same sort of thing, just implemented in a way that makes it better suited to certain specific problems or use cases.
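
One way to make the claimed equivalence precise, as a sketch rather than anything stated in the episode, is the standard KL-regularized RL fine-tuning objective: its optimum is exactly the pretrained prior reweighted by exponentiated reward, i.e. a soft form of conditioning on high reward. The symbols below (the prior \(\pi_0\), reward \(r\), and KL coefficient \(\beta\)) are illustrative assumptions, not notation from the conversation.

```latex
% Sketch of the equivalence, assuming KL-regularized RL fine-tuning.
% \pi_0  : the self-supervised (pretrained) generative model
% r(x)   : reward assigned to sample x by the reward circuitry
% \beta  : strength of the KL penalty keeping \pi close to \pi_0
\[
  \pi^{*} \;=\; \arg\max_{\pi}\;
    \mathbb{E}_{x \sim \pi}\!\big[\, r(x) \,\big]
    \;-\; \beta\, D_{\mathrm{KL}}\!\left(\pi \,\Vert\, \pi_0\right)
\]
% The maximizer is the prior reweighted by exponentiated reward,
% i.e. sampling from \pi_0 softly conditioned on high reward:
\[
  \pi^{*}(x) \;=\; \frac{1}{Z}\, \pi_0(x)\, \exp\!\big(r(x)/\beta\big),
  \qquad
  Z \;=\; \sum_{x} \pi_0(x)\, \exp\!\big(r(x)/\beta\big).
\]
```

On this reading, \(\beta\) controls how sharply the fine-tuned model is pushed toward high-reward samples versus staying close to the self-supervised prior.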
