
22 - Shard Theory with Quintin Pope
AXRP - the AI X-risk Research Podcast
00:00
The Complexity of Self-Supervised Learning
GPT-4 contains a bunch of extremely complicated learned circuitry. It's complicated in the sense that the representations track very intricate and not easily predicted aspects of the physical world around you. They implement very complicated target mappings between what I previously observed and what I predict to observe in the near future. And so there's some learning process that makes those representations accurate, or maybe they're probability distributions rather than simple point representations.
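To make the setup being described concrete, here is a minimal sketch (not from the episode) of a self-supervised objective of that shape: a toy model maps previously observed tokens to a probability distribution over the next observation, and gradient descent makes that distribution more accurate. The model, sizes, and data are all illustrative stand-ins, not anything specific to GPT-4.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64  # toy sizes, purely illustrative

# A tiny causal predictor: embed past tokens, contextualize them, and output
# logits (an unnormalized probability distribution) over the next token.
class TinyPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # (batch, seq, vocab): distribution over next token

model = TinyPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake "observations": random token sequences standing in for real data.
tokens = torch.randint(0, vocab_size, (8, 32))

# Self-supervised target mapping: predict token t+1 from tokens up to t.
logits = model(tokens[:, :-1])
loss = loss_fn(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()
print(f"next-observation prediction loss: {loss.item():.3f}")
```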