WebSim, WorldSim, and The Summer of Simulative AI — with Joscha Bach of Liquid AI, Karan Malhotra of Nous Research, Rob Haisfield of WebSim.ai

Latent Space: The AI Engineer Podcast

Interpretability and Complexity of Operator Language in Models

Interpreting a model means examining the expressiveness of its representations and how much compute, how many units, and how much memory it needs to represent a problem. Models implement a complex operator language that may not be human-readable, so reverse engineering it requires automated processes. A key question is whether this operator language has a finite set of categories or whether it keeps evolving to encompass new proofs and concepts. The trajectory of physics suggests a finite language, while the human mind raises the question of whether new understanding comes from recombining existing elements or from developing entirely new representations.

