WebSim, WorldSim, and The Summer of Simulative AI — with Joscha Bach of Liquid AI, Karan Malhotra of Nous Research, Rob Haisfield of WebSim.ai

Latent Space: The AI Engineer Podcast

NOTE

Interpretability and Complexity of Operator Language in Models

The interpretability of models involves examining the expressiveness of their representations and understanding how much compute, how many units, and how much memory are needed to represent a given problem. Models implement a complex operator language that may not be human-readable, so reverse engineering it requires automated processes. A key question is whether this operator language will settle into a finite set of categories or whether it will keep evolving to encompass new proofs and concepts. The trajectory of physics suggests a finite language, while the human mind raises the question of whether new understanding comes from recombining existing elements or from developing entirely new representations.
