2min snip

WebSim, WorldSim, and The Summer of Simulative AI — with Joscha Bach of Liquid AI, Karan Malhotra of Nous Research, Rob Haisfield of WebSim.ai

Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0

NOTE

Interpretability and Complexity of Operator Language in Models

Interpreting a model involves examining the expressiveness of its representations and understanding how much compute, how many units, and how much memory are needed to represent a problem. Models implement a complex operator language that may not be human-readable, so reverse engineering it likely requires automated processes. A key question is whether this operator language settles into a finite set of categories or keeps evolving to encompass new proofs and concepts. The trajectory of physics suggests a finite language, while the human mind raises the question of whether new understanding comes from recombining existing elements or from developing entirely new representations.
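
To make the idea of automated reverse engineering slightly more concrete, here is a minimal, purely illustrative sketch (not something described in the episode) of one way to probe whether per-token representations across very different tasks keep reusing a small, finite set of clusters. The model choice (gpt2), the prompts, and the cluster count are all assumptions for illustration, not a method endorsed by the speakers.

```python
# Hypothetical sketch: check whether a model's internal "operator language"
# appears to fall into a small, finite set of categories by clustering
# per-token hidden states gathered from very different tasks.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

# Deliberately heterogeneous prompts (proof, translation, calculus).
prompts = [
    "Prove that the square root of 2 is irrational.",
    "Translate 'hello world' into French.",
    "Compute the derivative of x**3 + 2*x.",
]

activations = []
with torch.no_grad():
    for text in prompts:
        inputs = tokenizer(text, return_tensors="pt")
        hidden = model(**inputs).hidden_states[-1]   # (1, seq_len, dim)
        activations.append(hidden.squeeze(0))        # per-token vectors

X = torch.cat(activations).numpy()                   # (total_tokens, dim)

# If the operator language were finite, tokens from unrelated tasks would
# keep landing in the same handful of clusters as the prompt set grows.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(8)])
```

A real study would compare how the number of distinct clusters grows as more tasks are added; a plateau would hint at a finite operator vocabulary, while unbounded growth would suggest the language keeps evolving.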
