Ken Stanley, Senior Vice President at Lila Sciences and former OpenAI researcher, dives into the complexities of AI in this discussion. He explores the Fractured Entangled Representation (FER) hypothesis, challenging traditional assumptions about what neural networks learn internally. The Picbreeder experiment showcases user-driven, open-ended creativity, while the contrast between modular and entangled representations raises questions about how AI systems evolve and are trained. Stanley also highlights the potential of Unified Factored Representation (UFR) and the significance of scaling considerations for future AI development.
Episode length: 56:53
INSIGHT
Representation vs Performance Disconnect
Neural networks can give correct outputs while hiding poor internal representations.
Stanley proposes that fractured entangled representation (FER) may be present in today's LLMs and deserves investigation.
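A minimal sketch of this disconnect (an illustration for this page, not code from the episode or the FER paper): the two functions below return identical outputs for every input, so no black-box test can tell them apart, yet one reuses a shared helper that captures the pattern's symmetry while the other duplicates the logic and never represents that regularity anywhere.

```python
# Toy contrast: identical behavior, different internal organization.
# Both functions draw the same mirror-symmetric pair of bumps, but only the
# first one actually represents the symmetry (one helper, evaluated at |x|);
# the second computes the two halves with independent copies of the logic.
import numpy as np

def petal(r):
    """Shared sub-pattern: a smooth radial bump."""
    return np.exp(-4.0 * r**2)

def factored(x, y):
    # Symmetry is explicit: the same helper is reused via |x|,
    # so the left/right mirror structure lives in one place.
    return petal(np.hypot(np.abs(x) - 0.5, y))

def fractured(x, y):
    # Same input/output behavior, but the two halves are computed
    # independently; nothing ties them together internally.
    left  = np.exp(-4.0 * ((-x - 0.5)**2 + y**2))
    right = np.exp(-4.0 * (( x - 0.5)**2 + y**2))
    return np.where(x < 0, left, right)

xs = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(xs, xs)
print(np.allclose(factored(X, Y), fractured(X, Y)))  # True: outputs are identical
```

Any future change to the pattern has to be made once in the first version and twice in the second, which previews the continual-learning cost discussed in the next snip.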
INSIGHT
Messy Representations Limit Creativity
Poor internal structure limits imagination, continual learning, and efficient generalization.
Ken Stanley shows that messy representations make future learning more expensive and brittle.
INSIGHT
Fracture Versus Entanglement Defined
Fracture means a failure to reuse shared information, while entanglement means unrelated components become mixed together.
Both break modularity and hinder plug-and-play compositional reasoning in networks.
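The episode's weight sweep experiments probe this directly: vary a single weight in a Picbreeder (CPPN) network versus an SGD-trained one and watch what part of the output it controls. Below is a minimal stand-in for that probe, not the actual Picbreeder or paper code; the network shape, activations, weight choice, and threshold are assumptions made for the sketch.

```python
# Toy weight-sweep probe: render a coordinate-to-pixel network (CPPN-style map),
# vary one weight over a range, and measure how much of the image that single
# weight touches. The network here is random, purely to show the mechanics.
import numpy as np

def render(weights, size=64):
    """Render an image by evaluating a tiny 2-16-1 MLP at every (x, y) coordinate."""
    w1, b1, w2, b2 = weights
    xs = np.linspace(-1.0, 1.0, size)
    X, Y = np.meshgrid(xs, xs)
    coords = np.stack([X.ravel(), Y.ravel()], axis=1)   # (size*size, 2)
    hidden = np.tanh(coords @ w1 + b1)                  # (size*size, 16)
    out = np.tanh(hidden @ w2 + b2)                     # (size*size, 1)
    return out.reshape(size, size)

def weight_sweep(weights, layer_idx, i, j, values):
    """Return one rendered image per value of the single weight weights[layer_idx][i, j]."""
    images = []
    for v in values:
        perturbed = [w.copy() for w in weights]
        perturbed[layer_idx][i, j] = v
        images.append(render(perturbed))
    return images

rng = np.random.default_rng(0)
weights = [rng.normal(size=(2, 16)), rng.normal(size=16),
           rng.normal(size=(16, 1)), rng.normal(size=1)]

# Sweep one hidden-to-output weight and see what fraction of the image it affects.
sweep = weight_sweep(weights, layer_idx=2, i=0, j=0, values=np.linspace(-2, 2, 5))
changed = np.abs(sweep[-1] - sweep[0]) > 0.05
print(f"fraction of pixels affected by this single weight: {changed.mean():.2f}")
```

In a unified factored representation such a sweep tends to move one coherent factor of the image (Stanley's Picbreeder example is a single weight opening and closing a skull's mouth), whereas under FER the same sweep produces scattered, incoherent changes.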
Jim talks with Ken Stanley about the Fractured Entangled Representation (FER) hypothesis in deep learning neural networks. They discuss open-endedness in AI systems & evolution, the Picbreeder experiment & its significance, the objective paradox of finding things by not looking for them, comparisons between Picbreeder & SGD-trained networks, visual differences in internal representations, weight sweep experiments, modular vs. tangled decomposition, implications for creativity, continual learning & generalization abilities, Unified Factored Representation (UFR) as an alternative to FER, the relationship to grokking in neural networks, scaling considerations & evidence in larger models, potential methods to achieve UFR, connections to biological evolution and DNA representation, and much more.
Why Greatness Cannot Be Planned: The Myth of the Objective, by Kenneth Stanley and Joel Lehman
"Questioning Representational Optimism in Deep Learning: The Fractured Entanglement Representation Hypothesis" by Akarsh Kumar, Jeff Clune, Joel Lehman, and Kenneth Stanley
JRS EP137 - Ken Stanley on Neuroevolution
JRS EP130 - Ken Stanley on Why Greatness Cannot Be Planned
Kenneth O. Stanley is the Senior Vice President of Open-Endedness at Lila Sciences. He previously led a research team at OpenAI, also focused on the challenge of open-endedness. Before that, he was Charles Millican Professor of Computer Science at the University of Central Florida and a co-founder of Geometric Intelligence Inc., which was acquired by Uber to create Uber AI Labs, where he was head of Core AI research. He is an inventor of popular algorithms including NEAT, novelty search, and CPPNs. He has won more than 10 best paper awards, and his original 2002 paper on NEAT received the 2017 ISAL Award for Outstanding Paper of the Decade 2002-2012 from the International Society for Artificial Life. He is also a co-author of the popular science book Why Greatness Cannot Be Planned: The Myth of the Objective (published originally in the US by Springer) and has spoken widely on its subject.