Machine Learning Street Talk (MLST)

#59 - Jeff Hawkins (Thousand Brains Theory)

Sep 3, 2021
In this engaging discussion, neuroscientist and entrepreneur Jeff Hawkins, known for his Thousand Brains Theory, joins Connor Leahy to unravel how our brains construct reality through a multitude of models. They dive into the role of the neocortex in intelligence and sensory perception, explore Sparse Distributed Representations and their applications in AI, and highlight the key differences between traditional neural networks and Hawkins' innovative ideas. The conversation also touches on the ethical integration of AI with human values and the philosophical implications of emerging technologies.
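The Sparse Distributed Representations (SDRs) mentioned above are large binary vectors with only a small fraction of bits active, where similarity is measured by overlap between active bits. A minimal sketch, using hypothetical sizes rather than Numenta's actual parameters:

```python
import numpy as np

def make_sdr(size=2048, active=40, rng=None):
    """Create a random SDR: a binary vector with few active bits.
    Sizes here are illustrative, not Numenta's exact values."""
    rng = rng or np.random.default_rng()
    sdr = np.zeros(size, dtype=np.int8)
    sdr[rng.choice(size, active, replace=False)] = 1
    return sdr

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return int(np.dot(a, b))

rng = np.random.default_rng(0)
a = make_sdr(rng=rng)
b = make_sdr(rng=rng)
print(overlap(a, a))  # 40: an SDR overlaps fully with itself
print(overlap(a, b))  # near zero: random SDRs rarely collide
```

Because active bits are so sparse, two unrelated SDRs almost never share many bits, which is what makes SDR matching robust to noise.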
INSIGHT

Thousand Brains Theory

  • The neocortex learns a model of the world not in one place but across thousands of independent cortical columns.
  • Each of the roughly 150,000 columns is a complete modeling system; together they form a collective intelligence.
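A minimal sketch of the "voting" idea behind the Thousand Brains Theory, assuming each column independently narrows its sensory input down to a set of candidate objects and the columns then settle on the object most of them agree on. The objects and guesses here are invented for illustration:

```python
from collections import Counter

def column_vote(column_guesses):
    """Each column proposes candidate objects for what it senses;
    the consensus is the object the most columns agree on."""
    votes = Counter()
    for guesses in column_guesses:
        votes.update(guesses)
    winner, _ = votes.most_common(1)[0]
    return winner

# Hypothetical columns touching different parts of the same mug.
guesses = [
    {"mug", "bowl"},     # column sensing the rim
    {"mug", "canteen"},  # column sensing the handle
    {"mug"},             # column sensing the base
]
print(column_vote(guesses))  # mug
```

No single column sees the whole object, yet the population converges on a single answer, which is the core of the theory's distributed-model claim.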
INSIGHT

Neocortex Structure

  • The neocortex, the seat of intelligence, is surprisingly uniform in structure across regions and species.
  • Its apparent complexity lies in its connections, which emerge from a common learning algorithm, much as complex patterns emerge in trained neural networks.
INSIGHT

Predictive Learning

  • The neocortex learns by predicting the sensory consequences of its own movements, for example anticipating what a finger will feel as it moves over a known object.
  • This predictive ability is a fundamental primitive of cognition and relies on reference frames, like those built by grid cells, even for abstract concepts.
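The predict-then-sense loop described in this snip can be illustrated with a toy model, assuming an object is represented as a map from locations in its reference frame to expected tactile features. The cup and its features are invented for illustration:

```python
# Toy object model: locations in the object's reference frame
# mapped to the tactile features expected there.
coffee_cup = {
    "rim": "smooth edge",
    "handle": "curved ridge",
    "base": "flat disc",
}

def predict_then_sense(model, location, sensed):
    """Predict the feature before the finger arrives; a mismatch
    signals that the current object hypothesis is wrong."""
    predicted = model.get(location)
    return predicted == sensed

print(predict_then_sense(coffee_cup, "handle", "curved ridge"))  # True
print(predict_then_sense(coffee_cup, "rim", "flat disc"))        # False
```

A failed prediction is informative: it tells the system to update its model or revise its guess about which object it is touching.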