"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

E39: Seeing is Believing with MIT’s Ziming Liu

Jun 27, 2023
Ziming Liu, a Physics PhD student at MIT, explores the intersection of AI and physics. He discusses his research on Brain-Inspired Modular Training (BIMT), a technique for enhancing neural network interpretability. The conversation dives into making networks more modular, the benefits of neuron swapping for optimization, and innovative techniques in language processing. Liu emphasizes the need for international collaboration in AI to address safety risks, showing how insights from biological systems can inspire advances in artificial intelligence.
AI Snips
INSIGHT

Modularity Motivation

  • Biological neural networks are modular due to evolutionary pressure for energy efficiency.
  • Artificial networks lack this inherent incentive, requiring explicit training techniques for modularity.
INSIGHT

Topological vs. Geometric

  • Standard neural networks are topological, only considering connections, not distances.
  • BIMT introduces geometric space, allowing distance-based penalties, mimicking biological constraints.
ADVICE

Locality Penalty Implementation

  • Use L1 regularization with a distance-dependent penalty to encourage locality.
  • Tune the locality strength hyperparameter to balance sparsity and prediction accuracy.
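The distance-weighted L1 idea above can be sketched in a few lines. The exact formulation used in BIMT is not given in these notes; the function name, the 2-D neuron layout, and the `strength` parameter below are illustrative assumptions.

```python
import numpy as np

def locality_penalty(weights, in_pos, out_pos, strength=0.1):
    """Distance-weighted L1 penalty on a layer's weights.

    weights:  (n_out, n_in) weight matrix of one layer
    in_pos:   (n_in, 2) 2-D coordinates assigned to input neurons
    out_pos:  (n_out, 2) 2-D coordinates assigned to output neurons
    strength: locality hyperparameter trading sparsity vs. accuracy
    """
    # Pairwise Euclidean distances between each output and input neuron
    dists = np.linalg.norm(out_pos[:, None, :] - in_pos[None, :, :], axis=-1)
    # Long-range connections are penalized more than local ones
    return strength * np.sum(np.abs(weights) * dists)

# Toy usage: two input neurons on a line, one output neuron above them
w = np.array([[1.0, -1.0]])
in_pos = np.array([[0.0, 0.0], [1.0, 0.0]])
out_pos = np.array([[0.0, 1.0]])
penalty = locality_penalty(w, in_pos, out_pos, strength=0.1)
```

Added to the task loss during training, a term like this makes distant connections expensive, so the optimizer prefers local wiring and the network tends to organize into spatial modules.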