

929: Dragon Hatchling: The Missing Link Between Transformers and the Brain, with Adrian Kosowski
Oct 7, 2025
Join researcher Adrian Kosowski, who leads work on biologically inspired AI architectures at Pathway, as he explores groundbreaking advancements in AI. He dives into how the Dragon Hatchling (BDH) model merges attention mechanisms with Hebbian learning to mimic brain functions. Discover the concept of unlimited context windows and the innovative sparse positive activations that set BDH apart from traditional transformers. Adrian also discusses the future of multilingual models and the potential for lifelong learning, making AI more human-like in reasoning.
AI Snips
BDH As A Post‑Transformer State‑Space
- BDH is a post-transformer, state-space architecture that blends attention with biologically plausible neuron models.
- It frames attention as local neuron interactions rather than a global lookup, which allows context handling to be implemented in different ways (a minimal sketch follows below).
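
A minimal sketch of the "attention as local neuron interaction" idea, assuming a Hebbian outer-product update on a synapse-like state matrix; the function name `hebbian_attention_step` and the constants are illustrative, not taken from the paper or the episode.

```python
import numpy as np

def hebbian_attention_step(S, x, eta=0.1):
    """One illustrative step of attention as a local synaptic interaction.

    S   : (d, d) synapse-like state ("fast weights") carried across time steps
    x   : (d,) current neuron activations
    eta : Hebbian learning rate (illustrative value)

    The context readout is a local matrix-vector product on the stored state
    instead of a softmax lookup over a growing KV cache; the state is then
    strengthened by a Hebbian outer product of co-active neurons.
    """
    y = S @ x                       # retrieve context associated with x
    S = S + eta * np.outer(x, x)    # "neurons that fire together wire together"
    return S, y

# Toy usage: stream a short sequence of activation vectors through the state.
d = 8
S = np.zeros((d, d))
for x in np.random.randn(16, d):
    S, y = hebbian_attention_step(S, x)
```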
Unlimited Context Without Peanut‑Size Memory
- BDH supports effectively limitless context by scaling synapse-like state rather than relying on compressed memory tricks.
- The architecture provides abundant storage and flexible operations to avoid context bottlenecks while remaining efficient (a rough comparison is sketched below).
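
A back-of-envelope comparison of why synapse-like state sidesteps the usual context bottleneck: a transformer KV cache grows with the number of tokens seen, while a fixed synaptic state matrix does not. The hidden size and token count below are hypothetical, chosen only to make the scaling visible.

```python
# Illustrative memory comparison (assumed numbers, not from the episode).
d, T = 256, 100_000            # hidden size, tokens seen so far (hypothetical)

kv_cache_floats = 2 * T * d    # keys + values: one entry per past token
synaptic_state_floats = d * d  # fixed-size state, independent of T

print(f"KV cache:       {kv_cache_floats:,} floats (grows with sequence length)")
print(f"Synaptic state: {synaptic_state_floats:,} floats (constant in sequence length)")
```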
Sparse Positive Activation Mirrors The Brain
- BDH uses sparse positive activations where ~95% of neurons are silent, mirroring biological brains' energy efficiency.
- This sparse-positive regime enables reasoning-scale behavior at lower compute than dense transformer activations (a toy illustration follows below).
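
A toy illustration of a sparse-positive activation regime, assuming a ReLU followed by a top-k mask that lets only about 5% of neurons fire; this is a generic sparsification sketch, not BDH's actual activation function.

```python
import numpy as np

def sparse_positive(h, frac_active=0.05):
    """Keep only the strongest ~5% of positive pre-activations (illustrative).

    ReLU zeroes negative values; the top-k mask then silences the remaining
    weak activations, so roughly 95% of neurons stay at zero for any given
    input, echoing the sparse firing described above.
    """
    h = np.maximum(h, 0.0)                  # positive-only activations
    k = max(1, int(frac_active * h.size))   # number of neurons allowed to fire
    threshold = np.partition(h, -k)[-k]     # k-th largest value
    return np.where(h >= threshold, h, 0.0)

h = np.random.randn(1000)
a = sparse_positive(h)
print(f"active fraction: {np.count_nonzero(a) / a.size:.2%}")
```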