The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Spiking Neural Networks: A Primer with Terrence Sejnowski - #317

Nov 14, 2019
Terrence Sejnowski, a pioneer in computational neuroscience and head of the Computational Neurobiology Lab at the Salk Institute, joins the show to unravel the complexities of spiking neural networks. He discusses how these networks mimic biological brain function and how that could boost the energy efficiency of machine learning. The conversation also covers the challenges of training these networks, the synergy between neuroscience and AI, and their transformative potential in robotics. Sejnowski shares his outlook on neuromorphic hardware and its implications for the future of the technology.
INSIGHT

Brain vs. Deep Learning Efficiency

  • Biological brains operate remarkably efficiently on just 20 watts.
  • Deep learning systems, by contrast, consume orders of magnitude more energy, which is a growing concern for the field.
INSIGHT

Spiking as a Signaling Method

  • Spikes, or action potentials, are the brain's primary signaling method, lasting about a millisecond.
  • These signals travel far more slowly than signals in computer chips, yet they encode information effectively (a minimal spiking-neuron sketch follows below).
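
To make the spiking idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the simplest standard spiking model: input current charges a membrane voltage, and when the voltage crosses a threshold the neuron emits an all-or-none spike and resets. This is a generic textbook sketch, not a model from the episode; all parameter values are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameters (tau_m, v_thresh, etc.) are illustrative defaults,
# not values discussed in the episode.
import numpy as np

def simulate_lif(input_current, dt=0.1, tau_m=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Integrate input current; emit an all-or-none spike at threshold.

    dt and tau_m are in milliseconds, voltages in millivolts.
    Returns the membrane voltage trace and the spike times.
    """
    v = v_rest
    trace, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by input.
        v += dt / tau_m * (v_rest - v + i_in)
        if v >= v_thresh:                  # threshold crossing
            spike_times.append(step * dt)  # spike timing is analog...
            v = v_reset                    # ...but the event is all-or-none
        trace.append(v)
    return np.array(trace), spike_times

# Constant drive for 100 ms produces a regular spike train.
current = np.full(1000, 20.0)  # 1000 steps * 0.1 ms = 100 ms
trace, spikes = simulate_lif(current)
print(f"{len(spikes)} spikes, first few at times (ms): {spikes[:5]}")
```

Driving the neuron with a stronger current makes it spike sooner and more often, one simple way the analog timing of spikes can carry information even though each spike itself is binary.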
INSIGHT

Complexity of Spikes and Synapses

  • Neural spikes, while digital in their all-or-none nature, exhibit analog timing and varied sizes and shapes.
  • Synapses add further complexity: they are dynamic systems in their own right, modulating the strength of the signals they transmit (see the sketch after this list).
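
One standard way to illustrate that synaptic dynamism is short-term depression: each spike temporarily depletes the synapse's available resources, so the strength a spike delivers depends on recent spike history. The sketch below is a simplified, Tsodyks-Markram-style model with illustrative parameters; it is an assumption for exposition, not a model specified in the episode.

```python
# Sketch of a dynamic synapse: short-term depression.
# Each spike consumes a fraction of available resources, which then
# recover, so effective weight depends on recent spike history.
# Parameters (U, tau_rec) are illustrative, not from the episode.

def depressing_synapse(spike_times, t_end=200.0, dt=0.1,
                       U=0.4, tau_rec=100.0, w_max=1.0):
    """Return the effective weight delivered by each spike.

    U: fraction of resources used per spike; tau_rec: recovery
    time constant in ms; spike_times are in ms.
    """
    x = 1.0          # fraction of synaptic resources available
    delivered = []
    spike_steps = {round(t / dt) for t in spike_times}
    for step in range(int(t_end / dt)):
        x += dt * (1.0 - x) / tau_rec        # resources recover toward 1
        if step in spike_steps:
            delivered.append(w_max * U * x)  # weight scales with resources
            x -= U * x                       # spike depletes resources
    return delivered

# A rapid burst: later spikes in the burst transmit more weakly.
weights = depressing_synapse([10, 20, 30, 40, 50])
print([round(w, 3) for w in weights])
```

Running it on a rapid burst prints successively smaller weights, a concrete example of a synapse behaving as a dynamic system rather than a fixed connection strength.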