Theoretical Neuroscience Podcast

On origins of computational neuroscience and AI as scientific fields - with Terrence Sejnowski (vintage) - #9

Mar 16, 2024
This episode traces the origins of computational neuroscience and AI as scientific fields, covering the shift from rule-based to learning-based approaches, the unreasonable effectiveness of mathematics in deep learning, and the role of reinforcement learning in neural structures. It also discusses the synergy of AI and neuroscience in medical diagnostics, advances in self-driving technology, and the transformative impact of AI on society.
ANECDOTE

Underestimating Vision

  • MIT AI lab received a grant from DARPA to build a ping pong-playing robot.
  • They forgot to budget for vision, assigning it to a student as a summer project, a misjudgment that shows how badly the field underestimated the difficulty of perception.
INSIGHT

Brain's Hardware-Software Unity

  • Biological neural networks store memory and process information within their synapses (connections between neurons).
  • Unlike digital computers, the brain's hardware is its software; synapses are both memory and processors.
INSIGHT

Perceptron's Limitations

  • The perceptron, an early neuron model, had limitations in its ability to discriminate complex patterns.
  • Minsky and Papert proved mathematically that single-layer perceptrons cannot compute functions that are not linearly separable, such as XOR, which ruled them out for real-world tasks like distinguishing cats from dogs.
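The limitation above can be demonstrated directly. The sketch below (an illustration, not code from the episode) trains a single-layer perceptron with the classic perceptron learning rule: it converges perfectly on the linearly separable AND function but can never classify all four XOR cases, since no single line separates XOR's classes.

```python
# Minimal sketch of the perceptron learning rule (step activation).
# Assumption: standard update w += lr * (target - prediction) * x.

def train_perceptron(samples, epochs=50, lr=0.1):
    """Train weights and bias on (input pair, target) samples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    """Fraction of samples the trained threshold unit gets right."""
    correct = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
        for (x1, x2), t in samples
    )
    return correct / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print(accuracy(w, b, AND))  # 1.0 -- AND is linearly separable

w, b = train_perceptron(XOR)
print(accuracy(w, b, XOR))  # below 1.0 -- XOR is not linearly separable
```

Escaping this limit required adding hidden layers, which is precisely the move that later networks (and backpropagation) made.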