Unlocking the Brain's Mysteries: Chris Eliasmith on Spiking Neural Networks and the Future of Human-Machine Interaction

Machine Learning Street Talk (MLST)

Neural Networks: Limitations and Capabilities

This chapter explores the computational limits of neural networks, contrasting autoregressive language models with recurrent models. It examines Turing completeness, the implications of finite resources for compositionality, and the role of recursion in theories of computation. The discussion also highlights how spiking neural networks differ from traditional architectures, emphasizing their efficiency on temporal tasks and the potential for hybrid models.
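To make the contrast with traditional architectures concrete: where a standard artificial neuron emits a continuous activation each step, a spiking neuron integrates input over time and emits discrete spikes only when a threshold is crossed, which is why these models suit temporal tasks. The sketch below is a minimal leaky integrate-and-fire (LIF) neuron for illustration only; the parameter values and function name are chosen for this example and are not drawn from the episode.

```python
def lif_simulate(input_current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    input_current: sequence of input values, one per time step of size dt.
    Returns the list of time-step indices at which the neuron spiked.
    """
    v = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        # Leaky integration: dv/dt = (-v + I) / tau (Euler step)
        v += dt * (-v + current) / tau
        if v >= v_th:        # threshold crossing emits a discrete spike
            spikes.append(t)
            v = v_reset      # membrane potential resets after spiking
    return spikes

# A constant suprathreshold input produces regular, evenly spaced spikes,
# while information is carried only at spike times — the sparsity that
# makes spiking networks attractive for efficient temporal processing.
regular_spikes = lif_simulate([1.5] * 100)
no_spikes = lif_simulate([0.0] * 100)
```

Note that between spikes the neuron transmits nothing at all, in contrast to a conventional recurrent unit that outputs a dense activation at every step.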
