Machine Learning Street Talk (MLST)

#035 Christmas Community Edition!

Dec 27, 2020
Alex Mattick, a community member from Yannic Kilcher's Discord and a type theory expert, dives into the intersection of type theory and AI. He and the hosts dissect recent research, including the debate over whether neural networks are really kernel machines and critiques of neuro-symbolic models. The conversation highlights the importance of inductive priors and explores lambda calculus and its role in reasoning about program correctness. Drawing on community discussions, this chat is a treasure trove for AI enthusiasts!
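The kernel-machine debate mentioned above presumably refers to Pedro Domingos' late-2020 paper "Every Model Learned by Gradient Descent Is Approximately a Kernel Machine". As a rough sketch of that claim (stated under the paper's assumptions, not a detail taken from the episode itself), a model f_w trained by gradient descent ends up predicting approximately in the form

    y \approx g\Big(\sum_i a_i\, K(x, x_i) + b\Big), \qquad K(x, x') = \int_{c(t)} \nabla_w f_w(x) \cdot \nabla_w f_w(x')\, dt

where the sum runs over the training points x_i and K is the "path kernel": the similarity of the gradients at x and x', integrated along the trajectory c(t) the weights follow during training. Whether this makes such networks mere interpolators over their training data is exactly the kind of question the snips below pick at.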
INSIGHT

Neural Networks for Reasoning

  • DeepMind's new neural network surpasses neuro-symbolic models on reasoning tasks.
  • This challenges the belief that hybrid models are necessary for such tasks.
INSIGHT

Reasoning or Interpolation?

  • Yannic Kilcher suggests DeepMind's model might be using interpolation tricks, not true reasoning.
  • He proposes further experiments to investigate whether the model computes intermediate quantities.
INSIGHT

Understanding and Multimodality

  • Understanding might be an illusion and may be unnecessary for AGI.
  • Multimodality could explain the emergence of understanding, as argued by Yannic Kilcher and Connor Leahy.