

#66 ALEXANDER MATTICK - [Unplugged / Community Edition]
Feb 28, 2022
Join Alexander Mattick, a prominent voice in Yannic's Discord community and an AI aficionado, as he dives deep into the inner workings of neural networks. He shares insights on spline theory and on how abstraction does (or doesn't) arise in machine learning. The discussion also covers the balance between exploration and control in knowledge acquisition, along with the philosophical implications of causality and of discrete versus continuous modeling. Alex champions the value of a broad knowledge base, illustrating how diverse insights sharpen problem-solving.
AI Snips
Neural Network Extrapolation
- Neural networks extrapolate in some way, and their behavior outside the training data needs an explanation.
- Spline theory offers one: the network cuts input space into regions bounded by hyperplanes, suggesting a piecewise-linear rather than smooth model.
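The piecewise view above can be made concrete with a minimal sketch (my own illustration, not from the episode): a tiny random ReLU MLP's "activation pattern" (which hidden units are on) identifies which linear region an input falls into, so counting distinct patterns along a line counts the regions the network has cut it into.

```python
import numpy as np

# A tiny ReLU MLP: 1-D input, 8 hidden units, 1-D output.
# All weights are random -- this is an illustration of the geometry,
# not a trained model.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 1))
b1 = rng.normal(size=8)
W2 = rng.normal(size=(1, 8))
b2 = rng.normal(size=1)

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU kills units below their hyperplane
    return W2 @ h + b2

def pattern(x):
    # The on/off sign vector of the hidden layer; inputs sharing a
    # pattern lie in the same region, where the network is exactly affine.
    return tuple((W1 @ x + b1) > 0)

xs = np.linspace(-5, 5, 2001).reshape(-1, 1)
patterns = {pattern(x) for x in xs}
print(f"distinct linear regions along the line: {len(patterns)}")
```

Each hidden unit contributes one breakpoint (where its pre-activation crosses zero), so 8 units can cut a 1-D input line into at most 9 affine pieces; in higher dimensions the hyperplane arrangement yields many more regions, which is the "cutting space" picture in the snip.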
Abstraction in MLPs
- MLPs might not learn increasingly abstract features, as commonly thought, but rather slice the ambient input space.
- This challenges the idea that neural networks learn complex features like faces, as depicted by tools such as the OpenAI Microscope.
Neural Networks vs. Human Brains
- While neural networks and human brains both process information, their responses to adversarial examples differ significantly.
- A small change like a rainbow pixel can drastically alter a network's classification, whereas human perception is unaffected.
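The mechanism behind that sensitivity can be sketched in a few lines (a hypothetical toy, not the episode's example): even for a plain linear classifier, a perturbation that is tiny per pixel but aligned against the weight vector accumulates across many pixels and flips the predicted class.

```python
import numpy as np

# Toy linear classifier on a flattened 28x28 "image".
# Weights and input are random -- this only illustrates the mechanism.
rng = np.random.default_rng(1)
w = rng.normal(size=784)       # classifier weights
x = rng.normal(size=784) * 0.1 # input near the decision boundary

score = w @ x  # sign of the score is the predicted class

# FGSM-style step: shift every pixel by at most eps, each in the
# direction that pushes the score toward the opposite class.
eps = 0.05
x_adv = x - eps * np.sign(score) * np.sign(w)

print("class before:", np.sign(score), " class after:", np.sign(w @ x_adv))
print("max per-pixel change:", np.max(np.abs(x_adv - x)))
```

Because the score change is roughly eps times the sum of |w| over 784 pixels, a per-pixel budget far below human noticeability easily overwhelms the original margin; deep networks are not linear, but locally they behave similarly, which is why imperceptible patches can flip their output while human perception ignores them.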