
Machine Learning Street Talk (MLST)

Latest episodes

Mar 2, 2022 • 1h 42min

#67 Prof. KARL FRISTON 2.0

We engage in a bit of epistemic foraging with Prof. Karl Friston! In this show we discuss the free energy principle in detail, as well as emergence, cognition, consciousness and Karl's burden of knowledge!

YT: https://youtu.be/xKQ-F2-o8uM
Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/HNnAwSduud

[00:00:00] Introduction to FEP/Friston
[00:06:53] Cheers to Epistemic Foraging!
[00:09:17] The Burden of Knowledge Across Disciplines
[00:12:55] On-show introduction to Friston
[00:14:23] Simple does NOT mean Easy
[00:21:25] Searching for a Mathematics of Cognition
[00:26:44] The Low Road and The High Road to the Principle
[00:28:27] What's changed for the FEP in the last year
[00:39:36] FEP as stochastic systems with a pullback attractor
[00:44:03] An attracting set at multiple time scales and time infinity
[00:53:56] What about fuzzy Markov boundaries?
[00:59:17] Is reality densely or sparsely coupled?
[01:07:00] Is a Strong and Weak Emergence distinction useful?
[01:13:25] A Philosopher, a Zombie, and a Sentient Consciousness walk into a bar...
[01:24:28] Can we recreate consciousness in silico? Will it have qualia?
[01:28:29] Subjectivity and building hypotheses
[01:34:17] Subject specific realizations to minimize free energy
[01:37:21] Free will in a deterministic Universe

The free energy principle made simpler but not too simple: https://arxiv.org/abs/2201.06387
Feb 28, 2022 • 51min

#66 ALEXANDER MATTICK - [Unplugged / Community Edition]

We have a chat with Alexander Mattick, aka ZickZack, from Yannic's Discord community. Alex is one of the leading voices in that community and has impressive technical depth. Don't forget MLST has now started its own Discord server too, come and join us! We are going to run regular events; our first big event is on Wednesday 9th, 1700-1900 UK time.

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/HNnAwSduud
YT version: https://youtu.be/rGOOLC8cIO4

[00:00:00] Introduction to Alex
[00:02:16] Spline theory of NNs
[00:05:19] Do NNs abstract?
[00:08:27] Tim's exposition of spline theory of NNs
[00:11:11] Semantics in NNs
[00:13:37] Continuous vs discrete
[00:19:00] Open-ended Search
[00:22:54] Inductive logic programming
[00:25:00] Control to gain knowledge and knowledge to gain control
[00:30:22] Being a generalist with a breadth of knowledge and knowledge transfer
[00:36:29] Causality
[00:43:14] Discrete program synthesis + theorem solvers
Feb 26, 2022 • 1h 28min

#65 Prof. PEDRO DOMINGOS [Unplugged]

Note: there are no politics discussed in this show, and please do not interpret it as any kind of political statement from us. We have decided not to discuss politics on MLST anymore due to its divisive nature.

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/HNnAwSduud

[00:00:00] Intro
[00:01:36] What we all need to understand about machine learning
[00:06:05] The Master Algorithm Target Audience
[00:09:50] Deeply Connected Algorithms seen from Divergent Frames of Reference
[00:12:49] There is a Master Algorithm; and it's mine!
[00:14:59] The Tribe of Evolution
[00:17:17] Biological Inspirations and Predictive Coding
[00:22:09] Shoe-Horning Gradient Descent
[00:27:12] Sparsity at Training Time vs Prediction Time
[00:30:00] World Models and Predictive Coding
[00:33:24] The Cartoons of System 1 and System 2
[00:40:37] AlphaGo Searching vs Learning
[00:45:56] Discriminative Models evolve into Generative Models
[00:50:36] Generative Models, Predictive Coding, GFlowNets
[00:55:50] Sympathy for a Thousand Brains
[00:59:05] A Spectrum of Tribes
[01:04:29] Causal Structure and Modelling
[01:09:39] Entropy and The Duality of Past vs Future, Knowledge vs Control
[01:16:14] A Discrete Universe?
[01:19:49] And yet continuous models work so well
[01:23:31] Finding a Discretised Theory of Everything
Feb 24, 2022 • 52min

#64 Prof. Gary Marcus 3.0

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/HNnAwSduud
YT: https://www.youtube.com/watch?v=ZDY2nhkPZxw

We have a chat with Prof. Gary Marcus about everything that is currently top of mind for him, including consciousness.

[00:00:00] Gary intro
[00:01:25] Slightly conscious
[00:24:59] Abstract, compositional models
[00:32:46] Spline theory of NNs
[00:36:17] Self driving cars / algebraic reasoning
[00:39:43] Extrapolation
[00:44:15] Scaling laws
[00:49:50] Maximum likelihood estimation

References:
Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets https://arxiv.org/abs/2201.02177
Deep Double Descent: Where Bigger Models and More Data Hurt https://arxiv.org/pdf/1912.02292.pdf
Bayesian Deep Learning and a Probabilistic Perspective of Generalization https://arxiv.org/pdf/2002.08791.pdf
Feb 22, 2022 • 1h 33min

#063 - Prof. YOSHUA BENGIO - GFlowNets, Consciousness & Causality

We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST
Patreon: https://www.patreon.com/mlst

For Yoshua Bengio, GFlowNets are the most exciting thing on the horizon of Machine Learning today. He believes they can solve previously intractable problems and hold the key to unlocking machine abstract reasoning itself. This discussion explores the promise of GFlowNets and the personal journey Prof. Bengio traveled to reach them.

Panel: Dr. Tim Scarfe, Dr. Keith Duggar, Dr. Yannic Kilcher
Our special thanks to: Alexander Mattick (Zickzack)

References:
Yoshua Bengio @ MILA: https://mila.quebec/en/person/bengio-yoshua/
GFlowNet Foundations: https://arxiv.org/pdf/2111.09266.pdf
Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation: https://arxiv.org/pdf/2106.04399.pdf
Interpolation Consistency Training for Semi-Supervised Learning: https://arxiv.org/pdf/1903.03825.pdf
Towards Causal Representation Learning: https://arxiv.org/pdf/2102.11107.pdf
Causal inference using invariant prediction: identification and confidence intervals: https://arxiv.org/pdf/1501.01332.pdf
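As a rough, simplified sketch of what a GFlowNet is trained to satisfy (following the flow-matching view in the papers referenced above; the notation here is simplified and ours, not lifted from them): each edge of a directed state graph carries a flow F, and training pushes the flows towards the consistency condition

\sum_{s:\,(s \to s') \in \mathcal{E}} F(s \to s') \;=\; R(s') \;+\; \sum_{s'':\,(s' \to s'') \in \mathcal{E}} F(s' \to s'')

where R(s') is the reward of state s' (zero for non-terminal states). When these constraints hold, sampling forward in proportion to outgoing flows generates terminal objects x with probability proportional to R(x), which is what makes GFlowNets attractive for producing diverse high-reward candidates rather than a single maximiser.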
Feb 3, 2022 • 1h 30min

#062 - Dr. Guy Emerson - Linguistics, Distributional Semantics

Dr. Guy Emerson is a computational linguist who obtained his PhD from Cambridge University, where he is now a research fellow and lecturer. On the panel we also have myself, Dr. Tim Scarfe, as well as Dr. Keith Duggar and the veritable Dr. Walid Saba. We dive into distributional semantics, probability theory, fuzzy logic, grounding, vagueness and the grammar/cognition connection.

The aim of distributional semantics is to design computational techniques that can automatically learn the meanings of words from a body of text. The twin challenges are: how do we represent meaning, and how do we learn these representations? We want to learn the meanings of words from a corpus by exploiting the fact that the context of a word tells us something about its meaning. This is known as the distributional hypothesis. In his PhD thesis, Dr. Emerson presented a distributional model that can learn truth-conditional semantics grounded in objects in the real world. Hope you enjoy the show!

https://www.cai.cam.ac.uk/people/dr-guy-emerson
https://www.repository.cam.ac.uk/handle/1810/284882?show=full
https://www.semanticscholar.org/paper/Computational-linguistics-and-grammar-engineering-Bender-Emerson/bbd6f3b92a0f1ea8212f383cc4719bfe86b3588c
Patreon: https://www.patreon.com/mlst
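As a toy illustration of the distributional hypothesis (a sketch of the general idea only, not Dr. Emerson's grounded, truth-conditional model; the tiny corpus and window size are made up for the example): represent each word by counts of the words occurring near it, so that words used in similar contexts end up with similar vectors.

# Minimal distributional-hypothesis sketch: each word is represented by the
# counts of words in a small context window around it, and words are compared
# by cosine similarity. Corpus and window size are illustrative only.
from collections import Counter, defaultdict
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2

vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1  # count context co-occurrences

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda w: math.sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v))

print(cosine(vectors["cat"], vectors["dog"]))  # similar contexts, higher similarity
print(cosine(vectors["cat"], vectors["on"]))   # different roles, lower similarity

Real distributional models replace raw counts with learned embeddings, but the underlying signal is the same: context is evidence about meaning.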
Jan 4, 2022 • 3h 20min

061: Interpolation, Extrapolation and Linearisation (Prof. Yann LeCun, Dr. Randall Balestriero)

We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST
Patreon: https://www.patreon.com/mlst

Yann LeCun thinks that it's specious to say neural network models are interpolating, because in high dimensions everything is extrapolation. Recently Dr. Randall Balestriero, Dr. Jerome Pesenti and Prof. Yann LeCun released their paper arguing that learning in high dimensions always amounts to extrapolation. This discussion has completely changed how we think about neural networks and their behaviour.

[00:00:00] Pre-intro
[00:11:58] Intro Part 1: On linearisation in NNs
[00:28:17] Intro Part 2: On interpolation in NNs
[00:47:45] Intro Part 3: On the curse
[00:48:19] LeCun
[01:40:51] Randall B

YouTube version: https://youtu.be/86ib0sfdFtw
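A minimal sketch of the definition behind that claim (the convex-hull notion of interpolation; the dimensions, sample counts and use of SciPy's linear-programming solver below are our illustrative choices, not the authors' code): a test point counts as interpolation only if it lies inside the convex hull of the training points, and as the dimension grows that essentially never happens for a fixed-size training set.

# Check convex-hull membership by solving a small linear feasibility problem:
# is x a convex combination (non-negative weights summing to 1) of rows of X?
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, X):
    n = X.shape[0]
    A_eq = np.vstack([X.T, np.ones(n)])        # X^T w = x and sum(w) = 1
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.success

rng = np.random.default_rng(0)
for d in (2, 1000):
    X = rng.normal(size=(100, d))   # 100 "training" points in d dimensions
    x = rng.normal(size=d)          # a new test point
    print(d, in_convex_hull(x, X))  # typically True in 2-D, False in 1000-D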
Sep 19, 2021 • 3h 33min

#60 Geometric Deep Learning Blueprint (Special Edition)

Patreon: https://www.patreon.com/mlst

The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact tractable given enough computational horsepower. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, and second, learning by local gradient-descent type methods, typically implemented as backpropagation. While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not uniform and have strong repeating patterns as a result of the low-dimensionality and structure of the physical world. Geometric Deep Learning unifies a broad class of ML problems from the perspectives of symmetry and invariance. These principles not only underlie the breakthrough performance of convolutional neural networks and the recent success of graph neural networks but also provide a principled way to construct new types of problem-specific inductive biases.

This week we spoke with Prof. Michael Bronstein (Head of Graph ML at Twitter), Dr. Petar Veličković (Senior Research Scientist at DeepMind), Dr. Taco Cohen and Prof. Joan Bruna about their new proto-book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges.

See the table of contents for this (long) show at https://youtu.be/bIZB1hIJ4u8
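A small sketch of the symmetry idea (our own toy example, not taken from the proto-book): build a function from a shared per-node transform followed by a permutation-invariant aggregation such as a sum, and the output is unchanged under any reordering of the nodes, which is exactly the kind of inductive bias graph neural networks bake in.

# Permutation invariance from a shared transform plus a sum aggregation.
# Node count, feature sizes and random values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))   # 5 nodes with 8 features each
W = rng.normal(size=(8, 4))   # shared per-node linear transform

def readout(X, W):
    return np.maximum(X @ W, 0).sum(axis=0)  # transform each node, then sum

perm = rng.permutation(5)
print(np.allclose(readout(X, W), readout(X[perm], W)))  # True: node order is irrelevant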
Sep 3, 2021 • 2h 35min

#59 - Jeff Hawkins (Thousand Brains Theory)

Patreon: https://www.patreon.com/mlst

The ultimate goal of neuroscience is to learn how the human brain gives rise to human intelligence and what it means to be intelligent. Understanding how the brain works is considered one of humanity's greatest challenges.

Jeff Hawkins thinks that the reality we perceive is a kind of simulation, a hallucination, a confabulation. He thinks that our brains build a model of reality based on thousands of information streams originating from the sensors in our body. Critically, Hawkins doesn't think there is just one model, but rather thousands.

Jeff has just released his new book, A Thousand Brains: A New Theory of Intelligence. It's an inspiring and well-written book, and I hope after watching this show you will be inspired to read it too.

https://numenta.com/a-thousand-brains-by-jeff-hawkins/
https://numenta.com/blog/2019/01/16/the-thousand-brains-theory-of-intelligence/

Panel:
Dr. Keith Duggar https://twitter.com/DoctorDuggar
Connor Leahy https://twitter.com/npcollapse
Aug 11, 2021 • 2h 28min

#58 Dr. Ben Goertzel - Artificial General Intelligence

The field of Artificial Intelligence was founded in the mid 1950s with the aim of constructing "thinking machines" - that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look but act and think with intelligence equal to, and ultimately greater than, that of human beings. But in the intervening years, the field has drifted far from its ambitious old-fashioned roots.

Dr. Ben Goertzel is an artificial intelligence researcher and the CEO and founder of SingularityNET, a project combining artificial intelligence and blockchain to democratize access to artificial intelligence. Ben seeks to fulfil the original ambitions of the field. Ben graduated with a PhD in Mathematics from Temple University in 1990. Ben's approach to AGI over many decades has been inspired by many disciplines, but in particular by human cognitive psychology and computer science. To date, Ben's work has been mostly theoretically driven.

Ben thinks that most of the deep learning approaches to AGI today try to model the brain. They may have a loose analogy to human neuroscience, but they have not tried to derive the details of an AGI architecture from an overall conception of what a mind is. Ben thinks that what matters for creating human-level (or greater) intelligence is having the right information processing architecture, not the underlying mechanics via which the architecture is implemented. Ben thinks that there is a certain set of key cognitive processes and interactions that AGI systems must implement explicitly, such as working and long-term memory, deliberative and reactive processing, and perception. Biological systems tend to be messy, complex and integrative; searching for a single "algorithm of general intelligence" is an inappropriate attempt to project the aesthetics of physics or theoretical computer science into a qualitatively different domain.

TOC is in the YT show description: https://www.youtube.com/watch?v=sw8IE3MX1SY

Panel: Dr. Tim Scarfe, Dr. Yannic Kilcher, Dr. Keith Duggar

References:
Artificial General Intelligence: Concept, State of the Art, and Future Prospects https://sciendo.com/abstract/journals...
The General Theory of General Intelligence: A Pragmatic Patternist Perspective https://arxiv.org/abs/2103.15100
