
Machine Learning Street Talk (MLST)
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a Ph.D. from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
Latest episodes

Jan 20, 2021 • 2h 46min
#038 - Professor Kenneth Stanley - Why Greatness Cannot Be Planned
Professor Kenneth Stanley, a research science manager at OpenAI and a key figure in neuroevolution, discusses his groundbreaking ideas on innovation and creativity. He argues that rigid objectives limit genuine progress and creativity, promoting a shift towards open-ended exploration instead. Stanley critiques conventional benchmarks and highlights how true breakthroughs often emerge from unplanned avenues. He explains the importance of fostering interestingness and autonomy in research, encouraging listeners to embrace uncertainty for greater achievements.

Jan 11, 2021 • 1h 35min
#037 - Tour De Bayesian with Connor Tann
Connor Tann is a physicist and senior data scientist working for a multinational energy company, where he co-founded and leads a data science team. He holds a first-class degree in experimental and theoretical physics from Cambridge University and a master's in particle astrophysics, and he specializes in the application of machine learning models and Bayesian methods. Today we explore the history, practical utility, and unique capabilities of Bayesian methods. We also discuss the computational difficulties inherent in Bayesian methods, along with modern approaches to approximate solutions such as Markov chain Monte Carlo. Finally, we discuss how Bayesian optimization in the context of AutoML may one day put data scientists like Connor out of work. (A toy sketch of the conjugate updating discussed in this episode follows the chapter list below.)
Panel: Dr. Keith Duggar, Alex Stenlake, Dr. Tim Scarfe
00:00:00 Duggar's philosophical ramblings on Bayesianism
00:05:10 Introduction
00:07:30 small datasets and prior scientific knowledge
00:10:37 Bayesian methods are probability theory
00:14:00 Bayesian methods demand hard computations
00:15:46 uncertainty can matter more than estimators
00:19:29 updating or combining knowledge is a key feature
00:25:39 Frequency or Reasonable Expectation as the Primary Concept
00:30:02 Gambling and coin flips
00:37:32 Rev. Thomas Bayes's pool table
00:40:37 ignorance priors are beautiful yet hard
00:43:49 connections between common distributions
00:49:13 A curious Universe, Benford's Law
00:55:17 choosing priors, a tale of two factories
01:02:19 integration, the computational Achilles heel
01:05:25 Bayesian social context in the ML community
01:10:24 frequentist methods as a first approximation
01:13:13 driven to Bayesian methods by small sample size
01:18:46 Bayesian optimization with automl, a job killer?
01:25:28 different approaches to hyper-parameter optimization
01:30:18 advice for aspiring Bayesians
01:33:59 who would Connor interview next?
Connor Tann: https://www.linkedin.com/in/connor-tann-a92906a1/
https://twitter.com/connossor
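
For listeners who want a concrete feel for the conjugate updating behind the coin-flip and pool-table chapters, here is a minimal Python sketch (our own illustration, not code from the episode) of a Beta-Binomial update: a Beta prior over a coin's bias is combined with observed flips to give a closed-form posterior.

```python
from scipy import stats

# Beta(alpha, beta) prior over the coin's probability of heads;
# alpha = beta = 1 is the uniform "ignorance" prior mentioned in the chapter list.
alpha_prior, beta_prior = 1.0, 1.0

# Observed data: 7 heads out of 10 flips.
heads, tails = 7, 3

# Conjugacy: the posterior is again a Beta distribution, obtained by
# adding the observed counts to the prior parameters.
posterior = stats.beta(alpha_prior + heads, beta_prior + tails)

print(f"Posterior mean: {posterior.mean():.3f}")            # ~0.667
print(f"95% credible interval: {posterior.interval(0.95)}")
```

Conjugate pairs like this sidestep the integration problem raised at 01:02:19 because the normalising constant is known in closed form; for models without that luxury, approximate methods such as Markov chain Monte Carlo take over.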

Jan 3, 2021 • 1h 43min
#036 - Max Welling: Quantum, Manifolds & Symmetries in ML
This conversation features Max Welling, a professor at the University of Amsterdam and VP of Technology at Qualcomm, known for his innovative work in geometric deep learning. He discusses the crucial role of domain knowledge in machine learning and how inductive biases impact model predictions. The dialogue also explores the fascinating intersection of quantum computing and AI, particularly the potential of quantum neural networks. Furthermore, Welling highlights the significance of symmetries in neural networks and their applications in real-world problems, including protein folding.
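
As a rough illustration of the symmetry idea (our sketch, not code from the episode): ordinary convolution is already equivariant to translation, which is the property that geometric deep learning generalises to rotations and other groups. The check below, using scipy's convolve2d with circular padding, shows that shifting an image and then convolving gives the same result as convolving and then shifting.

```python
import numpy as np
from scipy.signal import convolve2d

# Equivariance check: convolving a shifted image equals shifting the convolved image,
# provided the convolution uses circular ("wrap") boundary conditions.
rng = np.random.default_rng(0)
image = rng.normal(size=(16, 16))
kernel = rng.normal(size=(3, 3))

def shift(x):
    return np.roll(x, shift=2, axis=1)   # circular translation by 2 pixels

out1 = convolve2d(shift(image), kernel, mode="same", boundary="wrap")
out2 = shift(convolve2d(image, kernel, mode="same", boundary="wrap"))
print(np.allclose(out1, out2))   # True: convolution commutes with translation
```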

Dec 27, 2020 • 2h 56min
#035 Christmas Community Edition!
Alex Mattick, a community member from Yannic Kilcher's Discord and a type theory expert, dives into the fascinating intersections of type theory and AI. They dissect cutting-edge research, including debates on neural networks as kernel machines and critiques of neural-symbolic models. The conversation highlights the importance of inductive priors and explores lambda calculus, shedding light on its vital role in programming correctness. With insights from community discussions, this chat is a treasure trove for AI enthusiasts!

Dec 20, 2020 • 2h 39min
#034 Eray Özkural - AGI, Simulations & Safety
Dr. Eray Özkural, an AGI researcher and founder of Celestial Intellect Cybernetics, critiques mainstream AI safety narratives, arguing they're rooted in fearmongering. He shares his skepticism about the intelligence explosion hypothesis and discusses the complexities of defining intelligence. The conversation also dives into the simulation argument, challenging its validity and exploring its implications. The panel covers the urgent need for nuanced approaches to AGI and the ethics surrounding AI development, urging a departure from doomsday thinking.

Dec 13, 2020 • 1h 51min
#033 Prof. Karl Friston - The Free Energy Principle
Dive into the mind-bending world of the Free Energy Principle with a leading neuroscientist. Explore how the brain interprets ambiguous sensory data as an inference problem, moving beyond traditional optimization methods. Discover the balance between prediction accuracy and adaptability, the role of belief states, and the significance of Markov blankets in decision-making. Hear humorous takes on cultural differences in communication styles, all while contemplating the future implications of these complex concepts in cognitive science and machine learning.

Dec 6, 2020 • 1h 30min
#032 - Simon Kornblith / GoogleAI - SimCLR and Paper Haul!
Simon Kornblith, a research scientist at Google Brain with a background in neuroscience, dives deep into the world of neural networks. He discusses the unique relationship between neural networks and biological brains, shedding light on how architecture affects learning. Kornblith explains the significance of loss functions in image classification and reveals insights from the SimCLR framework. He also touches on data augmentation strategies, self-supervised learning, and the programming advantages of Julia for machine learning tasks.
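
For readers unfamiliar with SimCLR, the following is a rough numpy sketch (our rendering of the standard formulation, not code from the episode) of the NT-Xent contrastive loss at its core: two augmented views of each image are embedded, and each embedding must identify its partner among all other embeddings in the batch.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss for a batch of paired embeddings z1[i] <-> z2[i]."""
    z = np.concatenate([z1, z2], axis=0)                 # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # unit norm => dot product = cosine similarity
    sim = z @ z.T / temperature                          # (2N, 2N) similarity matrix
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                       # a view is never its own positive
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])  # i's positive is i + n
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
print(f"NT-Xent loss on random embeddings: {nt_xent_loss(z1, z2):.3f}")
```

Here z1[i] and z2[i] stand for the embeddings of two augmented views of the same image; in SimCLR they come from an encoder plus projection head, which this sketch omits.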

Nov 28, 2020 • 2h 44min
#031 WE GOT ACCESS TO GPT-3! (With Gary Marcus, Walid Saba and Connor Leahy)
This conversation features Gary Marcus, a psychology and neuroscience professor known for critiquing deep learning, alongside Walid Saba, an expert in natural language understanding, and Connor Leahy, a proponent of large language models. They dive into GPT-3's strengths and weaknesses, the philosophical implications of AI creativity, and the importance of integrating reasoning with pattern recognition. The dialogue also critiques AI's limitations in understanding language and explores future possibilities for achieving true artificial general intelligence.

Nov 20, 2020 • 1h 48min
#030 Multi-Armed Bandits and Pure-Exploration (Wouter M. Koolen)
Wouter M. Koolen, a Senior Researcher at Centrum Wiskunde & Informatica, delves into the fascinating world of multi-armed bandits and pure exploration. He discusses the balance between exploration and exploitation, illustrated through examples like clinical trials and game strategies. Wouter explains how to determine when to shift from learning to exploiting knowledge gained. The conversation also highlights the ethical considerations in decision-making and innovative algorithms that drive advancements in this area, making complex theories accessible for practical application.
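
As a concrete companion to the exploration/exploitation discussion (a toy sketch under our own assumptions, not the pure-exploration algorithms Koolen works on), here is the classic UCB1 index policy for a Bernoulli bandit: each round it pulls the arm with the highest optimistic estimate of its mean reward.

```python
import math
import random

def ucb1(true_means, horizon=10_000):
    """Play a Bernoulli bandit with the UCB1 index policy."""
    k = len(true_means)
    counts = [0] * k          # number of pulls per arm
    sums = [0.0] * k          # total reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # pull every arm once to initialise the estimates
        else:
            # Optimism in the face of uncertainty: empirical mean + exploration bonus.
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward, counts

reward, pulls = ucb1([0.2, 0.5, 0.55])
print(f"Average reward: {reward / 10_000:.3f}, pulls per arm: {pulls}")
```

UCB1 targets cumulative reward (regret minimisation); the pure-exploration setting discussed in the episode instead asks when we can stop and confidently name the best arm, which leads to different sampling and stopping rules built on the same confidence-bound intuition.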

Nov 8, 2020 • 1h 51min
#029 GPT-3, Prompt Engineering, Trading, AI Alignment, Intelligence
Connor Leahy, known for his work with EleutherAI, joins a fascinating discussion on AI, trading, and philosophy. The group delves into the potential of GPT-3 and the emerging skill of prompt engineering, arguing it could redefine software development. They explore the unpredictability of stock markets and critique the deceptive nature of quant finance. Additionally, philosophical dilemmas surrounding AI alignment and the ethical implications of technology in business are scrutinized, while pondering the complex relationship between randomness and human intelligence.