Machine Learning Street Talk (MLST)

Feb 25, 2021 • 52min

#044 - Data-efficient Image Transformers (Hugo Touvron)

Hugo Touvron, a PhD student at Facebook AI Research and the primary author of the Data-efficient Image Transformers paper, shares insights on revolutionizing vision models. He explains how novel training strategies and a unique distillation token dramatically improve sample efficiency. The conversation dives into the balance of data augmentation, the implications of transformers compared to CNNs, and challenges in achieving data-driven models. Hugo also reflects on his experiences in a corporate PhD program and the future prospects of transformers in computer vision.
Feb 19, 2021 • 1h 35min

#043 Prof J. Mark Bishop - Artificial Intelligence Is Stupid and Causal Reasoning won't fix it.

J. Mark Bishop, Professor Emeritus at Goldsmiths, University of London, criticizes the idea of AI achieving consciousness, suggesting that panpsychism posits a mind in all things. He argues that computers cannot comprehend or feel, referencing the limits of computation and the Chinese Room argument. The discussion touches on how language shapes perception, and highlights the philosophical challenges of mimicking human understanding. Bishop provocatively insists that machine intelligence will never reach the complexities of conscious experience.
Feb 11, 2021 • 1h 34min

#042 - Pedro Domingos - Ethics and Cancel Culture

Pedro Domingos, a renowned professor and author of "The Master Algorithm," dives deep into the contentious issues surrounding AI ethics and cancel culture. He critiques how cancel culture stifles necessary dialogue in machine learning, likening it to a modern form of religion. Domingos argues against ideologically driven gatekeeping in AI, cautioning that biases are often embedded in algorithmic design. He also questions the sincerity of current ethical practices in AI, advocating for a more nuanced understanding of fairness and open discourse.
Feb 3, 2021 • 1h 27min

#041 - Biologically Plausible Neural Networks - Dr. Simon Stringer

Dr. Simon Stringer, a Senior Research Fellow at Oxford University, discusses the intricate relationship between brain function and artificial intelligence. He dives into hierarchical feature binding, revealing how biologically inspired neural networks can enhance visual perception. The conversation covers the challenges of replicating human cognitive behaviors using AI and the importance of self-organization and temporal dynamics in learning. Stringer also sheds light on how insights from neuroscience can refine AI models to handle complex tasks more effectively.
Jan 31, 2021 • 1h 36min

#040 - Adversarial Examples (Dr. Nicholas Carlini, Dr. Wieland Brendel, Florian Tramèr)

Join Dr. Nicholas Carlini, a Google Brain research scientist specializing in machine learning security, Dr. Wieland Brendel from the University of Tübingen, and PhD student Florian Tramèr from Stanford as they dive into the world of adversarial examples. They explore how tiny data changes can drastically impact model predictions and discuss the inherent challenges of ensuring robust defenses in neural networks. Insights on the balance between model accuracy and security, alongside the biases present in CNNs, offer a captivating look into this crucial field of AI research.
Jan 23, 2021 • 1h 58min

#039 - Lena Voita - NLP

Lena Voita, a Ph.D. student and former research scientist at Yandex, shares her insights on NLP and machine translation. She discusses her research on the source and target contributions in neural translation models and explores information-theoretic probing using minimum description length. Lena also delves into the evolution of representations in Transformers and the complexities of language models, including challenges like hallucinations and exposure bias. Additionally, she highlights her comprehensive NLP course designed to foster deeper understanding in the field.
Jan 20, 2021 • 2h 46min

#038 - Professor Kenneth Stanley - Why Greatness Cannot Be Planned

Professor Kenneth Stanley, a research science manager at OpenAI and a key figure in neuroevolution, discusses his groundbreaking ideas on innovation and creativity. He argues that rigid objectives limit genuine progress and creativity, promoting a shift towards open-ended exploration instead. Stanley critiques conventional benchmarks and highlights how true breakthroughs often emerge from unplanned avenues. He explains the importance of fostering interestingness and autonomy in research, encouraging listeners to embrace uncertainty for greater achievements.
Jan 11, 2021 • 1h 35min

#037 - Tour De Bayesian with Connor Tann

Connor Tann is a physicist and senior data scientist at a multinational energy company, where he co-founded and leads a data science team. He holds a first-class degree in experimental and theoretical physics from Cambridge University and a master's in particle astrophysics. He specializes in the application of machine learning models and Bayesian methods. Today we explore the history, practical utility, and unique capabilities of Bayesian methods. We also discuss the computational difficulties inherent in Bayesian methods, along with modern methods for approximate solutions such as Markov chain Monte Carlo. Finally, we discuss how Bayesian optimization in the context of AutoML may one day put data scientists like Connor out of work.

Panel: Dr. Keith Duggar, Alex Stenlake, Dr. Tim Scarfe

00:00:00 Duggar's philosophical ramblings on Bayesianism
00:05:10 Introduction
00:07:30 Small datasets and prior scientific knowledge
00:10:37 Bayesian methods are probability theory
00:14:00 Bayesian methods demand hard computations
00:15:46 Uncertainty can matter more than estimators
00:19:29 Updating or combining knowledge is a key feature
00:25:39 Frequency or reasonable expectation as the primary concept
00:30:02 Gambling and coin flips
00:37:32 Rev. Thomas Bayes's pool table
00:40:37 Ignorance priors are beautiful yet hard
00:43:49 Connections between common distributions
00:49:13 A curious universe, Benford's law
00:55:17 Choosing priors, a tale of two factories
01:02:19 Integration, the computational Achilles heel
01:35:25 Bayesian social context in the ML community
01:10:24 Frequentist methods as a first approximation
01:13:13 Driven to Bayesian methods by small sample size
01:18:46 Bayesian optimization with AutoML, a job killer?
01:25:28 Different approaches to hyper-parameter optimization
01:30:18 Advice for aspiring Bayesians
01:33:59 Who would Connor interview next?

Connor Tann: https://www.linkedin.com/in/connor-tann-a92906a1/ https://twitter.com/connossor
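The key Bayesian feature the episode highlights, updating knowledge as data arrives, can be sketched with the classic coin-flip example discussed around 00:30:02. This is a minimal illustrative snippet, not code from the episode: it assumes a conjugate Beta prior over the coin's bias, so the posterior after observing flips is another Beta distribution and no numerical integration is needed.

```python
# Bayesian updating for a coin's bias using the Beta-Binomial conjugate model.
# Prior: Beta(a, b). After observing `heads` heads in `n` flips, the
# posterior is Beta(a + heads, b + n - heads) -- conjugacy makes the
# "hard computation" (integration) trivial in this special case.

def update_beta(a, b, heads, n):
    """Return posterior Beta parameters after n coin flips."""
    return a + heads, b + (n - heads)

def beta_mean(a, b):
    """Posterior mean estimate of the coin's probability of heads."""
    return a / (a + b)

# Start from a uniform "ignorance" prior Beta(1, 1), observe 7 heads in 10 flips.
a, b = update_beta(1, 1, heads=7, n=10)
print(a, b)                  # posterior is Beta(8, 4)
print(beta_mean(a, b))       # posterior mean pulls toward 0.5 vs. the raw 0.7
```

For non-conjugate models this closed form disappears, which is exactly where the Markov chain Monte Carlo methods mentioned in the episode come in.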
Jan 3, 2021 • 1h 43min

#036 - Max Welling: Quantum, Manifolds & Symmetries in ML

This conversation features Max Welling, a prominent Professor and VP of Technology at Qualcomm, known for his innovative work in geometric deep learning. He discusses the crucial role of domain knowledge in machine learning and how inductive biases impact model predictions. The dialogue also explores the fascinating intersection of quantum computing and AI, particularly the potential of quantum neural networks. Furthermore, Welling highlights the significance of symmetries in neural networks and their applications in real-world problems, including protein folding.
Dec 27, 2020 • 2h 56min

#035 Christmas Community Edition!

Alex Mattick, a community member from Yannic Kilcher's Discord and a type theory expert, dives into the fascinating intersections of type theory and AI. They dissect cutting-edge research, including debates on neural networks as kernel machines and critiques of neural-symbolic models. The conversation highlights the importance of inductive priors and explores lambda calculus, shedding light on its vital role in programming correctness. With insights from community discussions, this chat is a treasure trove for AI enthusiasts!