
Machine Learning Street Talk (MLST)

Latest episodes

Jul 25, 2021 • 2h 31min

#57 - Prof. Melanie Mitchell - Why AI is harder than we think

Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. YouTube video: https://www.youtube.com/watch?v=A8m1Oqz2HKc Main show kick off [00:26:51] Panel: Dr. Tim Scarfe, Dr. Keith Duggar, Letitia Parcalabescu (https://www.youtube.com/c/AICoffeeBreak/)
Jul 8, 2021 • 1h 11min

#56 - Dr. Walid Saba, Gadi Singer, Prof. J. Mark Bishop (Panel discussion)

It has been over three decades since the statistical revolution took AI by storm, and over two decades since deep learning (DL) helped usher in the latest resurgence of artificial intelligence (AI). However, the disappointing progress in conversational agents, NLU, and self-driving cars has made it clear that progress has not lived up to the promise of these empirical and data-driven methods. DARPA has suggested that it is time for a third wave in AI, one that would be characterized by hybrid models: models that combine knowledge-based approaches with data-driven machine learning techniques. Joining us on this panel discussion are polymath and linguist Walid Saba (Co-founder, ONTOLOGIK.AI), Gadi Singer (VP & Director, Cognitive Computing Research, Intel Labs) and J. Mark Bishop (Professor of Cognitive Computing (Emeritus), Goldsmiths, University of London, and Scientific Adviser to FACT360). Moderated by Dr. Keith Duggar and Dr. Tim Scarfe. https://www.linkedin.com/in/gadi-singer/ https://www.linkedin.com/in/walidsaba/ https://www.linkedin.com/in/profjmarkbishop/ #machinelearning #artificialintelligence
Jun 21, 2021 • 1h 36min

#55 Self-Supervised Vision Models (Dr. Ishan Misra - FAIR).

Dr. Ishan Misra is a Research Scientist at Facebook AI Research where he works on Computer Vision and Machine Learning. His main research interest is reducing the need for human supervision, and indeed, human knowledge in visual learning systems. He finished his PhD at the Robotics Institute at Carnegie Mellon. He has done stints at Microsoft Research, INRIA and Yale. His bachelor's degree is in computer science, where he achieved the highest GPA in his cohort. Ishan is fast becoming a prolific scientist, already with more than 3000 citations under his belt and co-authoring with Yann LeCun, the godfather of deep learning. Today, though, we will be focusing on an exciting cluster of recent papers around unsupervised representation learning for computer vision released from FAIR. These are: DINO: Emerging Properties in Self-Supervised Vision Transformers; BARLOW TWINS: Self-Supervised Learning via Redundancy Reduction; and PAWS: Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples. All of these papers are hot off the press, having been officially released only in the last month or so. Many of you will remember PIRL: Self-Supervised Learning of Pretext-Invariant Representations, of which Ishan was the primary author in 2019. References: Shuffle and Learn - https://arxiv.org/abs/1603.08561 DepthContrast - https://arxiv.org/abs/2101.02691 DINO - https://arxiv.org/abs/2104.14294 Barlow Twins - https://arxiv.org/abs/2103.03230 SwAV - https://arxiv.org/abs/2006.09882 PIRL - https://arxiv.org/abs/1912.01991 AVID - https://arxiv.org/abs/2004.12943 (best paper candidate at CVPR'21, just announced over the weekend - http://cvpr2021.thecvf.com/node/290) Alexei (Alyosha) Efros http://people.eecs.berkeley.edu/~efros/ http://www.cs.cmu.edu/~tmalisie/projects/nips09/ Exemplar networks https://arxiv.org/abs/1406.6909 The bitter lesson - Rich Sutton http://www.incompleteideas.net/IncIdeas/BitterLesson.html Machine Teaching: A New Paradigm for Building Machine Learning Systems https://arxiv.org/abs/1707.06742 POET https://arxiv.org/pdf/1901.01753.pdf
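To give a flavour of the redundancy-reduction idea behind Barlow Twins, here is a minimal sketch of the loss (an illustration under assumed tensor shapes, not the authors' reference code): the cross-correlation matrix between the embeddings of two augmented views of the same batch is pushed towards the identity, so matched dimensions agree (invariance) while different dimensions decorrelate (redundancy reduction).

```python
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    """Illustrative Barlow Twins objective. z_a, z_b: [batch, dim] embeddings
    of two augmented views of the same batch of images."""
    n, _ = z_a.shape
    # Standardise each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / n                                          # [dim, dim] cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()                 # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()    # redundancy-reduction term
    return on_diag + lambd * off_diag
```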
Jun 4, 2021 • 2h 24min

#54 Gary Marcus and Luis Lamb - Neurosymbolic models

Professor Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. Gary said in his recent "next decade" paper that, without us, or other creatures like us, the world would continue to exist, but it would not be described, distilled, or understood. Human lives are filled with abstraction and causal description. This is so powerful. Francois Chollet said the other week that intelligence is literally sensitivity to abstract analogies, and that is all there is to it. It's almost as if one of the most important features of intelligence is the ability to abstract knowledge; this drives the generalisation that allows you to mine previous experience to make sense of many future novel situations. Also joining us today is Professor Luis Lamb, Secretary of Innovation for Science and Technology of the State of Rio Grande do Sul, Brazil. His research interests are machine learning and reasoning, neuro-symbolic computing, logic in computation and artificial intelligence, cognitive and neural computation, and also AI ethics and social computing. Luis released his new paper, Neurosymbolic AI: the third wave, at the end of last year. It beautifully articulated the key ingredients needed in the next generation of AI systems, integrating type 1 and type 2 approaches to AI, and it summarises all of the achievements of the last 20 years of research. We cover a lot of ground in today's show: the limitations of deep learning, Rich Sutton's bitter lesson and "reward is enough", and the semantic foundation required for us to build robust AI.
May 19, 2021 • 2h 18min

#53 Quantum Natural Language Processing - Prof. Bob Coecke (Oxford)

Bob Coecke is a celebrated physicist; he has been a professor of quantum foundations at Oxford University for the last 20 years. He is particularly interested in structure, which is to say logic, order, and category theory. He is well known for his work on compositional distributional models of natural language meaning, and he is also fascinated with understanding how our brains work. Bob was recently appointed Chief Scientist at Cambridge Quantum Computing. Bob thinks that the way systems interact in quantum mechanics carries over naturally to how word meanings interact in natural language. Bob argues that this interaction embodies the phenomenon of quantum teleportation. Bob invented the ZX-calculus, a graphical calculus for revealing the compositional structure inside quantum circuits - to show entangled states and protocols in a visually succinct but logically complete way. Von Neumann himself didn't even like his own original symbolic formalism of quantum theory, despite it being widely used! We hope you enjoy this fascinating conversation, which might give you a lot of insight into natural language processing. Tim Intro [00:00:00] The topological brain (Post-record button skit) [00:13:22] Show kick off [00:19:31] Bob introduction [00:22:37] Changing culture in universities [00:24:51] Machine Learning is like electricity [00:31:50] NLP -- what is Bob's Quantum conception? [00:34:50] The missing text problem [00:52:59] Can statistical induction be trusted? [00:59:49] On pragmatism and hybrid systems [01:04:42] Parlour tricks, parsing and information flows [01:07:43] How much human input is required with Bob's method? [01:11:29] Reality, meaning, structure and language [01:14:42] Replacing complexity with quantum entanglement, emergent complexity [01:17:45] Loading quantum data requires machine learning [01:19:49] QC is happy math coincidence for NLP [01:22:30] The Theory of English (ToE) [01:28:23] ... or can we learn the ToE? [01:29:56] How did diagrammatic quantum calculus come about? [01:31:04] The state of quantum computing today [01:37:49] NLP on QC might be doable even in the NISQ era [01:40:48] Hype and private investment are driving progress [01:48:34] Crypto discussion (moved to post-show) [01:50:38] Kilcher is in a startup (moved to post show) [01:53:40] Debrief [01:55:26]
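As a very loose, classical illustration of the compositional distributional idea (a toy NumPy sketch under assumed dimensions, not Bob's categorical formalism and certainly not a quantum implementation): a transitive verb can be treated as a tensor that is contracted with the subject and object noun vectors, with the wiring of the contraction playing the role that diagrams play in Bob's calculus.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # toy dimensionality of the noun space

subject = rng.normal(size=d)            # vector meaning of the subject noun
obj = rng.normal(size=d)                # vector meaning of the object noun
verb = rng.normal(size=(d, d, d))       # transitive verb: (subject, sentence, object) tensor

# Sentence meaning: contract the verb tensor with the subject and object vectors.
sentence = np.einsum('i,isj,j->s', subject, verb, obj)
print(sentence.shape)                   # (4,) -- a vector in the sentence space
```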
May 1, 2021 • 1h 48min

#52 - Unadversarial Examples (Hadi Salman, MIT)

Performing reliably on unseen or shifting data distributions is a difficult challenge for modern vision systems: even slight corruptions or transformations of images are enough to slash the accuracy of state-of-the-art classifiers. When an adversary is allowed to modify an input image directly, models can be manipulated into predicting anything, even when there is no perceptible change; this is known as an adversarial example. The ideal definition of an adversarial example is when humans consistently say two pictures are the same but a machine disagrees. Hadi Salman, a Ph.D. student at MIT (previously at Uber and Microsoft Research), started thinking about how adversarial robustness could be leveraged beyond security. He realised that the phenomenon of adversarial examples could actually be turned upside down to lead to more robust models instead of breaking them. Hadi utilized the brittleness of neural networks to design unadversarial examples, or robust objects, which are objects designed specifically to be robustly recognized by neural networks. Introduction [00:00:00] DR KILCHER'S PHD HAT [00:11:18] Main Introduction [00:11:38] Hadi's Introduction [00:14:43] More robust models == transfer better [00:46:41] Features not bugs paper [00:49:13] Manifolds [00:55:51] Robustness and Transferability [00:58:00] Do non-robust features generalize worse than robust? [00:59:52] The unreasonable predicament of entangled features [01:01:57] We can only find adversarial examples in the vicinity [01:09:30] Certifiability of models for robustness [01:13:55] Carlini is coming for you! And we are screwed [01:23:21] Distribution shift and corruptions are a bigger problem than adversarial examples [01:25:34] All roads lead to generalization [01:26:47] Unadversarial examples [01:27:26]
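To make the "turned upside down" idea concrete, here is a rough sketch (illustrative only, with assumed shapes and hyperparameters, not Hadi's actual code): where an adversarial attack ascends the classifier's loss, an unadversarial patch is optimised by descending it, so the perturbation makes the model more confident in the correct class.

```python
import torch
import torch.nn.functional as F

def optimize_unadversarial_patch(model, images, true_labels, steps=100, lr=0.01):
    """Illustrative sketch: learn an additive perturbation (an 'unadversarial patch')
    that makes `model` MORE confident in the correct class, the reverse of an
    adversarial attack. images: [N, C, H, W] in [0, 1]; true_labels: [N]."""
    model.eval()
    patch = torch.zeros_like(images[0], requires_grad=True)     # one patch shared by the batch
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        perturbed = torch.clamp(images + patch, 0.0, 1.0)       # keep pixels in a valid range
        loss = F.cross_entropy(model(perturbed), true_labels)   # descend, don't ascend
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach()
```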
Apr 16, 2021 • 2h 2min

#51 Francois Chollet - Intelligence and Generalisation

In today's show we are joined by Francois Chollet. I have been inspired by Francois ever since I read his Deep Learning with Python book and started using the Keras library, which he invented many, many years ago. Francois has a clarity of thought that I've never seen in any other human being! He has extremely interesting views on intelligence as generalisation, abstraction and an information conversion ratio. He wrote On the Measure of Intelligence at the end of 2019 and it had a huge impact on my thinking. He thinks that NNs can only model continuous problems, which have a smooth, learnable manifold, and that many "type 2" problems which involve reasoning and/or planning are not suitable for NNs. He thinks that many problems have type 1 and type 2 enmeshed together. He thinks that the future of AI must include program synthesis to allow us to generalise broadly from a few examples, but the search could be guided by neural networks because the search space is interpolative to some extent. https://youtu.be/J0p_thJJnoo Tim's Whimsical notes: https://whimsical.com/chollet-show-QQ2atZUoRR3yFDsxKVzCbj
Apr 4, 2021 • 1h 33min

#50 Christian Szegedy - Formal Reasoning, Program Synthesis

Dr. Christian Szegedy from Google Research is a deep learning heavyweight. He invented adversarial examples, one of the first object detection algorithms, and the Inception architecture, and co-invented BatchNorm. He thinks that if you had bet on computers and software in 1990 you would have been as right as if you bet on AI now. But he thinks that we have been programming computers the same way since the 1950s and there has been a huge stagnation ever since. Mathematics is the process of taking a fuzzy thought and formalising it. But could we automate that? Could we create a system that acts like a superhuman mathematician, but that you can talk to in natural language? This is what Christian calls autoformalisation. Christian thinks that automating many of the things we do in mathematics is the first step towards software synthesis and building human-level AGI. Mathematical ability is the litmus test for general reasoning ability. Christian has a fascinating take on transformers too. With Yannic Lightspeed Kilcher and Dr. Mathew Salvaris. Whimsical Canvas with Tim's Notes: https://whimsical.com/mar-26th-christian-szegedy-CpgGhnEYDBrDMFoATU6XYC YouTube version (with detailed table of contents): https://youtu.be/ehNGGYFO6ms
Mar 23, 2021 • 1h 25min

#49 - Meta-Gradients in RL - Dr. Tom Zahavy (DeepMind)

Dr. Tom Zahavy of DeepMind discusses reinforcement learning as a potential path to artificial general intelligence through maximising rewards. The conversation dives into the concept of meta-gradients in RL, their role in optimization algorithms, and challenges in evaluating performance. The podcast also explores deep hierarchical lifelong learning in Minecraft, the relationship between software, hardware, and ML progress, reviving concept papers, and exploring hierarchical state spaces in DQN agents.
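To give a flavour of what a meta-gradient is, here is a heavily simplified sketch on a toy objective (an assumed setup, not Tom's algorithm or a full RL loop): a meta-parameter such as the inner-loop learning rate is adapted by differentiating the post-update loss through the inner update itself.

```python
import torch

# Meta-gradient sketch: adapt an inner-loop learning rate by differentiating
# the post-update ("outer") loss through the inner update.
log_lr = torch.tensor(-2.0, requires_grad=True)           # meta-parameter (log learning rate)
meta_opt = torch.optim.SGD([log_lr], lr=0.05)

def loss_fn(w):                                            # stand-in for the agent's objective
    return ((w - 3.0) ** 2).sum()

w = torch.zeros(1, requires_grad=True)                     # "agent" parameters
for step in range(200):
    lr = log_lr.exp()
    inner_grad = torch.autograd.grad(loss_fn(w), w, create_graph=True)[0]
    w_updated = w - lr * inner_grad                        # differentiable inner update
    outer = loss_fn(w_updated)                             # performance after the update
    meta_opt.zero_grad()
    outer.backward()                                       # meta-gradient w.r.t. log_lr
    meta_opt.step()
    w = w_updated.detach().requires_grad_(True)            # commit the inner update
```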
Mar 16, 2021 • 37min

#48 Machine Learning Security - Andy Smith

First episode in a series we are doing on ML DevOps, starting with the thing nobody seems to be talking about enough: security! We chat with cyber security expert Andy Smith about threat modelling and trust boundaries for an ML DevOps system. Intro [00:00:00] ML DevOps - a security perspective [00:00:50] Threat Modelling [00:03:03] Adversarial examples? [00:11:27] Nobody understands the whole stack [00:13:53] On the size of the state space, the element of unpredictability [00:18:32] Threat modelling in more detail [00:21:17] Trust boundaries for an ML DevOps system [00:25:45] Andy has a YouTube channel on cyber security! Check it out at https://www.youtube.com/channel/UCywP24ly6h6NTusX88TQKTQ https://www.linkedin.com/in/andysmith-uk/ Video version: https://youtu.be/7Tz-3S4lypI
