
Machine Learning Street Talk (MLST)

Latest episodes

Nov 1, 2020 • 2h 5min

AI Alignment & AGI Fire Alarm - Connor Leahy

This week Dr. Tim Scarfe, Alex Stenlake and Yannic Kilcher speak with AGI and AI alignment specialist Connor Leahy, a machine learning engineer at Aleph Alpha and founder of EleutherAI. Connor believes that AI alignment is philosophy with a deadline and that we are on the precipice; the stakes are astronomical. AI is important, and it will go wrong by default. Connor thinks that the singularity or intelligence explosion is near. He says that AGI is like climate change but worse: even harder problems, an even shorter deadline and even worse consequences for the future. These problems are hard, and nobody knows what to do about them.

00:00:00 Introduction to AI alignment and AGI fire alarm
00:15:16 Main Show Intro
00:18:38 Different schools of thought on AI safety
00:24:03 What is intelligence?
00:25:48 AI Alignment
00:27:39 Humans don't have a coherent utility function
00:28:13 Newcomb's paradox and advanced decision problems
00:34:01 Incentives and behavioural economics
00:37:19 Prisoner's dilemma
00:40:24 Ayn Rand and game theory in politics and business
00:44:04 Instrumental convergence and orthogonality thesis
00:46:14 Utility functions and the stop button problem
00:55:24 AI corrigibility - self alignment
00:56:16 Decision theory and stability / wireheading / robust delegation
00:59:30 Stop button problem
01:00:40 Making the world a better place
01:03:43 Is intelligence a search problem?
01:04:39 Mesa optimisation / humans are misaligned AI
01:06:04 Inner vs outer alignment / faulty reward functions
01:07:31 Large corporations are intelligent and have no stop function
01:10:21 Dutch booking / what is rationality / decision theory
01:16:32 Understanding very powerful AIs
01:18:03 Kolmogorov complexity
01:19:52 GPT-3 - is it intelligent, are humans even intelligent?
01:28:40 Scaling hypothesis
01:29:30 Connor thought DL was dead in 2017
01:37:54 Why is GPT-3 as intelligent as a human
01:44:43 Jeff Hawkins on intelligence as compression and the great lookup table
01:50:28 Is AI ethics related to AI alignment?
01:53:26 Interpretability
01:56:27 Regulation
01:57:54 Intelligence explosion

Discord: https://discord.com/invite/vtRgjbM
EleutherAI: https://www.eleuther.ai
Twitter: https://twitter.com/npcollapse
LinkedIn: https://www.linkedin.com/in/connor-j-leahy/
Oct 28, 2020 • 1h 27min

Kaggle, ML Community / Engineering (Sanyam Bhutani)

Join Dr. Tim Scarfe, Sayak Paul, Yannic Kilcher and Alex Stenlake as they have a conversation with Mr. Chai Time Data Science himself: Sanyam Bhutani!

00:00:00 Introduction
00:03:42 Show kick off
00:06:34 How did Sanyam get started in ML
00:07:46 Being a content creator
00:09:01 Can you be self-taught without a formal education in ML?
00:22:54 Kaggle
00:33:41 H2O product / job
00:40:58 Interpretability / bias / engineering skills
00:43:22 Get that first job in DS
00:46:29 AWS MLOps architecture / ML engineering
01:14:19 Patterns
01:18:09 Testability
01:20:54 Adversarial examples

Sanyam's blog -- https://sanyambhutani.com/tag/chaitimedatascience/
Chai Time Data Science -- https://www.youtube.com/c/ChaiTimeDataScience
Oct 20, 2020 • 1h 31min

Sara Hooker - The Hardware Lottery, Sparsity and Fairness

Dr. Tim Scarfe, Yannic Kilcher and Sayak Paul chat with Sara Hooker from the Google Brain team! We discuss her recent hardware lottery paper, pruning / sparsity, bias mitigation and interpretability.

The hardware lottery -- what causes inertia or friction in the marketplace of ideas? Is there a meritocracy of ideas, or do the previous decisions we have made enslave us? Sara Hooker calls this a lottery because she feels that machine learning progress is entirely beholden to the hardware and software landscape. Ideas succeed if they are compatible with the hardware and software of the time, and with the existing inventions. The machine learning community is exceptional because the pace of innovation is fast and we operate largely in the open; this is largely because we don't build anything physical, which would be expensive and slow, with a high cost of being scooped. We get stuck in basins of attraction based on our technology decisions, and it's expensive to jump outside of these basins. So is this story unique to hardware and AI algorithms, or is it really just the story of all innovation? Every great innovation must wait for the right stepping stone to be in place before it can really happen. We are excited to bring you Sara Hooker to give her take.

YouTube version (including TOC): https://youtu.be/sQFxbQ7ade0
Show notes: https://drive.google.com/file/d/1S_rHnhaoVX4Nzx_8e3ESQq4uSswASNo7/view?usp=sharing
Sara Hooker's page: https://www.sarahooker.me
Oct 11, 2020 • 1h 16min

The Social Dilemma Part 3 - Dr. Rebecca Roache

This week, join Dr. Tim Scarfe, Yannic Kilcher and Keith Duggar as they have a conversation with Dr. Rebecca Roache in the last of our three-part series on the Social Dilemma Netflix film. Rebecca is a senior lecturer in philosophy at Royal Holloway, University of London and has written extensively about the future of friendship.

People claim that friendships are not what they used to be. People are always staring at their phones, even when in public. Social media has turned us into narcissists who are always managing our own PR rather than being present with each other. Yet anxiety about the negative effects of technology is as old as the written word. Is technology bad for friendships? Can you have friends through screens? Does social media cause polarization? And is that a bad thing? Does it promote quantity over quality? Rebecca thinks that social media and echo chambers are less ominous for friendship on closer inspection.

00:00:32 Teaser clip from Rebecca and her new manuscript on friendship
00:02:52 Introduction
00:04:56 Memorisation vs reasoning / is technology enhancing friendships
00:09:29 World of Warcraft / gaming communities / echo chambers / polarisation
00:12:34 Horizontal vs vertical social attributes
00:17:18 Exclusion of others' opinions
00:20:36 The power to silence others / truth verification
00:23:58 Misinformation
00:27:28 Norms / memes / political terms and co-opting / bullying
00:31:57 Redefinition of political terms i.e. racism
00:36:13 Virtue signalling
00:38:57 How many friends can you have / spread thin / Dunbar's 150
00:42:54 Is it morally objectionable to believe or contemplate objectionable ideas, punishment
00:50:52 Is speaking the same thing as acting
00:52:24 Punishment - deterrence vs retribution / historical
00:53:59 Yannic: contemplating is a form of speaking
00:57:32 Silencing/blocking is intellectual laziness - what ideas are we allowed to talk about
01:04:53 Corporate AI ethics frameworks
01:09:14 Autonomous vehicles
01:10:51 The eternal Facebook world / online vs offline friendships
01:14:05 How do we get the best out of our online friendships
Oct 6, 2020 • 1h 46min

The Social Dilemma - Part 2

This week on Machine Learning Street Talk, Dr. Tim Scarfe, Dr. Keith Duggar, Alex Stenlake and Yannic Kilcher have a conversation with the founder and principal researcher at the Montreal AI Ethics Institute -- Abhishek Gupta. We cover several topics from the Social Dilemma film and AI ethics in general.

00:00:00 Introduction
00:03:57 Overcome our weaknesses
00:14:30 Threat landscape blind spots
00:18:35 Differential reality vs universal shaping
00:24:21 Shared reality incentives and tools
00:32:01 Transparency and knowledge to avoid pathology
00:40:09 Federated informed autonomy
00:49:48 Diversity is a metric, inclusion is a strategy
00:59:58 Locally aligned pockets can stabilize global diversity
01:10:58 Making inclusion easier with tools
01:23:35 Enabling community feedback
01:26:16 Open source the algorithms
01:33:02 The N+1 cost of inclusion
01:38:08 Broader impact statement

https://atg-abhishek.github.io
https://www.linkedin.com/in/abhishekguptamcgill/
Oct 3, 2020 • 1h 7min

The Social Dilemma - Part 1

In this first part of our three-part series on the Social Dilemma Netflix film, Dr. Tim Scarfe, Yannic "Lightspeed" Kilcher and Zak Jost gang up with cybersecurity expert Andy Smith. We give you our take on the film. We are super excited to get your feedback on this one! Hope you enjoy.

00:00:00 Introduction
00:06:11 Moral hypocrisy
00:12:38 Road to hell is paved with good intentions, attention economy
00:15:04 They know everything about you
00:18:02 Addiction
00:21:22 Differential realities
00:26:12 Self determination and monetisation
00:29:08 AI: overwhelm human strengths, undermine human vulnerabilities
00:31:51 Conspiracy theory / fake news
00:34:23 Overton window / polarisation
00:39:12 Short attention span / convergent behaviour
00:41:26 Is social media good for you?
00:45:17 Your attention time is linear, the things you can pay attention to are a volume, anonymity
00:51:32 Andy's question on security: social engineering
00:56:32 Is it a security risk having your information on social media?
00:58:02 Retrospective judgement
01:03:06 Free speech and censorship
01:06:06 Technology accelerator
Sep 29, 2020 • 1h 24min

Capsule Networks and Education Targets

In today's episode, Dr. Keith Duggar, Alex Stenlake and Dr. Tim Scarfe chat about the education chapter in Kenneth Stanley's "Why Greatness Cannot Be Planned" book, and we relate it to our Algoshambes conversation a few weeks ago. We debate whether objectives in education are a good thing, and whether they cause perverse incentives and stifle creativity and innovation. Next up, we dissect capsule networks from the top down! We finish off talking about fast algorithms and quantum computing.

00:00:00 Introduction
00:01:13 Greatness cannot be planned / education
00:12:03 Perverse incentives
00:19:25 Treasure hunting
00:30:28 Capsule Networks
00:46:08 Capsules as Compositional Networks
00:52:45 Capsule Routing
00:57:10 Loss and Warps
01:09:55 Fast Algorithms and Quantum Computing
Sep 25, 2020 • 1h 24min

Programming Languages, Software Engineering and Machine Learning

This week Dr. Tim Scarfe, Dr. Keith Duggar and Yannic "Lightspeed" Kilcher have a conversation with Microsoft Senior Software Engineer Sachin Kundu. We speak about programming languages, including which ones are our favourites, and functional programming vs OOP. Next we speak about software engineering and the intersection of software engineering and machine learning. We also talk about applications of ML, and finally what makes an exceptional software engineer and tech lead. Sachin is an expert in this field so we hope you enjoy the conversation! Spoiler alert: how many of you have read The Mythical Man-Month by Frederick P. Brooks?!

00:00:00 Introduction
00:06:37 Programming Languages
00:53:41 Applications of ML
01:55:59 What makes an exceptional SE and tech lead
01:22:08 Outro
Sep 22, 2020 • 1h 14min

Computation, Bayesian Model Selection, Interactive Articles

This week Dr. Keith Duggar, Alex Stenlake and Dr. Tim Scarfe discuss the theory of computation, intelligence, Bayesian model selection, the intelligence explosion and the phenomenon of "interactive articles".

00:00:00 Intro
00:01:27 Kernels and context-free grammars
00:06:04 Theory of computation
00:18:41 Intelligence
00:22:03 Bayesian model selection
00:44:05 AI-IQ Measure / Intelligence explosion
00:52:09 Interactive articles
01:12:32 Outro
Sep 18, 2020 • 1h 37min

Kernels!

Today Yannic "Lightspeed" Kilcher and I spoke with Alex Stenlake about kernel methods. What is a kernel? Do you remember those weird kernel things which everyone obsessed about before deep learning? What about the representer theorem and reproducing kernel Hilbert spaces? SVMs and kernel ridge regression? Remember them?! Hope you enjoy the conversation!

00:00:00 Tim Intro
00:01:35 Yannic's clever insight from this discussion
00:03:25 Street talk and Alex intro
00:05:06 How kernels are taught
00:09:20 Computational tractability
00:10:32 Maths
00:11:50 What is a kernel?
00:19:39 Kernel latent expansion
00:23:57 Overfitting
00:24:50 Hilbert spaces
00:30:20 Compare to DL
00:31:18 Back to Hilbert spaces
00:45:19 Computational tractability 2
00:52:23 Curse of dimensionality
00:55:01 RBF: infinite Taylor series
00:57:20 Margin/SVM
01:00:07 KRR/dual
01:03:26 Complexity of computing kernels vs deep learning
01:05:03 Good for small problems? (vs deep learning)
01:07:50 What's special about the RBF kernel
01:11:06 Another DL comparison
01:14:01 Representer theorem
01:20:05 Relation to back prop
01:25:10 Connection with NLP/transformers
01:27:31 Where else kernels are good
01:34:34 Deep learning vs dual kernel methods
01:33:29 Thoughts on AI
01:34:35 Outro
