The Gradient: Perspectives on AI

Latest episodes

Nov 11, 2021 • 1h 11min

Alex Tamkin on Self-Supervised Learning and Large Language Models

In episode 15 of The Gradient Podcast, we talk to Stanford PhD Candidate Alex Tamkin.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Alex Tamkin is a fourth-year PhD student in Computer Science at Stanford, advised by Noah Goodman and part of the Stanford NLP Group. His research focuses on understanding, building, and controlling pretrained models, especially in domain-general or multimodal settings.

We discuss:
* Viewmaker Networks: Learning Views for Unsupervised Representation Learning
* DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning
* On the Opportunities and Risks of Foundation Models
* Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models
* Mentoring, teaching, and fostering a healthy and inclusive research culture
* Scientific communication and breaking down walls between fields

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Oct 28, 2021 • 1h 29min

Peter Henderson on RL Benchmarking, Climate Impacts of AI, and AI for Law

In episode 14 of The Gradient Podcast, we interview Stanford PhD Candidate Peter Henderson.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Peter is a joint JD-PhD student at Stanford University advised by Dan Jurafsky. He is also an Open Philanthropy AI Fellow and a Graduate Student Fellow at the Regulation, Evaluation, and Governance Lab. His research focuses on creating robust decision-making systems, with three main goals: (1) use AI to make governments more efficient and fair; (2) ensure that AI isn’t deployed in ways that can harm people; (3) create new ML methods for applications that are beneficial to society.

Links:
* Reproducibility and Reusability in Deep Reinforcement Learning
* Benchmark Environments for Multitask Learning in Continuous Domains
* Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control
* Deep Reinforcement Learning that Matters
* Reproducibility and Replicability in Deep Reinforcement Learning (and Other Deep Learning Methods)
* Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning
* How blockers can turn into a paper: A retrospective on "Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning"
* When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset
* How US law will evaluate artificial intelligence for Covid-19

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Oct 14, 2021 • 50min

Chelsea Finn on Meta Learning & Model Based Reinforcement Learning

In episode 13 of The Gradient Podcast, we interview Stanford Professor Chelsea Finn.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Chelsea is an Assistant Professor at Stanford University. Her lab, IRIS, studies intelligence through robotic interaction at scale, and is affiliated with SAIL and the Statistical ML Group. She also spends time at Google as part of the Google Brain team. Her research deals with the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction.

Links:
* Learning to Learn with Gradients
* Visual Model-Based Reinforcement Learning as a Path towards Generalist Robots
* RoboNet: A Dataset for Large-Scale Multi-Robot Learning
* Greedy Hierarchical Variational Autoencoders for Large-Scale Video
* Example-Driven Model-Based Reinforcement Learning for Solving Long-Horizon Visuomotor Tasks

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Oct 1, 2021 • 55min

Devi Parikh on Generative Art & AI for Creativity

In episode 12 of The Gradient Podcast, we interview Devi Parikh, a professor at Georgia Tech whose research focuses on computer vision, natural language processing, embodied AI, human-AI collaboration, and AI for creativity.

Devi Parikh is an Associate Professor in the School of Interactive Computing at Georgia Tech, and a Research Scientist at Facebook AI Research (FAIR). Her research interests are in computer vision, natural language processing, embodied AI, human-AI collaboration, and AI for creativity. In the past, she has also been an Assistant Professor at Virginia Tech and a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC). She received her M.S. and Ph.D. degrees from the Electrical and Computer Engineering department at Carnegie Mellon University in 2007 and 2009, respectively.

Links:
* Humans of AI Podcast
* Feel The Music: Automatically Generating A Dance For An Input Song
* Exploring Crowd Co-creation Scenarios for Sketches
* Neuro-Symbolic Generative Art: A Preliminary Study
* Creative Sketch Generation

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Sep 16, 2021 • 54min

Sergey Levine on Robot Learning & Offline RL

In episode 11 of The Gradient Podcast, we interview Sergey Levine, a professor at Berkeley whose research focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms for robotics.

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms, and includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Sep 9, 2021 • 58min

Jeremy Howard on Kaggle, Enlitic, and fast.ai

Jeremy Howard, data scientist and entrepreneur, discusses his journey from Kaggle to founding fast.ai, democratizing AI learning, empowering domain experts with deep learning, and the benefits of project-based learning in computer science education. He highlights the transformative impact of the fast.ai courses on learners and the joy of fatherhood.
Sep 3, 2021 • 1h 15min

Evan Hubinger on Effective Altruism and AI Safety

In episode 9 of The Gradient Podcast, we interview AI safety researcher Evan Hubinger.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Evan is an AI safety veteran who’s done research at leading AI labs like OpenAI, and whose experience also includes stints at Google, Ripple, and Yelp. He currently works at the Machine Intelligence Research Institute (MIRI) as a Research Fellow, and joined me to talk about his views on AI safety, the alignment problem, and whether humanity is likely to survive the advent of superintelligent AI.

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Aug 27, 2021 • 41min

Yannic Kilcher on Being an AI Researcher and Educator

In episode 8 of The Gradient Podcast, we interview Yannic Kilcher, an AI researcher and educator.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Yannic graduated with his PhD from ETH Zurich’s data analytics lab and is now the Chief Technology Officer of DeepJudge, a company building a next-generation AI-powered, context-sensitive legal document processing platform. He famously produces videos on his very popular YouTube channel, which cover machine learning research papers, programming, issues in the AI community, and the broader impact of AI in society.

Check out his YouTube channel here and follow him on Twitter here.

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Aug 19, 2021 • 47min

Alexander Veysov on Self-Teaching AI and Creating Open Speech-To-Text

In episode 7 of The Gradient Podcast, we interview Alexander Veysov, founder and owner of Silero. You can find a transcript of our conversation here, and the repositories for Open Speech To Text and Silero Models here.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Alexander Veysov is the founder / owner of Silero, a small company building Speech / NLP enabled products, and the author of Open STT. Silero has recently shipped its own Russian STT engine. Previously he worked at a then Moscow-based VC firm and at Ponominalu.ru, a ticketing startup acquired by MTS (a major Russian telco). He received his BA and MA in Economics from the Moscow State Institute of International Relations (MGIMO). You can follow his channel on Telegram (@snakers41).

Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Aug 5, 2021 • 56min

Yann LeCun on his Start in Research and Self-Supervised Learning

In episode 6 of The Gradient Podcast, we interview Deep Learning pioneer Yann LeCun.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Yann LeCun is the VP & Chief AI Scientist at Facebook and Silver Professor at NYU, and he was also the founding Director of Facebook AI Research and of the NYU Center for Data Science. He famously pioneered the use of Convolutional Neural Nets for image processing in the 80s and 90s, and is generally regarded as one of the people whose work was pivotal to the Deep Learning revolution in AI. In fact, he is the recipient of the 2018 ACM Turing Award (with Geoffrey Hinton and Yoshua Bengio) for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".

Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music".

Get full access to The Gradient at thegradientpub.substack.com/subscribe
