
The Gradient: Perspectives on AI
Deeply researched, technical interviews with experts thinking about AI and technology. thegradientpub.substack.com
Latest episodes

Aug 5, 2022 • 56min
Laura Weidinger: Ethical Risks, Harms, and Alignment of Large Language Models
In episode 37 of The Gradient Podcast, Andrey Kurenkov speaks to Laura Weidinger.

Laura is a senior research scientist at DeepMind focusing on AI ethics. She is also a PhD candidate at the University of Cambridge, studying philosophy of science and, specifically, approaches to measuring the ethics of AI systems. Previously, Laura worked in technology policy at the UK and EU levels as a Policy Executive at techUK. She then pivoted to cognitive science research, studying human learning at the Max Planck Institute for Human Development in Berlin, and was a Guest Lecturer at the Ada National College for Digital Skills. She received her Master's degree from the School of Mind and Brain at the Humboldt University of Berlin, focusing on neuroscience, philosophy, and cognitive science.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
(00:00) Intro
(01:20) Path to AI
(04:25) Research in Cognitive Science
(06:40) Interest in AI Ethics
(14:30) Ethics Considerations for Researchers
(17:38) Ethical and social risks of harm from language models
(25:30) Taxonomy of Risks posed by Language Models
(27:33) Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
(33:25) Main Insight for Measuring Harm
(35:40) The EU AI Act
(39:10) Alignment of language agents
(46:10) GPT-4Chan
(53:40) Interests outside of AI
(55:30) Outro

Links:
Ethical and social risks of harm from language models
Taxonomy of Risks posed by Language Models
Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
Alignment of language agents

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Jul 29, 2022 • 1h 4min
Sebastian Raschka: AI Education and Research
Sebastian Raschka, Assistant Professor of Statistics at the University of Wisconsin-Madison and Lead AI Educator at Lightning AI, discusses his AI journey, prioritizing learning, ordinal regression, and his work with Lightning AI in bridging the gap between research and real-world applications. He emphasizes the importance of visual engagement in AI education and finding a niche that excites you for producing great work.

Jul 22, 2022 • 31min
Lt. General Jack Shanahan: AI in the DoD, Project Maven, and Bridging the Tech-DoD Gap
In episode 35 of The Gradient Podcast, guest host Sharon Zhou speaks to Jack Shanahan.

John (Jack) Shanahan was a Lieutenant General in the United States Air Force, retired after a 36-year military career. He was the inaugural Director of the Joint Artificial Intelligence Center (JAIC) in the U.S. Department of Defense (DoD), and was also the Director of the Algorithmic Warfare Cross-Functional Team (Project Maven). Currently, he is a Special Government Employee supporting the National Security Commission on Artificial Intelligence; serves on the Board of Advisors for the Common Mission Project; advises The Changing Character of War Centre (Oxford University); is a member of the CACI Strategic Advisory Group; and serves as an Advisor to the Military Cyber Professionals Association.

Outline:
(00:00) Intro
(01:20) Introduction to Jack and Sharon
(07:30) Project Maven
(09:45) Relationship of Tech Sector and DoD
(16:40) Need for AI in DoD
(20:10) Bridging the tech-DoD divide
(30:00) Conclusion

Episode Links:
* John N.T. Shanahan Wikipedia
* AI To Revolutionize U.S. Intelligence Community With General Shanahan
* Email: aidodconversations@gmail.com

Jul 14, 2022 • 53min
Sara Hooker: Cohere For AI, the Hardware Lottery, and DL Tradeoffs
In episode 34 of The Gradient Podcast, Daniel Bashir speaks to Sara Hooker.

Sara (@sarahookr) leads Cohere For AI and is a former Research Scientist at Google. She founded Delta Analytics, a Bay Area non-profit that works with non-profits and communities to build technical capacity. She is also a co-founder of the Trustworthy ML Initiative, an active participant in the ML Collective research group, and a host of the Underrated ML podcast.

Sections:
(00:00) Intro
(02:20) Podcasting gripe-fest
(06:00) Sara's journey: from economics to AI
(09:15) Economics vs. AI research
(12:45) The Hardware Lottery
(19:15) Towards better hardware benchmarks
(26:00) Getting away from the hardware lottery
(32:30) The myth of compact, interpretable, robust, performant DNNs
(35:15) Top-line metrics vs. disaggregated metrics
(39:20) Solving memorization in the data pipeline, noisy examples
(45:35) Cohere For AI

Episode Links:
* Cohere for AI
* Sara's Homepage

Jul 7, 2022 • 47min
Lukas Biewald: Crowdsourcing at CrowdFlower and ML Tooling at Weights & Biases
In episode 33 of The Gradient Podcast, Andrey Kurenkov speaks to Lukas Biewald.

Lukas Biewald is a co-founder of Weights and Biases, a company that creates developer tools for machine learning. Prior to that, he was a co-founder and CEO of Figure Eight Inc. (formerly CrowdFlower), an Internet company that collects training data for machine learning, which was sold for $300 million.

Outline:
* (00:00) Intro
* (01:18) Start in AI
* (06:17) CrowdFlower / Crowdsourcing
* (21:06) Discovering Deep Learning
* (25:10) Learning Deep Learning
* (32:50) Weights and Biases
* (37:30) State of Tooling for ML
* (41:20) Exciting AI Trends
* (44:42) Interests outside of AI
* (45:40) Outro

Links:
* Lukas's website
* Lukas's GitHub
* Starting a Second Machine Learning Tools Company, Ten Years Later

Opportunity at Weights & Biases:

Jun 30, 2022 • 48min
Chip Huyen: Machine Learning Tools and Systems
In episode 32 of The Gradient Podcast, Andrey Kurenkov speaks to Chip Huyen.

Chip Huyen is a co-founder of Claypot AI, a platform for real-time machine learning. Previously, she was with Snorkel AI and NVIDIA. She teaches CS 329S: Machine Learning Systems Design at Stanford and maintains a Discord server focused on machine learning systems. She has also written four bestselling Vietnamese books, and her new O'Reilly book, Designing Machine Learning Systems, has just come out!

Outline:
* (00:00) Intro
* (01:30) 3-year trip through Asia, Africa, and South America
* (04:00) Getting into AI at Stanford
* (11:30) Confession of a so-called AI expert
* (16:40) Academia vs Industry
* (17:40) Focus on ML Systems
* (20:00) ML in Academia vs Industry
* (28:15) Maturity of AI in Industry
* (31:45) ML Tools
* (37:20) Real Time ML
* (43:00) ML Systems Class and Book

Links:
* Chip's website
* MLOps Discord server
* Confession of a so-called AI expert
* What I learned from looking at 200 machine learning tools
* CS 329S: Machine Learning Systems Design
* Designing Machine Learning Systems

Jun 24, 2022 • 1h 37min
Preetum Nakkiran: An Empirical Theory of Deep Learning
In episode 31 of The Gradient Podcast, Daniel Bashir speaks to Preetum Nakkiran.

Preetum is a Research Scientist at Apple, a Visiting Researcher at UCSD, and part of the NSF/Simons Collaboration on the Theoretical Foundations of Deep Learning. He completed his PhD at Harvard, where he co-founded the ML Foundations Group. Preetum's research focuses on building conceptual tools for understanding learning systems.

Sections:
(00:00) Intro
(01:25) Getting into AI through Theoretical Computer Science (TCS)
(09:08) Lack of Motivation in TCS and Learning What Research Is
(12:12) Foundational vs Problem-Solving Research, Antipatterns in TCS
(16:30) Theory and Empirics in Deep Learning
(18:30) What is an Empirical Theory of Deep Learning
(28:21) Deep Double Descent
(40:00) Inductive Biases in SGD, epoch-wise double descent
(45:25) Inductive Biases Stick Around
(47:12) Deep Bootstrap
(59:40) Distributional Generalization - Paper Rejections
(1:02:30) Classical Generalization and Distributional Generalization
(1:16:46) Future Work: Studying Structure in Data
(1:20:51) The Tweets^TM
(1:37:00) Outro

Episode Links:
* Preetum's Homepage
* Preetum's PhD Thesis

Jun 16, 2022 • 48min
Max Woolf: Data Science at BuzzFeed and AI Content Generation
In episode 30 of The Gradient Podcast, Daniel Bashir speaks to Max Woolf.

Max Woolf (@minimaxir) is currently a Data Scientist at BuzzFeed in San Francisco. His work for BuzzFeed includes using StyleGAN to create AI-generated fake boyfriends and AI-generated art quizzes. In his free time, Max creates open source Python and R software on his GitHub. More recently, he has been developing tooling for AI content generation, such as aitextgen for easy AI text generation. Max's projects are funded by his Patreon; if you have found anything on his website helpful, please consider contributing!

Sections:
(00:00) Intro
(01:20) Max's Intro to Data Science and AI
(07:00) Software Engineering in Data Science, Max's Perspectives
(09:00) Max's Work at BuzzFeed
(23:10) Scaling, Inference, Large Models
(27:00) AI Content Generation
(30:45) Discourse About GPT-3
(34:30) AI Inventors
(38:35) Fun Projects and One-Offs: AI-generated Pokémon
(43:35) GPT-3-generated Discussion Topics
(46:30) Advice for Data Scientists
(48:10) BuzzFeed is Hiring :)
(48:20) Outro

Episode Links:
* Max's Homepage
* Real-World Data Science

Jun 10, 2022 • 1h 15min
Rosanne Liu: Paths in AI Research and ML Collective
In episode 29 of The Gradient Podcast, we chat with Rosanne Liu.

Rosanne is a research scientist at Google Brain, and co-founder and executive director of ML Collective, a nonprofit organization for open collaboration and accessible mentorship. Before that, she was a founding member of Uber AI. Outside of research, she supports underrepresented communities and has organized symposiums, workshops, and the weekly reading group "Deep Learning: Classics and Trends" since 2018. She is currently thinking deeply about how to democratize AI research even further and improve the diversity and fairness of the field, while working on multiple fronts of machine learning research, including understanding training dynamics and rethinking model capacity and scaling.

Outline:
* (01:30) How Rosanne got into AI and research
* (06:45) AI research: the unreasonably narrow path and how not to be miserable
* (16:30) ML Collective Overview
* (21:45) Deep Learning: Classics and Trends Reading Group
* (26:25) More details about ML Collective
* (39:35) ICLR 2022 Diversity, Equity & Inclusion
* (48:00) Narrowness vs Variety in research
* (57:20) Favorite Papers
* (58:50) Measuring the Intrinsic Dimension of Objective Landscapes
* (01:01:40) Natural Adversarial Objects
* (01:03:00) Interests outside of AI - Writing
* (01:08:05) Interests outside of AI - Narrating Travels with Charley
* (01:13:22) Outro

Jun 2, 2022 • 53min
Ben Green: "Tech for Social Good" Needs to Do More
In episode 28 of The Gradient Podcast, Daniel Bashir speaks to Ben Green, a postdoctoral scholar in the Michigan Society of Fellows and Assistant Professor at the Gerald R. Ford School of Public Policy. Ben's work focuses on the social and political impacts of government algorithms.

Sections:
(00:00) Intro
(02:00) Getting Started
(06:15) Soul Searching
(11:55) Decentering Algorithms
(19:50) The Future of the City
(27:25) Ethical Lip Service
(32:30) Ethics Research and Industry Incentives
(36:30) Broadening our Vision of Tech Ethics
(47:35) What Types of Research are Valued?
(52:40) Outro

Episode Links:
* Ben's Homepage
* Algorithmic Realism
* Special Issue of the Journal of Social Computing