
The Gradient: Perspectives on AI

Latest episodes

Jan 19, 2023 • 2h 29min

Linus Lee: At the Boundary of Machine and Mind

In episode 56 of The Gradient Podcast, Daniel Bashir speaks to Linus Lee. Linus is an independent researcher interested in the future of knowledge representation and creative work aided by machine understanding of language. He builds interfaces and knowledge tools that expand the domain of thoughts we can think and qualia we can feel. Linus has been writing online since 2014 (his blog boasts half a million words) and has built well over 100 side projects. He has also spent time as a software engineer at Replit, Hack Club, and Spensa, and was most recently a Researcher in Residence at Betaworks in New York.

Have suggestions for future podcast guests (or other feedback)? Let us know here!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:00) Linus’s background and interests, vision-language models
* (07:45) Embodiment and limits for text-image
* (11:35) Ways of experiencing the world
* (16:55) Origins of the handle “thesephist”, languages
* (25:00) Math notation, reading papers
* (29:20) Operations on ideas
* (32:45) Overview of Linus’s research and current work
* (41:30) The Oak and Ink languages, programming languages
* (49:30) Personal search engines: Monocle and Reverie, what you can learn from personal data
* (55:55) Web browsers as mediums for thought
* (1:01:30) This AI Does Not Exist
* (1:03:05) Knowledge representation and notational intelligence
  * Notation vs language
* (1:07:00) What notation can/should be
* (1:16:00) Inventing better notations and expanding human intelligence
* (1:23:30) Better interfaces between humans and LMs to provide precise control, inefficiencies of prompt engineering
* (1:33:00) Inexpressible experiences
* (1:35:42) Linus’s current work using latent space models
* (1:40:00) Ideas as things you can hold
* (1:44:55) Neural nets and cognitive computing
* (1:49:30) Relation to the Hardware Lottery and AI accelerators
* (1:53:00) Taylor Swift Appreciation Session, mastery and virtuosity
* (1:59:30) Mastery/virtuosity and interfaces / learning curves
* (2:03:30) Linus’s stories, the work of fiction
* (2:09:00) Linus’s thoughts on writing
* (2:14:20) A piece of writing should be focused
* (2:16:15) On proving yourself
* (2:28:00) Outro

Links:
* Linus’s Twitter and website

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jan 12, 2023 • 1h 41min

Suresh Venkatasubramanian: An AI Bill of Rights

In episode 55 of The Gradient Podcast, Daniel Bashir speaks to Professor Suresh Venkatasubramanian. Professor Venkatasubramanian is a Professor of Computer Science and Data Science at Brown University, where his research focuses on algorithmic fairness and the impact of automated decision-making systems in society. He recently served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy, where he co-authored the Blueprint for an AI Bill of Rights.

Outline:
* (00:00) Intro
* (02:25) Suresh’s journey into AI and policymaking
* (08:00) The complex graph of designing and deploying “fair” AI systems
* (09:50) The Algorithmic Lens
* (14:55) “Getting people into a room” isn’t enough
* (16:30) Failures of incorporation
* (21:10) Trans-disciplinary vs interdisciplinary, the limiting nature of “my lane” / “your lane” thinking, going beyond existing scientific and philosophical ideas
* (24:50) The trolley problem is annoying, its usefulness and limitations
* (25:30) Breaking the frame of a discussion, self-driving doesn’t fit into the parameters of the trolley problem
* (28:00) Acknowledging frames and their limitations
* (29:30) Social science’s inclination to critique, flaws and benefits of solutionism
* (30:30) Computer security as a model for thinking about algorithmic protections, the risk of failure in policy
* (33:20) Suresh’s work on recourse
* (38:00) Kantian autonomy and the value of recourse, non-Western takes and issues with individual benefit/harm as the most morally salient question
* (41:00) Community as a valuable entity and its implications for algorithmic governance, surveillance systems
* (43:50) How Suresh got involved in policymaking / the OSTP
* (46:50) Gathering insights for the AI Bill of Rights Blueprint
* (51:00) One thing the Bill did miss… Struggles with balancing specificity and vagueness in the Bill
* (54:20) Should “automated system” be defined in legislation? Suresh’s approach and issues with the EU AI Act
* (57:45) The danger of definitions, overlap with chess world controversies
* (59:10) Constructive vagueness in law, partially theorized agreements
* (1:02:15) Digital privacy and privacy fundamentalism, focus on breach of individual autonomy as the only harm vector
* (1:07:40) GDPR traps, the “legacy problem” with large companies and post-hoc regulation
* (1:09:30) Considerations for legislating explainability
* (1:12:10) Criticisms of the Blueprint and Suresh’s responses
* (1:25:55) The global picture, AI legislation outside the US, legislation as experiment
* (1:32:00) Tensions in entering policy as an academic and technologist
* (1:35:00) Technologists need to learn additional skills to impact policy
* (1:38:15) Suresh’s advice for technologists interested in public policy
* (1:41:20) Outro

Links:
* Suresh is on Mastodon @geomblog@mastodon.social (and also Twitter)
* Suresh’s blog
* Blueprint for an AI Bill of Rights
* Papers:
  * Fairness and abstraction in sociotechnical systems
  * A comparative study of fairness-enhancing interventions in machine learning
  * The Philosophical Basis of Algorithmic Recourse
  * Runaway Feedback Loops in Predictive Policing
Jan 5, 2023 • 1h 15min

Pete Florence: Dense Visual Representations, NeRFs, and LLMs for Robotics

In episode 54 of The Gradient Podcast, Andrey Kurenkov speaks with Pete Florence.

Note: this episode was recorded two months ago. Andrey should be getting back to putting out some episodes next year.

Pete Florence is a Research Scientist on the Robotics at Google team within the Brain Team at Google Research. His research focuses on topics in robotics, computer vision, and natural language, including 3D learning, self-supervised learning, and policy learning in robotics. Before Google, he finished his PhD in Computer Science at MIT with Russ Tedrake.

Outline:
* (00:00:00) Intro
* (00:01:16) Start in AI
* (00:04:15) PhD Work with Quadcopters
* (00:08:40) Dense Visual Representations
* (00:22:00) NeRFs for Robotics
* (00:39:00) Language Models for Robotics
* (00:57:00) Talking to Robots in Real Time
* (01:07:00) Limitations
* (01:14:00) Outro

Papers discussed:
* Aggressive quadrotor flight through cluttered environments using mixed integer programming
* Integrated perception and control at high speed: Evaluating collision avoidance maneuvers without maps
* High-speed autonomous obstacle avoidance with pushbroom stereo
* Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation (Best Paper Award, CoRL 2018)
* Self-Supervised Correspondence in Visuomotor Policy Learning (Best Paper Award, RA-L 2020)
* iNeRF: Inverting Neural Radiance Fields for Pose Estimation
* NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields
* Reinforcement Learning with Neural Radiance Fields
* Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
* Inner Monologue: Embodied Reasoning through Planning with Language Models
* Code as Policies: Language Model Programs for Embodied Control
Dec 15, 2022 • 55min

Melanie Mitchell: Abstraction and Analogy in AI

In episode 53 of The Gradient Podcast, Daniel Bashir speaks to Professor Melanie Mitchell. Professor Mitchell is the Davis Professor at the Santa Fe Institute. Her research focuses on conceptual abstraction, analogy-making, and visual recognition in AI systems. She is the author or editor of six books, and her work spans the fields of AI, cognitive science, and complex systems. Her latest book is Artificial Intelligence: A Guide for Thinking Humans.

Outline:
* (00:00) Intro
* (02:20) Melanie’s intro to AI
* (04:35) Melanie’s intellectual influences, AI debates over time
* (10:50) We don’t have the right metrics for empirical study in AI
* (15:00) Why AI is Harder than We Think: the four fallacies
* (20:50) Difficulties in understanding what’s difficult for machines vs humans
* (23:30) Roles for humanlike and non-humanlike intelligence
* (27:25) Whether “intelligence” is a useful word
* (31:55) Melanie’s thoughts on modern deep learning advances, brittleness
* (35:35) Abstraction, analogies, and their role in AI
* (38:40) Concepts as analogical and what that means for cognition
* (41:25) Where does analogy bottom out
* (44:50) Cognitive science approaches to concepts
* (45:20) Understanding how to form and use concepts is one of the key problems in AI
* (46:10) Approaching abstraction and analogy, Melanie’s work / the Copycat architecture
* (49:50) Probabilistic program induction as a promising approach to intelligence
* (52:25) Melanie’s advice for aspiring AI researchers
* (54:40) Outro

Links:
* Melanie’s homepage and Twitter
* Papers on difficulties in AI and hype cycles:
  * Why AI is Harder than We Think
  * The Debate Over Understanding in AI’s Large Language Models
  * What Does It Mean for AI to Understand?
* Papers on abstraction, analogies, and reasoning:
  * Abstraction and Analogy-Making in Artificial Intelligence
  * Evaluating understanding on conceptual abstraction benchmarks
Dec 8, 2022 • 1h 12min

Marc Bellemare: Distributional Reinforcement Learning

In episode 52 of The Gradient Podcast, Daniel Bashir speaks to Professor Marc Bellemare. Professor Bellemare leads the reinforcement learning efforts at Google Brain Montréal and is a core industry member at Mila, where he also holds the Canada CIFAR AI Chair. His PhD work, completed at the University of Alberta, proposed the use of Atari 2600 video games to benchmark progress in reinforcement learning (RL). He was a research scientist at DeepMind from 2013 to 2017, and his Arcade Learning Environment was very influential in DeepMind’s early RL research and remains one of the most widely used RL benchmarks today. More recently, he collaborated with Loon to deploy deep reinforcement learning to navigate stratospheric balloons. His book on distributional reinforcement learning, published by MIT Press, will be available in Spring 2023.

Outline:
* (00:00) Intro
* (03:10) Marc’s intro to AI and RL
* (07:00) Cross-pollination of deep learning research and RL at McGill and UDM
* (09:50) PhD work at U Alberta, continual learning, origins of the Arcade Learning Environment (ALE)
* (14:40) Challenges in the ALE, how the ALE drove RL research
* (23:10) Marc’s thoughts on the Avalon benchmark and what makes a good RL benchmark
* (28:00) Opinions on “Reward is Enough” and whether RL gets us to AGI
* (32:10) How Marc thinks about priors in learning, “reincarnating RL”
* (36:00) Distributional reinforcement learning and the problem of distribution estimation
* (43:00) GFlowNets and distributional RL
* (45:05) Contraction in RL and distributional RL, theory-practice gaps
* (52:45) Representation learning for RL
* (55:50) Structure of the value function space
* (1:00:00) Connections to open-endedness / evolutionary algorithms / curiosity
* (1:03:30) RL for stratospheric balloon navigation with Loon
* (1:07:30) New ideas for applying RL in the real world
* (1:10:15) Marc’s advice for young researchers
* (1:12:37) Outro

Links:
* Professor Bellemare’s Homepage
* Distributional Reinforcement Learning book
* Papers:
  * The Arcade Learning Environment: An Evaluation Platform for General Agents
  * A Distributional Perspective on Reinforcement Learning
  * Distributional Reinforcement Learning with Quantile Regression
  * Distributional Reinforcement Learning with Linear Function Approximation
  * Autonomous navigation of stratospheric balloons using reinforcement learning
  * A Geometric Perspective on Optimal Representations for Reinforcement Learning
  * The Value Function Polytope in Reinforcement Learning
Dec 1, 2022 • 1h 29min

François Chollet: Keras and Measures of Intelligence

In episode 51 of The Gradient Podcast, Daniel Bashir speaks to François Chollet. François is a Senior Staff Software Engineer at Google and creator of the Keras deep learning library, which has enabled many people (including me) to get their hands dirty with the world of deep learning. François is also the author of the book “Deep Learning with Python.” He is interested in, among other topics, understanding the nature of abstraction, developing algorithms capable of autonomous abstraction, and democratizing the development and deployment of AI technology.

Outline:
* (00:00) Intro + Daniel has far too much fun pronouncing “François Chollet”
* (02:00) How François got into AI
* (08:00) Keras and user experience, library as product, progressive disclosure of complexity
* (18:20) François’ comments on the state of ML frameworks and what different frameworks are useful for
* (23:00) On the Measure of Intelligence: historical perspectives
* (28:00) Intelligence vs cognition, overlaps
* (32:30) How core is Core Knowledge?
* (39:15) Cognition priors, metalearning priors
* (43:10) Defining intelligence
* (49:30) François’ comments on modern deep learning systems
* (55:50) Program synthesis as a path to intelligence
* (1:02:30) Difficulties in program synthesis
* (1:09:25) François’ concerns about current AI
* (1:14:30) The need for regulation
* (1:16:40) Thoughts on longtermism
* (1:23:30) Where we can expect exponential progress in AI
* (1:26:35) François’ advice on becoming a good engineer
* (1:29:03) Outro

Links:
* François’ personal page
* On the Measure of Intelligence
* Keras
Nov 21, 2022 • 1h 14min

Yoshua Bengio: The Past, Present, and Future of Deep Learning

Happy episode 50! This week’s episode is being released on Monday to avoid Thanksgiving.

In episode 50 of The Gradient Podcast, Daniel Bashir speaks to Professor Yoshua Bengio. Professor Bengio is a Full Professor at the Université de Montréal as well as Founder and Scientific Director of the MILA-Quebec AI Institute and the IVADO institute. Best known for his work in pioneering deep learning, Bengio was one of three awardees of the 2018 A.M. Turing Award along with Geoffrey Hinton and Yann LeCun. He is also an awardee of the prestigious Killam Prize and, as of this year, the computer scientist with the highest h-index in the world.

Outline:
* (00:00) Intro
* (02:20) Journey into deep learning, PDP and Hinton
* (06:45) “Inspired by biology”
* (08:30) “Gradient-Based Learning Applied to Document Recognition” and working with Yann LeCun
* (10:00) What Bengio learned from LeCun (and Larry Jackel) about being a research advisor
* (13:00) “Learning Long-Term Dependencies with Gradient Descent is Difficult,” why people don’t understand this paper well enough
* (18:15) Bengio’s work on word embeddings and the curse of dimensionality, “A Neural Probabilistic Language Model”
* (23:00) Adding more structure / inductive biases to LMs
* (24:00) The rise of deep learning and Bengio’s experience, “you have to be careful with inductive biases”
* (31:30) Bengio’s “Bayesian posture” in response to recent developments
* (40:00) Higher-level cognition, Global Workspace Theory
* (45:00) Causality, actions as mediating distribution change
* (49:30) GFlowNets and RL
* (53:30) GFlowNets and actions that are not well-defined, combining with System II and modular, abstract ideas
* (56:50) GFlowNets and evolutionary methods
* (1:00:45) Bengio on Cartesian dualism
* (1:09:30) “When you are famous, it is hard to work on hard problems” (Richard Hamming) and Bengio’s response
* (1:11:10) Family background, art and its role in Bengio’s life
* (1:14:20) Outro

Links:
* Professor Bengio’s Homepage
* Papers:
  * Gradient-based learning applied to document recognition
  * Learning Long-Term Dependencies with Gradient Descent is Difficult
  * The Consciousness Prior
  * Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation
Nov 17, 2022 • 47min

Kanjun Qiu and Josh Albrecht: Generally Intelligent

In episode 49 of The Gradient Podcast, Daniel Bashir speaks to Kanjun Qiu and Josh Albrecht. Kanjun and Josh are CEO and CTO of Generally Intelligent, an AI startup aiming to develop general-purpose agents with human-like intelligence that can be safely deployed in the real world. Kanjun and Josh have played these roles together in the past as CEO and CTO of AI recruiting startup Sourceress. Kanjun is also involved with building the SF Neighborhood, and together with Josh invests in early-stage founders at Outset Capital.

Outline:
* (00:00) Intro
* (02:00) Kanjun’s and Josh’s intros to AI
* (06:45) How Kanjun and Josh met and started working together
* (08:40) Sourceress and AI in hiring, looking for unusual candidates
* (11:30) Generally Intelligent: origins and motivations
* (14:55) How Kanjun and Josh think about understanding the fundamentals of intelligence
* (17:20) AGI companies and long-term goals
* (19:20) How Kanjun and Josh think about intelligence + Generally Intelligent’s approach-agnosticism
* (22:30) Skill-acquisition efficiency
* (25:18) The Avalon environment/benchmark
* (27:40) Tasks with shared substrate
* (29:00) Blending of different approaches, baseline tuning
* (31:15) Approach to safety
* (33:33) Issues with interpretability + ML academic practices, ablations
* (36:30) Lessons about working with people, company culture
* (40:00) Human focus and diversity in companies, tech environment
* (44:10) Advice for potential (AI) founders
* (47:05) Outro

Links:
* Generally Intelligent
* Avalon: A Benchmark for RL Generalization
* Kanjun’s homepage
* Josh’s homepage
Nov 10, 2022 • 1h 19min

Nathan Benaich: The State of AI Report

Want to write with us? Send a pitch using this form :)

In episode 48 of The Gradient Podcast, Daniel Bashir speaks to Nathan Benaich. Nathan is Founder and General Partner at Air Street Capital, a venture capital (VC) firm focused on investing in AI-first technology and life sciences companies. Nathan runs a number of communities focused on AI, including the Research and Applied AI Summit, and leads Spinout.fyi to improve the creation of university spinouts. Together with investor Ian Hogarth, Nathan co-authors the State of AI Report.

Outline:
* (00:00) Intro
* (02:40) Nathan’s interests in AI, life sciences, investing
* (04:10) Biotech and tech-bio companies
* (08:00) Why Nathan went into VC
* (10:15) Air Street Capital’s focus, investing in AI at an early stage
* (14:30) Why Nathan believes in specialism over generalism in AI, balancing consumer-focused ML with serious technical work
* (17:30) The European startup ecosystem
* (19:30) Spinouts and inventions born in academia
* (23:35) Spinout.fyi and issues with the European model
* (27:50) In the UK, only 4% of private AI companies are spinouts
* (30:00) Solutions
* (32:55) Origins of the State of AI Report
* (35:00) Looking back on Nathan’s 2021 predictions: Anthropic and JAX
* (39:00) AI semiconductors and the difficult reality
* (42:45) Nathan’s perspectives on AI safety/alignment
* (46:00) Long-termism and debates, safety research as an input into improving capabilities
* (49:50) Decentralization and the commercialization of open-source AI (Stability AI, Eleuther AI, etc.)
* (53:00) Second-order applications of diffusion models: chemistry, small molecule design, genome editors
* (59:00) Semiconductor restrictions and geopolitics
* (1:03:45) This year’s State of AI predictions
* (1:04:30) Trouble in semiconductor startup land
* (1:08:40) Predictions for AGI startups
* (1:14:20) How regulation of AGI startups might look
* (1:16:40) Nathan’s advice for founders, investors, and researchers
* (1:19:00) Outro

Links:
* State of AI Report
* Air Street Capital
* Spinout.fyi
* Rewriting the European spinout playbook
* Other sources mentioned:
  * Bridging the Gap: the case for an Incompletely Theorized Agreement on AI policy
  * Choking Off China’s Access to the Future of AI
  * China's New AI Governance Initiatives Shouldn't Be Ignored
Nov 3, 2022 • 1h 7min

Matt Sheehan: China's AI Strategy and Governance

In episode 47 of The Gradient Podcast, Daniel Bashir speaks to Matt Sheehan. Matt is a fellow at the Carnegie Endowment for International Peace, where he researches global technology with a focus on China. His writing and research explore China’s AI ecosystem, the future of China’s technology policy, and technology’s role in China’s political economy. Matt has also written for Foreign Affairs and The Huffington Post, among other venues.

Outline:
* (00:00) Intro
* (02:28) Matt’s path to analyzing China’s AI governance
* (06:50) Matt’s experience understanding daily life in China and developing a bottom-up perspective
* (09:40) The development of government constraints on technology/AI in the US and China
* (12:40) Matt’s take on China’s priorities and motivations
* (17:00) How recent history influences China’s technology ambitions
* (17:30) Matt gives an overview of the Century of Humiliation
* (22:07) Adversarial perceptions, Xi Jinping’s brashness and its effect on discourse about international relations, how this intersects with AI
* (24:40) Self-reliance and semiconductors. Was the recent chip ban the right move?
* (36:15) Matt’s question: could foundation models be trained on trailing-edge chips if necessary? Limitations
* (38:30) Silicon Valley and China, The Transpacific Experiment and stories
* (46:17) 躺平 (“lying flat”) and how trends among youth in China interact with tech development, parallel trends in the US, work culture
* (51:05) China’s recent AI governance initiatives
* (56:25) Squaring China’s AI ethics stance with its use of AI
* (59:53) The US can learn from both Chinese and European regulators
* (1:02:03) How technologists should think about geopolitics and national tensions
* (1:05:43) Outro

Links:
* Matt’s Twitter
* China’s influences/ambitions:
  * Beijing’s Industrial Internet Ambitions
  * Beijing’s Tech Ambitions: What Exactly Does It Want?
* US-China exchange and US responses:
  * Who benefits from American AI research in China?
  * Two New Tech Bills Could Transform US Innovation
  * Fear of Chinese Competition Won’t Preserve US Tech Leadership
* China’s tech standards, government initiatives and regulation in AI:
  * How US businesses view China’s growing influence in tech standards
  * Three takeaways from China’s new standards strategy
  * China’s new AI governance initiatives shouldn’t be ignored
* Semiconductors:
  * Biden’s Unprecedented Semiconductor Bet (a new piece from Matt!)
  * Choking Off China’s Access to the Future of AI
