The Gradient: Perspectives on AI

Daniel Bashir
Jun 2, 2022 • 53min

Ben Green: "Tech for Social Good" Needs to Do More

In episode 28 of The Gradient Podcast, Daniel Bashir speaks to Ben Green, postdoctoral scholar in the Michigan Society of Fellows and Assistant Professor at the Gerald R. Ford School of Public Policy. Ben’s work focuses on the social and political impacts of government algorithms.

Sections:
* (00:00) Intro
* (02:00) Getting Started
* (06:15) Soul Searching
* (11:55) Decentering Algorithms
* (19:50) The Future of the City
* (27:25) Ethical Lip Service
* (32:30) Ethics Research and Industry Incentives
* (36:30) Broadening our Vision of Tech Ethics
* (47:35) What Types of Research are Valued?
* (52:40) Outro

Episode Links:
* Ben’s Homepage
* Algorithmic Realism
* Special Issue of the Journal of Social Computing
May 26, 2022 • 1h 23min

Max Braun: Teaching Robots to Help People in their Everyday Lives

In episode 27 of The Gradient Podcast, Andrey Kurenkov speaks to Max Braun, who leads the AI and robotics software engineering team at Everyday Robots, a moonshot to create robots that can learn to help people in their everyday lives. Previously, he built frontier technology products as an entrepreneur and later at Google and X. Max enjoys exploring the intersection of art, technology, and philosophy as a writer and designer.

Outline:
* (00:00) Intro
* (01:00) Start in AI
* (05:45) Humanoid Research in Osaka
* (08:45) Joining Google X
* (12:15) Visual Search and Google Glass
* (15:58) Academia-Industry Connection
* (18:45) Overview of Robotics Vision
* (26:00) Machine Learning for Robotics
* (32:00) Robot Platform
* (38:00) Development Process and History
* (43:35) QT-Opt
* (49:05) Imitation Learning
* (55:00) Simulation Platform
* (59:45) Sim2Real
* (1:07:00) SayCan
* (1:14:30) Current Objectives
* (1:17:00) Other Projects
* (1:21:40) Outro

Episode Links:
* Max Braun’s Website
* Everyday Robots
* Simulating Artificial Muscles for Controlling a Robotic Arm with Fluctuation
* Introducing the Everyday Robot Project
* Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation (QT-Opt)
* Alphabet is putting its prototype robots to work cleaning up around Google’s offices
* Everyday robots are (slowly) leaving the lab
* Can Robots Follow Instructions for New Tasks?
* Efficiently Initializing Reinforcement Learning With Prior Policies
* Combining RL + IL at Scale
* Shortening the Sim to Real Gap
* Action-Image: Teaching Grasping in Sim
* SayCan
* I Made an AI Read Wittgenstein, Then Told It to Play Philosopher
May 19, 2022 • 1h 16min

Yejin Choi: Teaching Machines Common Sense and Morality

In episode 26 of The Gradient Podcast, Daniel Bashir speaks to Yejin Choi, professor of Computer Science at the University of Washington and senior research manager at the Allen Institute for Artificial Intelligence.

Sections:
* (00:00) Intro
* (01:42) Getting Started in the Winter
* (09:17) Has NLP lost its way?
* (12:57) The Mosaic Project, Commonsense Intelligence
* (18:20) A Priori Intuitions and Common Sense in Machines
* (21:35) Abductive Reasoning
* (24:49) Benchmarking Common Sense
* (33:00) DeLorean and COMET: Algorithms for Commonsense Reasoning
* (43:30) Positive and Negative Uses of Commonsense Models
* (49:40) Moral Reasoning
* (57:00) Descriptive Morality, Meta-Ethical Concerns
* (1:04:30) Potential Misuse
* (1:12:15) Future Work
* (1:16:23) Outro

Episode Links:
* Yejin’s Homepage
* The Curious Case of Commonsense Intelligence in Daedalus
* Common Sense Comes Closer to Computers in Quanta
* Can Computers Learn Common Sense? in The New Yorker
May 12, 2022 • 52min

David Chalmers on AI and Consciousness

In episode 25 of The Gradient Podcast, Daniel Bashir speaks to David Chalmers, professor of Philosophy and Neural Science at New York University and co-director of NYU’s Center for Mind, Brain, and Consciousness.

Sections:
* (00:00) Intro
* (00:42) “Today’s neural networks may be slightly conscious”
* (03:55) Openness to Machine Consciousness
* (09:37) Integrated Information Theory
* (18:41) Epistemic Gaps, Verbal Reports
* (25:52) Vision Models and Consciousness
* (33:37) Reasoning about Consciousness
* (38:20) Illusionism
* (41:30) Best Approaches to the Hard Problem
* (44:21) Panpsychism
* (46:35) Outro

Episode Links:
* Chalmers’ Homepage
* Facing Up to the Problem of Consciousness (1995)
* Reality+: Virtual Worlds and the Problems of Philosophy
* Amanda Askell on AI Consciousness
Apr 28, 2022 • 1h 6min

Greg Yang on Communicating Research, Tensor Programs, and µTransfer

In episode 24 of The Gradient Podcast, Daniel Bashir talks to Greg Yang, senior researcher at Microsoft Research. Greg Yang’s Tensor Programs framework recently received attention for its role in the µTransfer paradigm for tuning the hyperparameters of large neural networks.

Sections:
* (00:00) Intro
* (01:50) Start in AI / Research
* (05:55) Fear of Math in ML
* (08:00) Presentation of Research
* (17:35) Path to MSR
* (21:20) Origin of Tensor Programs
* (26:05) Refining TP’s Presentation
* (39:55) The Sea of Garbage (Initializations) and the Oasis
* (47:44) Scaling Up Further
* (55:53) On Theory and Practice in Deep Learning
* (1:05:28) Outro

Episode Links:
* Greg’s Homepage
* Greg’s Twitter
* µP GitHub
* Visual Intro to Gaussian Processes (Distill)
Mar 24, 2022 • 1h 3min

Nick Walton on AI Dungeon and the Future of AI in Games

In the 23rd interview of The Gradient Podcast, we talk to Nick Walton, CEO and co-founder of Latitude, whose goal is to make AI a tool of freedom and creativity for everyone and which is currently developing AI Dungeon and Voyage.

Outline:
* (00:00) Intro
* (01:38) How Nick got into AI / research
* (03:50) Origin of AI Dungeon
* (08:15) What is a Dungeon Master
* (12:15) Brief history of AI Dungeon
* (17:30) AI in videogames, past and future
* (23:35) Early days of AI Dungeon
* (29:45) AI Dungeon as a Creative Tool
* (33:50) Technical Aspects of AI Dungeon
* (39:15) Voyage
* (48:27) Visuals in AI Dungeon
* (50:45) How to Control AI in Games
* (55:38) Future of AI in Games
* (57:50) Funny stories
* (59:45) Interests / Hobbies
* (1:01:45) Outro
Feb 3, 2022 • 1h 47min

Connor Leahy on EleutherAI, Replicating GPT-2/GPT-3, AI Risk and Alignment

In episode 22 of The Gradient Podcast, we talk to Connor Leahy, an AI researcher focused on AI alignment and a co-founder of EleutherAI.

Connor works on understanding large ML models and aligning them to human values. EleutherAI is a decentralized grassroots collective of volunteer researchers, engineers, and developers focused on AI alignment, scaling, and open-source AI research. The organization’s flagship project is the GPT-Neo family of models, designed to match the GPT-3 models developed by OpenAI.

Sections:
* (00:00:00) Intro
* (00:01:20) Start in AI
* (00:08:00) Being excited about GPT-2
* (00:18:00) Discovering AI safety and alignment
* (00:21:10) Replicating GPT-2
* (00:27:30) Deciding whether to release GPT-2 weights
* (00:36:15) Life after GPT-2
* (00:40:05) GPT-3 and the start of EleutherAI
* (00:44:40) Early days of EleutherAI
* (00:47:30) Creating the Pile, GPT-Neo, Hacker Culture
* (00:55:10) Growth of EleutherAI, Cultivating Community
* (01:02:22) Why release a large language model
* (01:08:50) AI Risk and Alignment
* (01:21:30) Worrying (or not) about Superhuman AI
* (01:25:20) AI alignment and releasing powerful models
* (01:32:08) AI risk and research norms
* (01:37:10) Work on GPT-3 replication, GPT-NeoX
* (01:38:48) Joining EleutherAI
* (01:43:28) Personal interests / hobbies
* (01:47:20) Outro

Links to things discussed:
* Replicating GPT2-1.5B, GPT2, Counting Consciousness and the Curious Hacker
* The Hacker Learns to Trust
* The Pile
* GPT-Neo
* GPT-J
* Why Release a Large Language Model?
* What A Long, Strange Trip It’s Been: EleutherAI One Year Retrospective
* GPT-NeoX
Jan 27, 2022 • 51min

Percy Liang on Machine Learning Robustness, Foundation Models, and Reproducibility

In interview 21 of The Gradient Podcast, we talk to Percy Liang, an Associate Professor of Computer Science at Stanford University and the director of the Center for Research on Foundation Models.

Percy Liang’s research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.

Sections:
* (00:00) Intro
* (01:21) Start in AI
* (06:52) Interest in Language
* (10:17) Start of PhD
* (12:22) Semantic Parsing
* (17:49) Focus on ML robustness
* (22:30) Foundation Models, model robustness
* (28:55) Foundation Model bias
* (34:48) Foundation Model research by academia
* (37:13) Current research interests
* (39:40) Surprising robustness results
* (44:24) Reproducibility and CodaLab
* (50:17) Outro

Papers / Topics discussed:
* On the Opportunities and Risks of Foundation Models
* Reflections on Foundation Models
* Removing spurious features can hurt accuracy and affect groups disproportionately
* Selective classification can magnify disparities across groups
* Just train twice: improving group robustness without training group information
* LILA: language-informed latent actions
* CodaLab
Jan 8, 2022 • 1h 33min

Eric Jang on Robots Learning at Google and Generalization via Language

In episode 20 of The Gradient Podcast, we talk to Eric Jang, a research scientist on the Robotics team at Google.

Eric’s research focuses on answering whether big data and small algorithms can yield unprecedented capabilities in robotics, just as they did in the computer vision, translation, and speech revolutions before it. Specifically, he focuses on robotic manipulation and self-supervised robotic learning.

Sections:
* (00:00) Intro
* (00:50) Start in AI / Research
* (03:58) Joining Google Robotics
* (10:08) End-to-End Learning of Semantic Grasping
* (19:11) Off-Policy RL for Robotic Grasping
* (29:33) Grasp2Vec
* (40:50) Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards
* (50:12) BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning
* (59:41) Just Ask for Generalization
* (01:09:02) Data for Robotics
* (01:22:10) To Understand Language is to Understand Generalization
* (01:32:38) Outro

Papers discussed:
* Grasp2Vec: Learning Object Representations from Self-Supervised Grasping
* End-to-End Learning of Semantic Grasping
* Deep reinforcement learning for vision-based robotic grasping: A simulated comparative evaluation of off-policy methods
* Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards
* BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning
* Just Ask for Generalization
* To Understand Language is to Understand Generalization
* Robots Must Be Ephemeralized
Dec 9, 2021 • 1h 34min

Rishi Bommasani on Foundation Models

In episode 19 of The Gradient Podcast, we talk to Rishi Bommasani, a second-year Ph.D. student in the CS Department at Stanford focused on foundation models, advised by Percy Liang and Dan Jurafsky. His research focuses on understanding AI systems and their social impact, as well as using NLP to further scientific inquiry. Over the past year, he helped build and organize the Stanford Center for Research on Foundation Models (CRFM).

Sections:
* (00:00:00) Intro
* (00:01:05) How did you get into AI?
* (00:09:55) Towards Understanding Position Embeddings
* (00:14:23) Long-Distance Dependencies don’t have to be Long
* (00:18:55) Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings
* (00:30:25) Masters Thesis
* (00:34:05) Start of PhD and work on foundation models
* (00:42:14) Why were people interested in foundation models
* (00:46:45) Formation of CRFM
* (00:51:25) Writing the report on foundation models
* (00:56:33) Challenges in writing the report
* (01:05:45) Response to reception
* (01:15:35) Goals of CRFM
* (01:25:43) Current research focus
* (01:30:35) Interests outside of research
* (01:33:10) Outro

Papers discussed:
* Towards Understanding Position Embeddings
* Long-Distance Dependencies don’t have to be Long: Simplifying through Provably (Approximately) Optimal Permutations
* Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings
* Generalized Optimal Linear Orders
* On the Opportunities and Risks of Foundation Models
* Reflections on Foundation Models
