
The Gradient: Perspectives on AI

Latest episodes

Aug 4, 2023 • 2h 5min

Raphaël Millière: The Vector Grounding Problem and Self-Consciousness

In episode 84 of The Gradient Podcast, Daniel Bashir speaks to Professor Raphaël Millière.

Professor Millière is a Lecturer (Assistant Professor) in the Philosophy of Artificial Intelligence at Macquarie University in Sydney, Australia. Previously, he was the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in Columbia University’s Center for Science and Society, and completed his DPhil in philosophy at the University of Oxford, where he focused on self-consciousness.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:20) Prof. Millière’s background
* (08:07) AI + philosophy questions and the human side / empiricism
* (18:38) Putting aside metaphysical issues
* (20:28) Prof. Millière’s work on self-consciousness, does consciousness constitutively involve self-consciousness?
* (32:05) Relationship to recent pronouncements about AI sentience
* (41:54) Chatbots’ self-presentation as having a “self”
* (51:05) Intro to grounding and related concepts
* (1:00:06) The different types of grounding
* (1:08:48) Lexical representations and things in the world, distributional hypothesis, concepts in LLMs
* (1:21:40) Representational content and overcoming the vector grounding problem
* (1:32:01) Causal-informational relations and teleology
* (1:43:45) Levels of grounding, extralinguistic aspects of meaning
* (1:52:12) Future problems and ongoing projects
* (2:04:05) Outro

Links:
* Professor Millière’s homepage and Twitter
* Research
* Are There Degrees of Self-Consciousness?
* The Varieties of Selflessness
* Selfless Memories
* The Vector Grounding Problem

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jul 27, 2023 • 2h 34min

Peli Grietzer: A Mathematized Philosophy of Literature

In episode 83 of The Gradient Podcast, Daniel Bashir speaks to Peli Grietzer.

Peli is a scholar whose work borrows mathematical ideas from machine learning theory to think through “ambient” and ineffable phenomena like moods, vibes, cultural logics, and structures of feeling. He is working on a book titled Big Mood: A Transcendental-Computational Essay in Art and contributes to the experimental literature collective Gauss PDF. Peli has a PhD in mathematically informed literary theory from Harvard Comparative Literature in collaboration with the HUJI Einstein Institute of Mathematics.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:17) Peli’s background
* (10:40) Daniel takes 2 entire minutes to ask how Peli thinks about ~ Art ~
* (26:10) Idealism and art as revealing the nature of reality, extralinguistic experiences of truth through literature
* (52:05) The autoencoder as a way to understand Romantic theories of art (a toy autoencoder sketch follows this entry)
* (1:14:55) More on how Peli thinks about autoencoders
* (1:18:05) Connections to ambient meaning, stimmung/mood
* (1:37:18) Examples of poetry/literature as mathematical experience, aesthetic unity and totalizing worldviews
* (1:51:15) Moods clashing within a single work
* (2:10:14) Modernist writers
* (2:32:46) Outro

Links:
* Peli’s Twitter
* A Theory of Vibe
* Why poetry is a variety of mathematical experience
* Peli’s thesis

Get full access to The Gradient at thegradientpub.substack.com/subscribe
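For listeners who want a concrete handle on the autoencoder that anchors this conversation, here is a minimal, purely illustrative sketch in PyTorch. Nothing below comes from the episode or Peli’s writing; the module and variable names are our own.

import torch
from torch import nn

# A toy autoencoder: compress inputs to a low-dimensional code,
# then reconstruct them. Minimizing reconstruction error forces the
# code layer to capture the dataset's dominant structure.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)  # a stand-in batch; real data would go here
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
optimizer.step()

The compressed code is the mathematical object Peli analogizes to a “vibe”: a compact summary from which the texture of the training set can be regenerated.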
Jul 20, 2023 • 1h 7min

Ryan Drapeau: Battling Fraud with ML at Stripe

In episode 82 of The Gradient Podcast, Daniel Bashir speaks to Ryan Drapeau.

Ryan is a Staff Software Engineer at Stripe and technical lead for Stripe’s Payment Fraud organization, which uses machine learning to help prevent billions of dollars of credit card and payments fraud for businesses every year.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:15) Ryan’s background
* (05:28) Differences between adversarial problems (fraud, content moderation, etc.)
* (08:50) How fraud manifests for businesses
* (11:07) Types of fraud
* (15:49) Fraud as an industry
* (19:05) Information asymmetries between fraudsters and defenders
* (22:40) Fraud as an ML problem and Stripe Radar (a toy classification sketch follows this entry)
* (25:45) Evolution of Stripe Radar
* (31:38) Architectural evolution
* (41:38) Why ResNets for Stripe Radar?
* (44:15) Future architectures for Stripe Radar and the explainability/performance tradeoff
* (48:58) War stories
* (52:55) Federated learning opportunities for Stripe Radar
* (55:50) Vectors for improvement in Stripe’s fraud detection systems
* (59:22) More ways of thinking about the fraud problem, multiclass models
* (1:03:30) Lessons Ryan has picked up from working on fraud
* (1:05:44) Outro

Links:
* How We Built It: Stripe Radar
* Stripe 2022 Update

Get full access to The Gradient at thegradientpub.substack.com/subscribe
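As context for the “fraud as an ML problem” discussion: at its simplest, payment-fraud detection can be framed as binary classification over transaction features. The toy sketch below is purely illustrative and bears no relation to Stripe Radar’s actual features or architecture (which, per the episode, has included ResNet-style deep networks); the features and labels are invented.

from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical transaction features:
# [amount_usd, card_country_mismatch, attempts_last_hour]
X = np.array([[12.0, 0, 1], [950.0, 1, 8], [30.0, 0, 2], [700.0, 1, 6]])
y = np.array([0, 1, 0, 1])  # 1 = fraudulent (hand-made labels for illustration)

clf = LogisticRegression().fit(X, y)

# Score a new charge; a real system would threshold this probability
# against the business's tolerance for false declines vs. missed fraud.
p_fraud = clf.predict_proba([[800.0, 1, 7]])[0, 1]
print(f"estimated fraud probability: {p_fraud:.2f}")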
Jul 13, 2023 • 1h 1min

Shiv Rao: Enabling Better Patient Care with AI

In episode 81 of The Gradient Podcast, Daniel Bashir speaks to Shiv Rao.

Shiv Rao, MD is the co-founder and CEO of Abridge, a healthcare conversation company that uses cutting-edge NLP and generative AI to bring context and understanding to every medical conversation. Shiv previously served as an Executive Vice President at UPMC Enterprises, managing the provider-facing portfolio of technology investments and R&D. He is a practicing cardiologist in UPMC's Heart and Vascular Institute.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:34) Shiv’s medicine/technology/VC background
* (05:45) Difficulties for tech in healthcare and how this informs Shiv’s approach
* (10:52) “Productivity with a smile” and how AI can make medicine feel more human
* (12:35) Shiv’s experiences in medicine and how Abridge’s product helps doctors
* (16:10) How the role of a clinical team could evolve
* (19:30) Abridge’s partnerships and real-life use cases
* (23:00) Shiv’s perspectives on concerns about bias/trust/privacy
* (25:25) Clinical decision support vs “automating doctors”
* (29:07) Transparency and Abridge’s user experience
* (35:20) Algorithmic solutionism vs human-focused approaches to technology development
* (38:50) Ways AI might impact healthcare
* (41:10) Generative AI applications
* (45:00) Generative AI opportunities beyond documentation
* (49:25) Innovation and reducing friction, UX
* (50:56) Why people make wild predictions about AI
* (54:25) What it means to “automate away” a doctor, how we’re misusing the medical workforce
* (56:10) Shiv’s advice for people interested in AI + healthcare
* (1:00:04) Outro

Links:
* Abridge Homepage

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jul 6, 2023 • 1h 48min

Hugo Larochelle: Deep Learning as Science

In episode 80 of The Gradient Podcast, Daniel Bashir speaks to Professor Hugo Larochelle.

Professor Larochelle leads the Montreal Google DeepMind team and is adjunct professor at Université de Montréal and a Canada CIFAR Chair. His research focuses on the study and development of deep learning algorithms.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:38) Prof. Larochelle’s background, working in Bengio’s lab
* (04:53) Prof. Larochelle’s work and connectionism
* (08:20) 2004-2009, work with Bengio
* (08:40) Nonlocal Estimation of Manifold Structure, manifolds and deep learning
* (13:58) Manifold learning in vision and language
* (16:00) Relationship to Denoising Autoencoders and greedy layer-wise pretraining
* (21:00) From input copying to learning about local distribution structure
* (22:30) Zero-Data Learning of New Tasks
* (22:45) The phrase “extend machine learning towards AI” and terminology
* (26:55) Prescient hints of prompt engineering
* (29:10) Daniel goes on totally unnecessary tangent
* (30:00) Methods for training deep networks (strategies and robust interdependent codes)
* (33:45) Motivations for layer-wise pretraining
* (35:15) Robust Interdependent Codes and interactions between neurons in a single network layer
* (39:00) 2009-2011, postdoc in Geoff Hinton’s lab
* (40:00) Reflections on the AlexNet moment
* (41:45) Frustration with methods for evaluating unsupervised methods, NADE (a short note on NADE follows this entry)
* (44:45) How researchers thought about representation learning, toying with objectives instead of architectures
* (47:40) The Restricted Boltzmann Forest
* (50:45) Imposing structure for tractable learning of distributions
* (53:11) 2011-2016 at U Sherbrooke (and Twitter)
* (53:45) How Prof. Larochelle approached research problems
* (56:00) How Domain Adversarial Networks came about
* (57:12) Can we still learn from Restricted Boltzmann Machines?
* (1:02:20) The ~ Infinite ~ Restricted Boltzmann Machine
* (1:06:55) The need for researchers doing different sorts of work
* (1:08:58) 2017-present, at MILA (and Google)
* (1:09:30) Modulating Early Visual Processing by Language, neuroscientific inspiration
* (1:13:22) Representation learning and generalization, what is a good representation (Meta-Dataset, Universal representation transformer layer, universal template, Head2Toe)
* (1:15:10) Meta-Dataset motivation
* (1:18:00) Shifting focus to the problem: good practices for “recycling deep learning”
* (1:19:15) Head2Toe intuitions
* (1:21:40) What are “universal representations” and manifold perspective on datasets, what is the right pretraining dataset
* (1:26:02) Prof. Larochelle’s takeaways from Fortuitous Forgetting in Connectionist Networks (led by Hattie Zhou)
* (1:32:15) Obligatory commentary on The Present Moment and current directions in ML
* (1:36:18) The creation and motivations of the TMLR journal
* (1:41:48) Prof. Larochelle’s takeaways about doing good science, building research groups, and nurturing a research environment
* (1:44:05) Prof. Larochelle’s advice for aspiring researchers today
* (1:47:41) Outro

Links:
* Professor Larochelle’s homepage and Twitter
* Transactions on Machine Learning Research
* Papers
* 2004-2009
* Nonlocal Estimation of Manifold Structure
* Classification using Discriminative Restricted Boltzmann Machines
* Zero-data learning of new tasks
* Exploring Strategies for Training Deep Neural Networks
* Deep Learning using Robust Interdependent Codes
* 2009-2011
* Stacked Denoising Autoencoders
* Tractable multivariate binary density estimation and the restricted Boltzmann forest
* The Neural Autoregressive Distribution Estimator
* Learning Attentional Policies for Tracking and Recognition in Video with Deep Networks
* 2011-2016
* Practical Bayesian Optimization of Machine Learning Algorithms
* Learning Algorithms for the Classification Restricted Boltzmann Machine
* A neural autoregressive topic model
* Domain-Adversarial Training of Neural Networks
* NADE
* An Infinite Restricted Boltzmann Machine
* 2017-present
* Modulating early visual processing by language
* Meta-Dataset
* A Universal Representation Transformer Layer for Few-Shot Image Classification
* Learning a universal template for few-shot dataset generalization
* Impact of aliasing on generalization in deep convolutional networks
* Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning
* Fortuitous Forgetting in Connectionist Networks

Get full access to The Gradient at thegradientpub.substack.com/subscribe
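A brief technical note for context, since NADE recurs throughout this conversation: the Neural Autoregressive Distribution Estimator models a distribution over a D-dimensional binary vector by chaining per-dimension conditionals, each computed by a small network with weights shared across dimensions. Up to notational choices (the symbols below are ours, not the paper's):

p(x) = \prod_{d=1}^{D} p(x_d \mid x_{<d}), \qquad
h_d = \sigma\left(c + W_{\cdot,<d}\, x_{<d}\right), \qquad
p(x_d = 1 \mid x_{<d}) = \sigma\left(b_d + V_{d,\cdot}\, h_d\right)

where \sigma is the logistic sigmoid; sharing W across the conditionals lets all D hidden layers be computed incrementally in O(DH) time, which is what makes the likelihood tractable where an RBM's is not.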
Jun 29, 2023 • 1h 31min

Jeremie Harris: Realistic Alignment and AI Policy

In episode 79 of The Gradient Podcast, Daniel Bashir speaks to Jeremie Harris.

Jeremie is co-founder of Gladstone AI, author of the book Quantum Physics Made Me Do It, and co-host of the Last Week in AI Podcast. Jeremie previously hosted the Towards Data Science podcast and worked on a number of other startups after leaving a PhD in physics.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:37) Jeremie’s physics background and transition to ML
* (05:19) The physicist-to-AI person pipeline, how Jeremie’s background impacts his approach to AI
* (08:20) A tangent on inflationism/deflationism about natural laws (I promise this applies to AI)
* (11:45) How ML implies a particular viewpoint on the above question
* (13:20) Jeremie’s first (recommendation systems) company, how startup founders can make mistakes even when they’ve read Paul Graham essays
* (17:30) Classic startup wisdom, different sorts of startups
* (19:35) OpenAI’s approach in shipping features for DALL-E 2 and generation vs. discrimination as an approach to product
* (24:55) Capabilities and risk
* (26:43) Commentary on fundamental limitations of alignment in LLMs
* (30:45) Intrinsic difficulties in alignment problems
* (41:15) Daniel tries to steel man / defend anti-longtermist arguments (nicely :) )
* (46:23) Anthropic’s paper on asking models to be less biased
* (47:20) Why Jeremie is excited about Anthropic’s Constitutional AI scheme
* (51:05) Jeremie’s thoughts on recent Eliezer discourse
* (56:50) Cheese / task vectors and steerability/controllability in LLMs
* (59:50) Difficulty of one-shot solutions in alignment work, better strategies
* (1:02:00) Lack of theoretical understanding of deep learning systems / alignment
* (1:04:50) Jeremie’s work and perspectives on AI policy
* (1:10:00) Incrementality in convincing policymakers
* (1:14:00) How recent developments impact policy efforts
* (1:16:20) Benefits and drawbacks of open source
* (1:19:30) Arguments in favor of (limited) open source
* (1:20:35) Quantum Physics (not Mechanics) Made Me Do It
* (1:24:10) Some theories of consciousness and corresponding physics
* (1:29:49) Outro

Links:
* Jeremie’s Twitter
* Quantum Physics Made Me Do It
* Gladstone AI

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jun 22, 2023 • 60min

Antoine Blondeau: Alpha Intelligence Capital and Investing in AI

In episode 78 of The Gradient Podcast, Daniel Bashir speaks to Antoine Blondeau.

Antoine is a serial AI entrepreneur and Co-Founder and Managing Partner of Alpha Intelligence Capital. He was chief executive at Dejima when the firm worked on CALO, one of the biggest AI projects in US history and precursor to Apple’s Siri. Later, he co-founded Sentient Technologies, which boasted the title of world’s highest funded AI company in 2016. In 2018, he founded Alpha Intelligence Capital to support future AI unicorns, and has raised more than $300 million, which has been deployed into 31 companies.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:30) Antoine’s background
* (04:00) Dejima and the CALO cognitive assistant (the precursor to Siri)
* (07:35) More detail on CALO
* (10:10) Sentient Technologies and entrepreneurship during the AlexNet moment
* (14:35) Early predictions on scale
* (17:15) Role of evolutionary computation and neuroevolution
* (20:00) Antoine’s motivations for becoming an investor
* (22:30) Alpha Intelligence Capital’s investment focus
* (27:40) Safety and trust issues in fully automated systems
* (37:00) Models of culture, discernment in the use of AI systems
* (39:30) Antoine’s experience as an investor in today’s AI environment
* (44:43) How modern LLMs impact standard advice regarding the appropriateness of cutting-edge technologies in business
* (49:00) Data (and other) moats
* (52:07) Application/research areas Antoine is watching
* (55:00) Antoine’s advice for people watching AI’s current developments
* (58:47) Outro

Links:
* Alpha Intelligence Capital Homepage

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jun 15, 2023 • 2h 21min

Joon Park: Generative Agents and Human-Computer Interaction

In episode 77 of The Gradient Podcast, Daniel Bashir speaks to Joon Park.

Joon is a third-year PhD student at Stanford, advised by Professors Michael Bernstein and Percy Liang. He designs, builds, and evaluates interactive systems that support new forms of human-computer interaction by leveraging state-of-the-art advances in natural language processing such as large language models. His research introduced the concept of, and the techniques for building, generative agents: computational software agents that simulate believable human behavior. Joon’s work has been supported by the Microsoft Research PhD Fellowship, the Stanford School of Engineering Fellowship, and the Siebel Scholarship.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:43) Joon’s path from studio art to social computing / AI
* (05:00) Joon’s perspectives on Human-Computer Interaction (HCI) and its recent evolution
* (06:45) How foundation models enter the picture
* (10:28) On slow algorithms and technology: A Slow Algorithm Improves Users’ Assessments of the Algorithm’s Accuracy
* (12:10) Motivations
* (17:55) The jellybean-counting task, hypotheses
* (22:00) Applications and takeaways
* (28:05) Deliberate engagement in social media / computing systems, incentives
* (32:55) Daniel rants about The Social Dilemma + anti-social-media rhetoric, Joon on the role of academics, framings of addiction
* (39:05) Measuring the Prevalence of Anti-Social Behavior in Online Communities
* (48:30) Statistics on anti-social behavior and anecdotal information, limitations in the paper’s measurements
* (51:45) Participatory and value-sensitive design
* (52:50) “Interaction” in On the Opportunities and Risks of Foundation Models
* (53:45) Broader insights on foundation models and emergent behavior
* (56:50) Joon’s section on interaction
* (1:01:05) Daniel’s bad segue to Social Simulacra: Creating Populated Prototypes for Social Computing Systems
* (1:02:50) Context for Social Simulacra and Generative Agents, why Social Simulacra was tackled first
* (1:24:05) The value of norms
* (1:26:20) Collaborations between designers and developers of social simulacra
* (1:30:00) Generative Agents: Interactive Simulacra of Human Behavior (a memory-retrieval sketch follows this entry)
* (1:30:30) Context / intro
* (1:45:10) On (too much) coherence in generative agents and believability
* (1:52:02) Instruction tuning’s impact on generative agents, model alignment w/ believability goals, desirability of agent conflict / toxic LLMs
* (1:56:55) Release strategies and toxicity in LLMs
* (2:03:05) On designing interfaces and responsible use
* (2:09:05) Capability advances and the capability-safety research gap
* (2:14:12) Worries about LLM integration, human-centered framework for technology release / LLM incorporation
* (2:18:00) Joon’s philosophy as an HCI researcher
* (2:20:39) Outro

Links:
* Joon’s homepage and Twitter
* Research
* A Slow Algorithm Improves Users’ Assessments of the Algorithm’s Accuracy
* Measuring the Prevalence of Anti-Social Behavior in Online Communities
* On the Opportunities and Risks of Foundation Models
* Social Simulacra: Creating Populated Prototypes for Social Computing Systems
* Generative Agents: Interactive Simulacra of Human Behavior

Get full access to The Gradient at thegradientpub.substack.com/subscribe
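One concrete mechanism from the Generative Agents paper discussed here: each agent keeps a memory stream and retrieves memories by combining recency, importance, and relevance. The sketch below is our loose paraphrase of that idea; the function names, dictionary keys, decay constant, and weights are hypothetical, not the paper’s code.

import math, time

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieval_score(memory, query_embedding, now, w=(1.0, 1.0, 1.0)):
    # Weighted sum of recency, importance, and relevance, echoing the
    # paper's retrieval function (constants here are illustrative).
    hours_since_access = (now - memory["last_accessed"]) / 3600
    recency = 0.995 ** hours_since_access        # exponential decay
    importance = memory["importance"] / 10       # e.g. an LLM-rated 1-10 score
    relevance = cosine_similarity(memory["embedding"], query_embedding)
    return w[0] * recency + w[1] * importance + w[2] * relevance

def retrieve(memories, query_embedding, k=3):
    # Top-k memories to condition the agent's next action on.
    now = time.time()
    ranked = sorted(memories,
                    key=lambda m: retrieval_score(m, query_embedding, now),
                    reverse=True)
    return ranked[:k]

Retrieved memories are then fed back into the LLM prompt, which is how the agents in the paper sustain believable long-horizon behavior.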
Jun 8, 2023 • 1h 9min

Christoffer Holmgård: AI for Video Games

In episode 76 of The Gradient Podcast, Andrey Kurenkov speaks to Dr. Christoffer Holmgård.

Dr. Holmgård is a co-founder and the CEO of Modl.ai, which is building an AI engine for game development. Before starting the company, Christoffer was director of the indie game studio Die Gute Fabrik (which is German for "The Good Factory"), and has also done extensive research as an assistant professor in AI and Machine Learning for Games at Northeastern University.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:30) History with video games
* (06:30) History with AI
* (09:40) Modeling stress responses in virtual environments
* (13:30) Play style personas from empirical data
* (17:15) Automating video game testing
* (21:00) Video game development
* (28:15) modl.ai
* (33:45) Automated playtesting with procedural personas through MCTS with evolved heuristics (a generic MCTS sketch follows this entry)
* (35:40) Thoughts on recent AI progress
* (40:50) RL for game testing
* (44:40) AI in Minecraft
* (47:50) Impact of AI on video game development
* (01:00:00) Ethics of Gen AI
* (01:06:20) Hobbies / Interests
* (01:08:30) Outro

Get full access to The Gradient at thegradientpub.substack.com/subscribe
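The automated-playtesting work mentioned above builds on Monte Carlo Tree Search. Its core ingredient is the UCB1 selection rule, which balances exploiting high-value moves against exploring under-visited ones. The sketch below is a generic illustration, not code from Dr. Holmgård’s papers; the child-node representation is our own.

import math

def ucb1(total_value, visits, parent_visits, c=math.sqrt(2)):
    # UCB1: average value (exploitation) plus an exploration bonus that
    # grows for children visited rarely relative to their parent.
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children):
    # MCTS selection step: pick the child maximizing UCB1. A full MCTS
    # loop would then expand, simulate a playout, and back up the result.
    parent_visits = sum(ch["visits"] for ch in children) + 1  # simplification
    return max(children, key=lambda ch: ucb1(ch["value"], ch["visits"], parent_visits))

In persona-based playtesting, the playout evaluation is swapped for a persona-specific utility (e.g. a "completionist" vs. a "speedrunner"), so the same search machinery yields different believable play styles.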
Jun 1, 2023 • 60min

Riley Goodside: The Art and Craft of Prompt Engineering

In episode 75 of The Gradient Podcast, Daniel Bashir speaks to Riley Goodside.

Riley is a Staff Prompt Engineer at Scale AI. Riley began posting GPT-3 prompt examples and screenshot demonstrations in 2022. He previously worked as a data scientist at OkCupid, Grindr, and CopyAI.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:37) Riley’s journey to becoming the first Staff Prompt Engineer
* (02:00) Data science background in the online dating industry
* (02:15) Sabbatical + catching up on LLM progress
* (04:00) AI Dungeon and first taste of GPT-3
* (05:10) Developing on Codex, ideas about integrating Codex with Jupyter Notebooks, start of posting on Twitter
* (08:30) “LLM ethnography”
* (09:12) The history of prompt engineering: in-context learning, Reinforcement Learning from Human Feedback (RLHF)
* (10:20) Models used to be harder to talk to
* (10:45) The three eras
* (10:45) 1 - Pre-trained LM era: simple next-word predictors
* (12:54) 2 - Instruction tuning
* (16:13) 3 - RLHF and overcoming instruction tuning’s limitations
* (19:24) Prompting as subtractive sculpting, prompting and AI safety
* (21:17) Riley on RLHF and safety
* (24:55) Riley’s most interesting experiments and observations
* (25:50) Mode collapse in RLHF models
* (29:24) Prompting models with very long instructions
* (33:13) Explorations with regular expressions, chain-of-thought prompting styles (a toy chain-of-thought example follows this entry)
* (36:32) Theories of in-context learning and prompting, why certain prompts work well
* (42:20) Riley’s advice for writing better prompts
* (49:02) Debates over prompt engineering as a career, relevance of prompt engineers
* (58:55) Outro

Links:
* Riley’s Twitter and LinkedIn
* Talk: LLM Prompt Engineering and RLHF: History and Techniques

Get full access to The Gradient at thegradientpub.substack.com/subscribe
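As a minimal illustration of the chain-of-thought prompting style mentioned in the outline (our own toy example, not one of Riley’s): appending a cue such as "Let's think step by step" pushes a model to emit intermediate reasoning before committing to an answer.

# A toy chain-of-thought prompt. `complete` below is a hypothetical
# stand-in for whatever LLM completion API you use.
def complete(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM API of choice")

prompt = (
    "Q: A cafe sells coffee for $3 and muffins for $2. "
    "I buy 2 coffees and 3 muffins. How much do I spend?\n"
    "A: Let's think step by step."
)
# With the cue appended, models typically first write out the intermediate
# arithmetic (2 * $3 = $6, 3 * $2 = $6, $6 + $6 = $12) and then answer $12,
# which is often more reliable than requesting the final answer directly.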
