The Gradient: Perspectives on AI

Alex Tamkin on Self-Supervised Learning and Large Language Models

Nov 11, 2021
Chapters
1. Introduction (00:00 • 3min)
2. AI and the Future of Photography (02:55 • 2min)
3. How to Keep Doing Research (04:27 • 2min)
4. The Paradigm Shift in Deep Learning (06:22 • 2min)
5. How to Scale Up AI (08:03 • 5min)
6. The Benefits of ImageNet for Language Modeling (12:59 • 3min)
7. Viewmaker Networks: Learning Views for Unsupervised Representation Learning (16:15 • 3min)
8. The Future of Neural Networks for Contrastive Learning (19:35 • 3min)
9. The Benefits of Self-Supervised Learning (22:06 • 3min)
10. The Importance of Input-Dependent Views (25:05 • 2min)
11. The Journey of Augmentation in Supervised Learning (26:50 • 3min)
12. The Power of Practice (30:19 • 2min)
13. DABS: A Benchmark for Self-Supervised Learning (31:54 • 5min)
14. The Principles of Self-Supervised Learning (36:44 • 2min)
15. The Importance of Pre-Training Objectives (39:03 • 3min)
16. How to Train a Model to Do Two Domains (41:50 • 2min)
17. The Future of Wearable Sensors (44:10 • 2min)
18. The Interconnectedness of Machine Learning (45:58 • 3min)
19. The Complexity of Large Language Models (48:47 • 5min)
20. The Importance of Consequences in Research (53:31 • 2min)
21. How to Foster a Healthy and Inclusive Research Culture (55:23 • 4min)
22. The Importance of Hanging Out (59:36 • 2min)
23. The Importance of Mental Well-Being (01:01:10 • 2min)
24. How to Be a Better Mentor for Undergraduates (01:03:31 • 2min)
25. How to Stay Sane During the Pandemic (01:05:36 • 2min)
26. Gradient Podcast: How I Learned to Photograph (01:07:43 • 3min)