TalkRL: The Reinforcement Learning Podcast

Aravind Srinivas 2

May 9, 2022
Chapters
1. Introduction (00:00 • 2min)
2. Using Pre-Trained Self-Supervised Representations in Deep Learning? (01:33 • 2min)
3. RL and Decision Transformers: A Sequence Model (03:13 • 5min)
4. What Does GPT Look Like as a Transformer? (08:05 • 3min)
5. How to Turn RL Into Supervised Learning (11:07 • 3min)
6. Why Isn't Unsupervised Learning Working for RL? (14:35 • 3min)
7. Is It Important to Extrapolate Beyond the Training Data? (17:17 • 3min)
8. TD Learning (20:26 • 4min)
9. Is It Possible to Write the Ultimate RL Algorithm on a Whiteboard? (24:34 • 2min)
10. Is It Really the Self-Supervised Mechanism? (26:38 • 2min)
11. The Trend of Continued Diversity Doesn't Look Likely to Stop (28:56 • 2min)
12. Is the Decision Transformer Really Relevant in the Big Data Regime? (30:59 • 3min)
13. Decision Transformer (33:59 • 2min)
14. Is There a Future for Decision Transformers? (36:24 • 3min)
15. Decision Transformer: Model-Free or Model-Based? (39:19 • 3min)
16. How to Evaluate a VideoGPT Model (42:04 • 3min)
17. VideoGPT (44:45 • 2min)
18. The Video Generation Problem: Is It Really a Problem? (46:40 • 4min)
19. Is VQ-VAE a Discrete VAE? (50:51 • 2min)
20. Is There Something Causal Needed to Make Deep Learning Work? (52:58 • 2min)
21. Are You Exploring or Exploiting? (54:34 • 4min)