TalkRL: The Reinforcement Learning Podcast

Max Schwarzer

Aug 8, 2023
Chapters
1. Introduction (00:00 • 2min)
2. The Evolution of RL Research (01:59 • 2min)
3. The Importance of Using Random Seeds to Estimate Confidence Intervals (04:16 • 3min)
4. BBF: A Collection of Improvements on Top of DQN (06:52 • 2min)
5. Data-Efficient Rainbow: A New Algorithm for Data Efficiency (08:39 • 2min)
6. The Future of Deep Reinforcement Learning (10:17 • 3min)
7. Breaking the Replay Ratio Barrier (12:47 • 3min)
8. The Evolution of DQN Networks (15:48 • 2min)
9. The Improvements in BBF (18:06 • 2min)
10. The Importance of Network Scaling in Model-Free Agents (19:56 • 3min)
11. The Differences Between Network Scaling and Replay Ratio Scaling (23:10 • 3min)
12. The Evolution of the Atari Environment (25:53 • 3min)
13. How to Use Model-Free Methods to Improve Performance (29:20 • 3min)
14. The Importance of Decision-Time Planning in Atari (32:01 • 2min)
15. The Importance of Model-Based Learning (33:57 • 3min)
16. The Advantages of Model-Based and Model-Free Algorithms (37:22 • 2min)
17. The Evolution of Self-Predictive Representation Learning (39:50 • 4min)
18. The Differences Between BBF and DreamerV3 on Atari 100K (43:31 • 2min)
19. Exploration in BBF (45:40 • 3min)
20. The Future of Search (48:24 • 4min)
21. BBF: A Better Learning Algorithm Than Rainbow (52:53 • 2min)
22. How to Train a Big Network Effectively With No Prior Knowledge (55:08 • 4min)
23. The Importance of Prior Knowledge (59:26 • 2min)
24. The Balance Between Prior Knowledge and the Rest of the Algorithm (01:01:31 • 2min)
25. The Future of RL (01:03:25 • 4min)
26. ChatGPT and RL: A Key Component (01:07:06 • 2min)
27. The Importance of RL in LLMs (01:08:36 • 2min)