Chapters
Introduction
00:00 • 2min
The Evolution of RL Research
01:59 • 2min
The Importance of Using Random Seeds to Estimate Confidence Intervals
04:16 • 3min
BBF: A Collection of Improvements on Top of DQN
06:52 • 2min
Data-Efficient Rainbow (DER)
08:39 • 2min
The Future of Deep Reinforcement Learning
10:17 • 3min
Breaking the Replay Ratio Barrier
12:47 • 3min
The Evolution of DQN Networks
15:48 • 2min
The Improvements in BBF
18:06 • 2min
The Importance of Network Scaling in Model-Free Agents
19:56 • 3min
The Differences Between Network Scaling and Replay Ratio Scaling
23:10 • 3min
The Evolution of the Atari Environment
25:53 • 3min
How to Use Model-Free Methods to Improve Performance
29:20 • 3min
The Importance of Decision Time Planning in Atari
32:01 • 2min
The Importance of Model-Based Learning
33:57 • 3min
The Advantages of Model-Based and Model-Free Algorithms
37:22 • 2min
The Evolution of Self-Predictive Representation Learning
39:50 • 4min
The Differences Between BBF and Dreamer V3 on Atari 100K
43:31 • 2min
Exploration in BBF
45:40 • 3min
The Future of Search
48:24 • 4min
BBF: A Better Learning Algorithm Than Rainbow
52:53 • 2min
How to Train a Big Network Effectively With No Prior Knowledge
55:08 • 4min
The Importance of Prior Knowledge
59:26 • 2min
The Balance Between Prior Knowledge and the Rest of the Algorithm
01:01:31 • 2min
The Future of RL
01:03:25 • 4min
ChatGPT and RL: A Key Component
01:07:06 • 2min
The Importance of RL in LLMs
01:08:36 • 2min