Chapters
Introduction
00:00 • 2min
The Future of Machine Learning
01:59 • 2min
The History of Random Exploration
04:17 • 6min
The Non-Causal Model of Predicting the Next State in a Stack of Frames
10:04 • 3min
The Inverse Dynamics Model for Learning to Play Minecraft
12:50 • 2min
AI-GAs: A Research Paradigm for Machine Learning
14:37 • 4min
AI-GAs: A Paradigm for Automated Training Environments
18:34 • 2min
The Future of Quality Diversity
20:30 • 4min
How to Adapt to Injury in a Fast Way
24:07 • 3min
How to Scale MAP-Elites to Two More Dimensions
27:10 • 4min
The Achilles Heel of Reinforcement Learning
30:52 • 2min
The Pathologies of Detachment in Reinforcement Learning
33:09 • 6min
How to Save a Simulator State in a Video Game
39:08 • 5min
The Benefits of the Policy Version of the Algorithm
44:15 • 2min
How to Use Go-Explore to Solve Hard RL Problems
46:15 • 3min
Open-Ended Algorithms in Machine Learning
49:31 • 5min
The Goal of Open-Ended Learning Systems
54:16 • 5min
AI-GAs and the ChatGPTs of the World
59:04 • 5min
The Future of AGI
01:03:46 • 3min
The Non-Random Part of Evolution
01:06:26 • 5min