Chapters
Introduction
00:00 • 2min
Multi-Agent Human-in-the-Loop Learning and Some of Its Applications
01:46 • 2min
How Did You Get Into Computer Vision?
03:26 • 4min
Human-in-the-Loop Learning
07:18 • 2min
Cogment Paper
09:27 • 5min
The Use of AI and RL in Safety-Critical Situations
14:49 • 2min
Cogment Verse
16:39 • 2min
Asymmetric Self-Play for Automatic Goal Discovery in Robotic Manipulation
18:20 • 3min
Is It Better to Train a Student to Find a Goal Condition?
20:51 • 3min
Is There a Chicken and Egg Problem?
23:32 • 2min
Multi-Teacher Approach to Self-Play in Deep Reinforcement Learning
25:19 • 2min
Multi-Teachers - Is There Any Alternative to Fix the Distribution?
26:59 • 2min
Using Multiple Teachers Is a Good Idea
28:37 • 3min
The Relationship Between PAIRED and This System
31:12 • 2min
The Interaction of Continuous Coordination
32:55 • 2min
Hanabi Is a Very Challenging Card Game
35:00 • 2min
Can Agents Collaborate?
36:38 • 2min
Co-Operative Game
38:57 • 2min
Is There an Alternative to Population-Based Training?
40:49 • 3min
How to Navigate the Synthetically Accessible Chemical Space
43:36 • 2min
How to Find the Reactant in a Continuous Space?
45:42 • 4min
How Do You Represent the Chemicals?
49:52 • 2min
Are You Using Graph Representations in Drug Discovery?
51:40 • 3min
A Case of Coming Back to Chess
54:40 • 2min
Is Something Missing in the Design?
56:51 • 3min
Inductive Bias
59:29 • 2min
Is There a Failure of the Function Proxy?
01:01:20 • 2min
What Do You Think About Explainability in Chess?
01:03:16 • 2min
Scaling Laws - Is Scaling All You Need?
01:05:15 • 3min