

25 - Cooperative AI with Caspar Oesterheld
Oct 3, 2023
Caspar Oesterheld discusses cooperative AI, its applications, and interactions between AI systems. They explore AI arms races, the limitations of game theory, and the challenges of aligning AI with human values. The episode also covers regret minimization in decision-making, the multi-armed bandit problem, logical induction, safe Pareto improvements, and similarity-based cooperation. They highlight the importance of communication and enforcement mechanisms, and the complexities of achieving effective cooperation and alignment in AI systems.
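As a rough illustration of the regret-minimization framing named above (not a method from the episode), here is a minimal sketch of the standard UCB1 strategy for the multi-armed bandit problem; the `pull` callback, the `ucb1_bandit` helper, and the arm probabilities are illustrative assumptions.

```python
import math
import random

def ucb1_bandit(pull, n_arms, horizon):
    """Run the UCB1 regret-minimization strategy on a multi-armed bandit.

    `pull(arm)` is assumed to return a reward in [0, 1]; the names here are
    illustrative, not taken from the episode.
    """
    counts = [0] * n_arms       # times each arm has been pulled
    totals = [0.0] * n_arms     # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1         # pull every arm once to initialize estimates
        else:
            # choose the arm with the highest optimistic (mean + bonus) estimate
            arm = max(
                range(n_arms),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = pull(arm)
        counts[arm] += 1
        totals[arm] += reward
    return totals, counts

# Example: three Bernoulli arms with unknown success probabilities.
probs = [0.2, 0.5, 0.8]
totals, counts = ucb1_bandit(lambda a: float(random.random() < probs[a]), 3, 10_000)
print(counts)  # the best arm (index 2) should dominate, keeping regret sublinear
```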
Chapters
Introduction
00:00 • 2min
Cooperative AI and Existential Risks
01:57 • 14min
AI Capabilities and Strategic Complexity
16:08 • 19min
Regret Minimization in Decision-Making
34:59 • 8min
The Multi-Armed Bandit Problem and Regret Minimization
43:20 • 19min
Logical Induction and Regret Minimization
01:02:38 • 16min
Balancing Payouts and Ensuring Infinite Money
01:18:45 • 6min
Using Cooperative AI for Game Theory
01:24:26 • 6min
Safe Pareto Improvements in Equilibrium Selection
01:30:40 • 6min
Commitments and Mutually Ignoring Commitments
01:36:31 • 4min
Safe Pareto Improvements and Equilibrium Selection
01:40:09 • 11min
Safe Pareto Improvements and Potential Conflicts
01:51:19 • 25min
Similarity-Based Cooperation and Cooperative Equilibria
02:15:53 • 14min
Cooperative AI and the Importance of Similarity
02:29:43 • 21min
Cooperative AI Lab at Carnegie Mellon University and How to Join
02:50:25 • 2min
Connection between Bounded Rational Inductive Agents and Similarity-based Cooperation
02:52:53 • 3min
Taking an Outside Perspective on Improving AI Agents
02:55:30 • 4min
Equilibrium Selection and Ways to Learn More
02:59:30 • 3min