Jakob Foerster's research focus is identifying blind spots in the research landscape and filling them. He gave examples from his past work, such as studying multi-agent learning when it was still a gap in the field. Going forward, he aims to understand the limitations of current methods and address them.
Jakob Foerster is fascinated by unsupervised environment design (UED) and its application in multi-agent learning. He discusses how UED involves discovering environment distributions that lead to the desired training outcomes and allow agents to generalize to corner cases. The challenge lies in addressing the interaction of learning systems and bridging the sim-to-real gap effectively.
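Below is a self-contained toy sketch of the regret-style curriculum idea behind UED: a teacher proposes environments, and the student trains on those where it most underperforms a stronger reference policy. The ToyAgent class and the one-dimensional "difficulty" parameter are illustrative assumptions for this sketch, not the method discussed in the episode.

```python
import random

# Toy sketch of a regret-based UED loop (illustrative assumptions only):
# a teacher proposes environment parameters, and the student trains
# preferentially on environments where it lags a reference policy.

class ToyAgent:
    """Scores an environment 'difficulty' by how close it is to its skill level."""
    def __init__(self, skill=0.0, lr=0.1):
        self.skill, self.lr = skill, lr

    def ret(self, difficulty):
        return -abs(self.skill - difficulty)          # higher return is better

    def update(self, difficulty):
        self.skill += self.lr * (difficulty - self.skill)  # adapt toward the task

def ued_loop(student, reference, steps=200, buffer_size=10):
    curriculum = []                                   # (regret, difficulty) pairs
    for _ in range(steps):
        difficulty = random.uniform(0.0, 10.0)        # teacher proposes an environment
        regret = reference.ret(difficulty) - student.ret(difficulty)
        curriculum.append((regret, difficulty))
        curriculum.sort(key=lambda x: x[0], reverse=True)
        curriculum = curriculum[:buffer_size]         # keep the highest-regret levels
        student.update(random.choice(curriculum)[1])  # train on a challenging level
    return student

student = ued_loop(ToyAgent(skill=0.0), reference=ToyAgent(skill=5.0))
print(f"student skill after UED-style curriculum: {student.skill:.2f}")
```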
Multi-agent RL presents challenges that differ from single-agent RL. Jakob Foerster emphasizes that in multi-agent settings, the convergence and performance guarantees of the single-agent case break down. Non-stationarity and equilibrium selection become key challenges, leading to unexpected phenomena in games like the iterated prisoner's dilemma. Understanding these challenges is crucial for effective multi-agent learning.
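As a concrete illustration of how naive independent learning plays out, the snippet below has two agents each treat the other as part of a stationary environment while tracking action values in the prisoner's dilemma; they typically settle on mutual defection even though mutual cooperation pays more. The payoff numbers and update rule are a standard textbook toy, not anything specific from the episode.

```python
import random

# Two independent learners in the prisoner's dilemma (illustrative toy).
# Actions: 0 = cooperate, 1 = defect.
# PAYOFFS[(my_action, their_action)] -> my reward.
PAYOFFS = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

def play(episodes=5000, lr=0.1, eps=0.1):
    q = [[0.0, 0.0], [0.0, 0.0]]                      # q[agent][action]
    for _ in range(episodes):
        acts = []
        for i in range(2):
            if random.random() < eps:
                acts.append(random.randint(0, 1))     # occasional exploration
            else:
                acts.append(0 if q[i][0] >= q[i][1] else 1)  # greedy choice
        for i in range(2):
            r = PAYOFFS[(acts[i], acts[1 - i])]
            q[i][acts[i]] += lr * (r - q[i][acts[i]]) # independent value update
    return q

q_values = play()
print("agent 0 values (cooperate, defect):", [round(v, 2) for v in q_values[0]])
print("agent 1 values (cooperate, defect):", [round(v, 2) for v in q_values[1]])
```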
Jakob Foerster discusses the role of multi-agent learning on the path towards powerful AI and potentially AGI. He acknowledges that the interaction of intelligent agents has likely driven the development of intelligence throughout evolution. While current large-scale language models play a significant role in achieving human-like abilities, Foerster highlights multi-agent interaction and meta-evolution as potential avenues for surpassing human abilities.
Jakob Foerster's classic work on learning to communicate with deep multi-agent RL explored how agents could develop communication protocols. While models like GPT have since shown promising results on language-based tasks, Foerster is interested in combining these models with multi-agent learning to explore the emergence of novel skills and capabilities.
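The toy below mirrors the referential-game flavour of that line of work under simplifying assumptions: a speaker sees a hidden target and emits a discrete symbol, a listener guesses the target from the symbol, and a shared reward reinforces whichever symbol-guess pairing succeeded. Tabular counts stand in for the deep networks used in the actual paper, so this is a sketch of the setup, not the paper's method.

```python
import random

# Toy signaling game: a shared reward shapes a speaker-listener protocol.
N_TARGETS, N_SYMBOLS = 3, 3

def sample(weights):
    """Draw an index with probability proportional to its weight."""
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

speaker = [[1.0] * N_SYMBOLS for _ in range(N_TARGETS)]   # target -> symbol preferences
listener = [[1.0] * N_TARGETS for _ in range(N_SYMBOLS)]  # symbol -> guess preferences

for _ in range(20000):
    target = random.randrange(N_TARGETS)
    symbol = sample(speaker[target])      # speaker "talks"
    guess = sample(listener[symbol])      # listener interprets
    if guess == target:                   # shared reward reinforces the protocol
        speaker[target][symbol] += 1.0
        listener[symbol][guess] += 1.0

# A usable (often, though not always, one-to-one) mapping tends to emerge.
for t in range(N_TARGETS):
    best = max(range(N_SYMBOLS), key=lambda s: speaker[t][s])
    print(f"target {t} -> symbol {best}")
```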
Jakob Foerster on Multi-Agent learning, Cooperation vs Competition, Emergent Communication, Zero-shot coordination, Opponent Shaping, agents for Hanabi and Prisoner's Dilemma, and more.
Jakob Foerster is an Associate Professor at the University of Oxford.
Featured References
Learning with Opponent-Learning Awareness
Jakob N. Foerster, Richard Y. Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, Igor Mordatch
Model-Free Opponent Shaping
Chris Lu, Timon Willi, Christian Schroeder de Witt, Jakob Foerster
Off-Belief Learning
Hengyuan Hu, Adam Lerer, Brandon Cui, David Wu, Luis Pineda, Noam Brown, Jakob Foerster
Learning to Communicate with Deep Multi-Agent Reinforcement Learning
Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, Shimon Whiteson
Adversarial Cheap Talk
Chris Lu, Timon Willi, Alistair Letcher, Jakob Foerster
Cheap Talk Discovery and Utilization in Multi-Agent Reinforcement Learning
Yat Long Lo, Christian Schroeder de Witt, Samuel Sokota, Jakob Nicolaus Foerster, Shimon Whiteson