

TalkRL: The Reinforcement Learning Podcast
Robin Ranjit Singh Chauhan
TalkRL podcast is All Reinforcement Learning, All the Time.
In-depth interviews with brilliant people at the forefront of RL research and practice.
Guests from places like MILA, OpenAI, MIT, DeepMind, Berkeley, Amii, Oxford, Google Research, Brown, Waymo, Caltech, and Vector Institute.
Hosted by Robin Ranjit Singh Chauhan.
Episodes

Feb 10, 2025 • 1h 22min
Abhishek Naik on Continuing RL & Average Reward
Abhishek Naik, a postdoctoral fellow at the National Research Council of Canada, recently completed his PhD in reinforcement learning under Rich Sutton. He explores average reward methods and their implications for continuing (non-episodic) decision-making in AI. The discussion dives into applications in space exploration and challenges in resource allocation, drawing on examples like Mars rovers. Abhishek emphasizes the power of first-principles thinking and highlights how AI advances are shaping spacecraft control and future missions.

Dec 23, 2024 • 18min
NeurIPS 2024 RL Meetup Hot Takes: What sucks about RL?
What do RL researchers complain about after hours at the bar? In this "Hot Takes" episode, we find out! Recorded at The Pearl in downtown Vancouver, during the RL meetup after a day of NeurIPS 2024. Special thanks to "David Beckham" for the inspiration :)

Sep 20, 2024 • 13min
RLC 2024 - Posters and Hallways 5
David Radke from the Chicago Blackhawks shares insights on using reinforcement learning in professional sports to enhance team performance. Abhishek Naik discusses the significance of continuing reinforcement learning and average reward, sparking a conversation about adaptability in AI. Daphne Cornelisse dives into autonomous driving and multi-agent systems, focusing on how to improve human-like behavior. Shray Bansal examines cognitive bias in human-AI teamwork, while Claas Voelcker tackles the complexities of hopping in reinforcement learning. Each guest brings a unique perspective on cutting-edge research.

Sep 19, 2024 • 5min
RLC 2024 - Posters and Hallways 4
David Abel from DeepMind dives into the 'Three Dogmas of Reinforcement Learning,' offering fresh insights on foundational principles. Kevin Wang from Brown discusses innovative variable depth search methods for Monte Carlo Tree Search, enhancing efficiency. Ashwin Kumar from Washington University addresses fairness in resource allocation, highlighting ethical implications. Finally, Prabhat Nagarajan from UAlberta delves into value overestimation, revealing its impact on decision-making in RL. This dynamic conversation touches on pivotal advancements and challenges in the field.

Sep 18, 2024 • 7min
RLC 2024 - Posters and Hallways 3
Posters and Hallway episodes are short interviews and poster summaries. Recorded at RLC 2024 in Amherst MA. Featuring:
0:01 Kris De Asis from Openmind on Time Discretization
2:23 Anna Hakhverdyan from U of Alberta on Online Hyperparameters
3:59 Dilip Arumugam from Princeton on Information Theory and Exploration
5:04 Micah Carroll from UC Berkeley on Changing preferences and AI alignment

Sep 16, 2024 • 16min
RLC 2024 - Posters and Hallways 2
Posters and Hallway episodes are short interviews and poster summaries. Recorded at RLC 2024 in Amherst MA. Featuring:
0:01 Hector Kohler from Centre Inria de l'Université de Lille with "Interpretable and Editable Programmatic Tree Policies for Reinforcement Learning"
2:29 Quentin Delfosse from TU Darmstadt on "Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents"
4:15 Sonja Johnson-Yu from Harvard on "Understanding biological active sensing behaviors by interpreting learned artificial agent policies"
6:42 Jannis Blüml from TU Darmstadt on "OCAtari: Object-Centric Atari 2600 Reinforcement Learning Environments"
8:20 Cameron Allen from UC Berkeley on "Resolving Partial Observability in Decision Processes via the Lambda Discrepancy"
9:48 James Staley from Tufts on "Agent-Centric Human Demonstrations Train World Models"
14:54 Jonathan Li from Rensselaer Polytechnic Institute

Sep 10, 2024 • 6min
RLC 2024 - Posters and Hallways 1
Posters and Hallway episodes are short interviews and poster summaries. Recorded at RLC 2024 in Amherst MA. Featuring:
0:01 Ann Huang from Harvard on Learning Dynamics and the Geometry of Neural Dynamics in Recurrent Neural Controllers
1:37 Jannis Blüml from TU Darmstadt on HackAtari: Atari Learning Environments for Robust and Continual Reinforcement Learning
3:13 Benjamin Fuhrer from NVIDIA on Gradient Boosting Reinforcement Learning
3:54 Paul Festor from Imperial College London on Evaluating the impact of explainable RL on physician decision-making in high-fidelity simulations: insights from eye-tracking metrics

Sep 2, 2024 • 8min
Finale Doshi-Velez on RL for Healthcare @ RLC 2024
Finale Doshi-Velez is a Professor at the Harvard Paulson School of Engineering and Applied Sciences. This off-the-cuff interview was recorded at UMass Amherst during the workshop day of the RL Conference (RLC) on August 9th, 2024. Host notes: I've been a fan of some of Prof Doshi-Velez's past work on clinical RL and have hoped to feature her for some time, so I jumped at the chance to get a few minutes of her thoughts -- even though you can tell I was not prepared and a bit flustered, tbh. Thanks to Prof Doshi-Velez for taking a moment for this, and I hope we cross paths in the future for a more in-depth interview.
References:
Finale Doshi-Velez Homepage @ Harvard
Finale Doshi-Velez on Google Scholar

Aug 28, 2024 • 16min
David Silver 2 - Discussion after Keynote @ RLC 2024
In a dynamic discussion, David Silver, a leading figure in reinforcement learning, dives into the nuances of meta-learning and planning algorithms. He explores how function approximators can enhance RL during inference and contrasts human cognition with machine learning systems in tackling complex problems. Silver also discusses the recent advancements in RL algorithms mentioned during his keynote at RLC 2024, highlighting ongoing innovations in the field.

Aug 26, 2024 • 11min
David Silver @ RLC 2024
David Silver, a principal research scientist at DeepMind and a professor at UCL, dives deep into the evolution of reinforcement learning. He discusses the fascinating transition of AlphaFold from RL to supervised learning for protein folding and highlights RL's potential in protein design. Silver also reflects on how personal health impacts research output and shares insights on AlphaZero's learning strategies in various games. He encourages aspiring researchers to embrace boldness in their endeavors and sketches his journey towards advancing artificial general intelligence.