TalkRL: The Reinforcement Learning Podcast

Robin Ranjit Singh Chauhan
Sep 18, 2024 • 7min

RLC 2024 - Posters and Hallways 3

Posters and Hallway episodes are short interviews and poster summaries. Recorded at RLC 2024 in Amherst, MA. Featuring:
0:01 Kris De Asis from Openmind on Time Discretization
2:23 Anna Hakhverdyan from U of Alberta on Online Hyperparameters
3:59 Dilip Arumugam from Princeton on Information Theory and Exploration
5:04 Micah Carroll from UC Berkeley on Changing Preferences and AI Alignment
Sep 16, 2024 • 16min

RLC 2024 - Posters and Hallways 2

Posters and Hallway episodes are short interviews and poster summaries. Recorded at RLC 2024 in Amherst, MA. Featuring:
0:01 Hector Kohler from Centre Inria de l'Université de Lille with "Interpretable and Editable Programmatic Tree Policies for Reinforcement Learning"
2:29 Quentin Delfosse from TU Darmstadt on "Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents"
4:15 Sonja Johnson-Yu from Harvard on "Understanding biological active sensing behaviors by interpreting learned artificial agent policies"
6:42 Jannis Blüml from TU Darmstadt on "OCAtari: Object-Centric Atari 2600 Reinforcement Learning Environments"
8:20 Cameron Allen from UC Berkeley on "Resolving Partial Observability in Decision Processes via the Lambda Discrepancy"
9:48 James Staley from Tufts on "Agent-Centric Human Demonstrations Train World Models"
14:54 Jonathan Li from Rensselaer Polytechnic Institute
Sep 10, 2024 • 6min

RLC 2024 - Posters and Hallways 1

Posters and Hallway episodes are short interviews and poster summaries. Recorded at RLC 2024 in Amherst, MA. Featuring:
0:01 Ann Huang from Harvard on Learning Dynamics and the Geometry of Neural Dynamics in Recurrent Neural Controllers
1:37 Jannis Blüml from TU Darmstadt on HackAtari: Atari Learning Environments for Robust and Continual Reinforcement Learning
3:13 Benjamin Fuhrer from NVIDIA on Gradient Boosting Reinforcement Learning
3:54 Paul Festor from Imperial College London on Evaluating the impact of explainable RL on physician decision-making in high-fidelity simulations: insights from eye-tracking metrics
Sep 2, 2024 • 8min

Finale Doshi-Velez on RL for Healthcare @ RLC 2024

Finale Doshi-Velez is a Professor at the Harvard Paulson School of Engineering and Applied Sciences. This off-the-cuff interview was recorded at UMass Amherst during the workshop day of the RL Conference on August 9th, 2024.
Host notes: I've been a fan of some of Prof Doshi-Velez's past work on clinical RL and have hoped to feature her for some time, so I jumped at the chance to get a few minutes of her thoughts -- even though you can tell I was not prepared and a bit flustered, tbh. Thanks to Prof Doshi-Velez for taking a moment for this, and I hope to cross paths in the future for a more in-depth interview.
References
Finale Doshi-Velez Homepage @ Harvard
Finale Doshi-Velez on Google Scholar
Aug 28, 2024 • 16min

David Silver 2 - Discussion after Keynote @ RLC 2024

In a dynamic discussion, David Silver, a leading professor in reinforcement learning, dives into the nuances of meta-learning and planning algorithms. He explores how function approximators can enhance RL during inference and contrasts human cognition with machine learning systems in tackling complex problems. Silver also discusses the recent advancements in RL algorithms mentioned during his keynote at RLC 2024, highlighting ongoing innovations in the field.
Aug 26, 2024 • 11min

David Silver @ RLC 2024

David Silver, a principal research scientist at DeepMind and a professor at UCL, dives deep into the evolution of reinforcement learning. He discusses the fascinating transition of AlphaFold from RL to supervised learning for protein folding and highlights RL's potential in protein design. Silver also reflects on how personal health impacts research output and shares insights on AlphaZero's learning strategies in various games. He encourages aspiring researchers to embrace boldness in their endeavors and sketches his journey towards advancing artificial general intelligence.
Apr 8, 2024 • 40min

Vincent Moens on TorchRL

Vincent Moens, Applied ML Research Scientist at Meta and author of TorchRL, discusses the design philosophy and challenges in creating a versatile reinforcement learning library. He also shares his research journey from medicine to ML, discusses the evolution of RL's perception in the AI community, and encourages active engagement in the open-source community.
Mar 25, 2024 • 34min

Arash Ahmadian on Rethinking RLHF

Arash Ahmadian discusses preference training in language models, exploring methods like PPO. The podcast dives into the REINFORCE Leave-One-Out (RLOO) method, REINFORCE versus vanilla policy gradient in deep RL, and token-level actions. Reward structures and optimization techniques in RLHF are also explored, emphasizing the importance of curated reward signals.
Mar 11, 2024 • 22min

Glen Berseth on RL Conference

Glen Berseth is an assistant professor at the Université de Montréal, a core academic member of the Mila - Quebec AI Institute, a Canada CIFAR AI Chair, a member of the Institut Courtois, and co-director of the Robotics and Embodied AI Lab (REAL).
Featured Links
Reinforcement Learning Conference
Closing the Gap between TD Learning and Supervised Learning -- A Generalisation Point of View, by Raj Ghugare, Matthieu Geist, Glen Berseth, Benjamin Eysenbach
Mar 7, 2024 • 1h 8min

Ian Osband

Ian Osband, a research scientist at OpenAI, discusses information theory and RL, joint predictions, and Epistemic Neural Networks. They explore challenges in reinforcement learning, handling uncertainty, and balancing exploration versus exploitation. The podcast delves into the importance of joint predictive distributions, Thompson sampling approximation, and uncertainty frameworks in Large Language Models (LLMs).
