
AI Breakdown
Arxiv paper - Reinforcement Learning for Reasoning in Large Language Models with One Training Example
May 8, 2025
07:19
In this episode, we discuss Reinforcement Learning for Reasoning in Large Language Models with One Training Example by Yiping Wang, Qing Yang, Zhiyuan Zeng, Liliang Ren, Lucas Liu, Baolin Peng, Hao Cheng, Xuehai He, Kuan Wang, Jianfeng Gao, Weizhu Chen, Shuohang Wang, Simon Shaolei Du, Yelong Shen. The paper shows that reinforcement learning with verifiable reward (RLVR) using only one or two training examples (1-shot RLVR) substantially improves mathematical reasoning in large language models, nearly doubling performance on benchmarks such as MATH500. The method generalizes across models, RL algorithms, and choices of training example, and exhibits distinctive phenomena, including post-saturation generalization, the central role of the policy gradient loss, and the benefit of explicitly encouraging exploration. The authors release open-source code and data, highlighting the potential of more data-efficient RLVR approaches for improving LLM capabilities.
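To make the core idea concrete, here is a minimal, self-contained sketch of the RLVR recipe the summary describes: a policy-gradient update driven by a binary verifiable reward on a single example, plus an entropy bonus to encourage exploration. The toy policy, vocabulary size, reward function, and hyperparameters are all illustrative assumptions, not the authors' actual implementation (their code operates on full LLMs).

```python
# Sketch of 1-shot RLVR: policy gradient + verifiable reward + entropy bonus.
# Everything below (toy policy, VOCAB, ANSWER, coefficients) is a hypothetical
# stand-in for an LLM setup, chosen only to make the mechanism runnable.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

VOCAB = 16           # toy vocabulary size (assumption)
ANSWER = 7           # token id the verifier accepts as correct (assumption)
ENTROPY_COEF = 0.01  # weight of the exploration bonus (assumption)

# Toy "policy": a single vector of logits standing in for an LLM's output head.
logits = torch.zeros(VOCAB, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

def verifiable_reward(token: int) -> float:
    """Binary verifiable reward: 1 if the sampled answer checks out, else 0."""
    return 1.0 if token == ANSWER else 0.0

for step in range(200):
    probs = F.softmax(logits, dim=-1)
    dist = torch.distributions.Categorical(probs)
    action = dist.sample()                     # sample an "answer" token
    reward = verifiable_reward(action.item())  # verifier checks correctness

    # REINFORCE-style policy gradient loss on the single training example,
    # minus an entropy term that keeps exploration alive after saturation.
    pg_loss = -dist.log_prob(action) * reward
    loss = pg_loss - ENTROPY_COEF * dist.entropy()

    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"P(correct answer) after training: {F.softmax(logits, dim=-1)[ANSWER]:.3f}")
```

Even in this toy form, the two ingredients the paper highlights are visible: the policy gradient term does the learning, and the entropy term keeps the policy from collapsing so it can keep discovering the rewarded answer.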