
Sergey Levine
Associate Professor at UC Berkeley, focusing on research in deep robotic learning and machines that learn continuously from real-world experience.
Top 5 podcasts with Sergey Levine
Ranked by the Snipd community

80 snips
Feb 18, 2025 • 53min
π0: A Foundation Model for Robotics with Sergey Levine - #719
In this discussion, Sergey Levine, an associate professor at UC Berkeley and co-founder of Physical Intelligence, dives into π0, a groundbreaking general-purpose robotic foundation model. He explains its innovative architecture, which combines a vision-language model with a novel action expert. The conversation touches on the critical balance of training data, the significance of open-sourcing, and impressive robot capabilities such as folding laundry. Levine also highlights the exciting future of affordable robotics and the potential for diverse applications.

28 snips
Mar 17, 2024 • 43min
#176 Sergey Levine: Decoding The Evolution of AI in Robotics
Discover the latest advancements in AI-controlled robots with Sergey Levine, exploring reinforcement learning and embodied AI. Learn about the RT-X project, which enhances robots' ability to perform diverse tasks. Dive into the intersection of AI and robotics and the quest for adaptable machines that are revolutionizing technology.

27 snips
Jan 16, 2023 • 60min
AI Trends 2023: Reinforcement Learning - RLHF, Robotic Pre-Training, and Offline RL with Sergey Levine - #612
Sergey Levine, an associate professor at UC Berkeley, dives into cutting-edge advancements in reinforcement learning. He explores the impact of RLHF on language models and discusses innovations in offline RL and robotics. They also examine how language models can enhance diplomatic strategies and tackle ethical concerns. Sergey sheds light on manipulation in RL and the challenges of integrating robots with language models, and he offers exciting predictions for 2023's developments. This is a must-listen for anyone interested in the future of AI!

17 snips
Mar 1, 2023 • 1h 35min
Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems
Sergey Levine, an assistant professor of EECS at UC Berkeley, is one of the pioneers of modern deep reinforcement learning. His research focuses on developing general-purpose algorithms that enable autonomous agents to learn to solve any task. In this episode, we talk about the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems.

9 snips
Mar 9, 2020 • 43min
Advancements in Machine Learning with Sergey Levine - #355
In this episode, Sergey Levine, Assistant Professor at UC Berkeley and expert in deep robotic learning, shares insights from his latest research. He discusses how machines can learn continuously from real-world experiences, emphasizing the importance of integrating reinforcement learning with traditional planning. The conversation delves into causality in imitation learning, highlighting its impact on systems like autonomous vehicles. Sergey also navigates the complexities of model-based versus model-free reinforcement learning, shedding light on the importance of parameterization in deep learning.