The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Advancements in Machine Learning with Sergey Levine - #355

Mar 9, 2020
In this episode, Sergey Levine, Assistant Professor at UC Berkeley and a leading researcher in deep robotic learning, shares insights from his recent research. He discusses how machines can learn continuously from real-world experience and why reinforcement learning should be integrated with traditional planning. The conversation also covers causality in imitation learning and its implications for systems like autonomous vehicles, the trade-offs between model-based and model-free reinforcement learning, and the role of parameterization in deep learning.
AI Snips
INSIGHT

Planning and Learning

  • Combine model-free reinforcement learning with planning to achieve better outcomes than either approach alone (see the sketch below).
  • Learned behaviors provide abstractions for planning, enabling more complex tasks.
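As a rough illustration of this idea (not the specific method discussed in the episode), the sketch below plans over a handful of skill policies that are assumed to have been trained with a model-free algorithm such as SAC. The environment interface (env.observe(), env.step(), a deep-copyable simulator state) and the skill objects are hypothetical placeholders.

```python
import copy
import itertools

def rollout_skill(env, skill, horizon=20):
    # Execute one learned skill (a model-free RL policy) for a fixed horizon
    # and return the reward it collects. env/skill interfaces are hypothetical.
    total, obs = 0.0, env.observe()
    for _ in range(horizon):
        obs, reward, done = env.step(skill.act(obs))
        total += reward
        if done:
            break
    return total

def plan_over_skills(env, skills, depth=3):
    # Treat each learned skill as a temporally extended "macro-action" and
    # exhaustively search short skill sequences on a copied simulator state.
    # Because the skills abstract away low-level control, the search tree stays small.
    best_seq, best_return = None, float("-inf")
    for seq in itertools.product(range(len(skills)), repeat=depth):
        sim = copy.deepcopy(env)  # lookahead without disturbing the real env
        ret = sum(rollout_skill(sim, skills[i]) for i in seq)
        if ret > best_return:
            best_seq, best_return = seq, ret
    return best_seq  # indices of the skill sequence to execute
```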
INSIGHT

Abstractions for Planning

  • Model-based planning relies on learned predictive models, while learned abstractions offer a different approach (contrasted in the sketch below).
  • Learned abstractions simplify planning by lifting it away from low-level physical grounding.
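To make the contrast concrete, here is a minimal random-shooting planner that scores candidate action sequences in a learned abstract (latent) space rather than simulating low-level physics. The functions encode, latent_dynamics, and latent_reward stand in for learned components; they are assumptions for illustration, not an API from the episode.

```python
import numpy as np

def plan_in_latent_space(obs, action_dim, encode, latent_dynamics, latent_reward,
                         horizon=15, n_candidates=256):
    # Random-shooting MPC, but rollouts happen in a learned abstract state space:
    # encode() maps the raw observation to a latent state, latent_dynamics()
    # predicts the next latent state, latent_reward() scores it.
    # All three are hypothetical learned functions.
    z0 = encode(obs)
    candidates = np.random.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    for i, actions in enumerate(candidates):
        z = z0
        for a in actions:
            z = latent_dynamics(z, a)         # predict the next abstract state
            returns[i] += latent_reward(z)    # evaluate without low-level physics
    return candidates[np.argmax(returns), 0]  # first action of the best sequence
```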
INSIGHT

Hierarchical Learning

  • Hierarchical learning simplifies the higher-level problem by abstracting over behaviors and states (a schematic example follows this list).
  • Bottom-up skill discovery works better than top-down task decomposition.
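A schematic of the two-level structure this insight describes, under the assumption that a set of skills has already been discovered bottom-up: a high-level policy re-selects a skill every k steps, and the active skill produces the low-level actions. The policy and environment objects are placeholders, not code from the episode.

```python
def run_hierarchical_episode(env, high_level, skills, k=25, max_steps=1000):
    # Two timescales: the high-level policy acts over abstract behaviors
    # (choosing which discovered skill to run), while the chosen skill acts
    # over raw motor commands. All interfaces here are hypothetical.
    obs = env.reset()
    total_reward, active_skill = 0.0, None
    for t in range(max_steps):
        if t % k == 0:                        # coarse timescale: pick a skill
            active_skill = skills[high_level.act(obs)]
        action = active_skill.act(obs)        # fine timescale: low-level action
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```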