RoboPapers

Ep#4: Vision Language Models are In-Context Value Learners

Apr 8, 2025
In this engaging discussion, Jason Ma, a final-year PhD student at the University of Pennsylvania, shares his insights on Vision Language Models and their role in enhancing robotic performance. The conversation covers groundbreaking methodologies for tracking robotic task progress and evaluates the significance of high-quality datasets in imitation learning. They also explore challenges like negative correlations in trajectories and examine how self-supervised learning can optimize robotic systems. Tune in for fascinating perspectives on the future of robotics and automation!