In this episode, we discuss
Memory-V2V: Augmenting Video-to-Video Diffusion Models with Memory by Dohun Lee, Chun-Hao Paul Huang, Xuelin Chen, Jong Chul Ye, Duygu Ceylan, and Hyeonho Jeong. The paper addresses the challenge of maintaining cross-consistency in multi-turn video editing with video-to-video diffusion models: across repeated rounds of edits, results tend to drift away from earlier ones. It introduces Memory-V2V, a framework that equips existing models with an explicit memory in the form of an external cache of previously edited videos, enabling iterative editing with improved consistency across multiple rounds of user refinements.
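
To make the core idea concrete, here is a minimal sketch of what an external cache of past edits might look like. All of the names here (`EditMemoryCache`, `add`, `retrieve`) and the cosine-similarity retrieval are our own illustrative assumptions rather than the paper's actual interface; the sketch only shows how previously edited clips could be stored and then retrieved as conditioning references for the next editing round.

```python
import torch
import torch.nn.functional as F


class EditMemoryCache:
    """Hypothetical external memory of previously edited videos.

    Each completed editing round stores a feature embedding (key) and
    the edited video latents (value), so later rounds can retrieve
    similar past edits and condition on them for consistency.
    Illustrative only; the paper's memory design may differ.
    """

    def __init__(self, max_entries: int = 16):
        self.max_entries = max_entries
        self.keys: list[torch.Tensor] = []    # per-edit embeddings
        self.values: list[torch.Tensor] = []  # edited video latents

    def add(self, embedding: torch.Tensor, edited_latents: torch.Tensor) -> None:
        """Cache one editing round's result, evicting the oldest if full."""
        if len(self.keys) >= self.max_entries:
            self.keys.pop(0)
            self.values.pop(0)
        self.keys.append(F.normalize(embedding, dim=-1))
        self.values.append(edited_latents)

    def retrieve(self, query: torch.Tensor, top_k: int = 2) -> list[torch.Tensor]:
        """Return latents of the top-k past edits most similar to the query."""
        if not self.keys:
            return []
        q = F.normalize(query, dim=-1)
        sims = torch.stack([(q * k).sum() for k in self.keys])
        idx = sims.topk(min(top_k, len(self.keys))).indices
        return [self.values[i] for i in idx]


if __name__ == "__main__":
    cache = EditMemoryCache()
    # Simulate three past editing rounds with random embeddings/latents.
    for _ in range(3):
        cache.add(torch.randn(512), torch.randn(4, 16, 32, 32))
    # A new edit request retrieves similar past edits as references.
    refs = cache.retrieve(torch.randn(512), top_k=2)
    print(f"retrieved {len(refs)} reference edits for conditioning")
```

The retrieved latents would then be fed to the video-to-video diffusion model alongside the new edit prompt, so the next round stays consistent with what was produced before; how that conditioning is actually injected is the substance of the paper itself.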