Synchronized Multi-Frame Diffusion for Temporally Consistent Video Stylization
dc.contributor.author | Xie, Minshan | en_US |
dc.contributor.author | Liu, Hanyuan | en_US |
dc.contributor.author | Li, Chengze | en_US |
dc.contributor.author | Wong, Tien-Tsin | en_US |
dc.contributor.editor | Bousseau, Adrien | en_US |
dc.contributor.editor | Day, Angela | en_US |
dc.date.accessioned | 2025-05-09T09:16:59Z | |
dc.date.available | 2025-05-09T09:16:59Z | |
dc.date.issued | 2025 | |
dc.description.abstract | Text-guided video-to-video stylization transforms the visual appearance of a source video into a different appearance guided by textual prompts. Existing text-guided image diffusion models can be extended for stylized video synthesis, but they struggle to generate videos with both highly detailed appearance and temporal consistency. In this paper, we propose a synchronized multi-frame diffusion framework that maintains both visual detail and temporal consistency. Frames are denoised in a synchronized fashion and, more importantly, information is shared across frames from the very beginning of the denoising process. This information sharing ensures that a consensus on the overall structure and color distribution is reached among frames in the early stage of denoising, before the frames diverge irrecoverably. The optical flow of the original video serves as the connection among frames, and hence as the venue for information sharing. Extensive experiments demonstrate the effectiveness of our method in generating high-quality and diverse results, with superior qualitative and quantitative performance compared to state-of-the-art video editing methods. | en_US |
dc.description.number | 2 | |
dc.description.sectionheaders | The Artful Edit: Stylization and Editing for Images and Video | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 44 | |
dc.identifier.doi | 10.1111/cgf.70095 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 12 pages | |
dc.identifier.uri | https://doi.org/10.1111/cgf.70095 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.1111/cgf70095 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | CCS Concepts: Computing methodologies → Computer vision tasks; Image processing | |
dc.title | Synchronized Multi-Frame Diffusion for Temporally Consistent Video Stylization | en_US |
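The abstract above outlines the core mechanism: all frames are denoised in lockstep at the same timestep, and after each step their latents are fused across frames via optical-flow warping, so that structure and color agree early in the process. Below is a minimal sketch of that loop in a PyTorch-style setting. The names (warp, denoise_step, synchronized_denoise), the blending weight alpha, and the flow indexing convention are all illustrative assumptions for this sketch, not the authors' actual implementation.

import torch
import torch.nn.functional as F

def warp(frame_latent, flow):
    """Backward-warp a latent map with a dense optical-flow field.
    frame_latent: (C, H, W); flow: (2, H, W) in pixel offsets (x, y)."""
    C, H, W = frame_latent.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid_x = (xs + flow[0]) / (W - 1) * 2 - 1  # normalize to [-1, 1]
    grid_y = (ys + flow[1]) / (H - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1).unsqueeze(0)  # (1, H, W, 2)
    return F.grid_sample(frame_latent.unsqueeze(0), grid,
                         align_corners=True).squeeze(0)

def synchronized_denoise(latents, flows, denoise_step, num_steps, alpha=0.5):
    """latents: list of per-frame noisy latents, each (C, H, W).
    flows[i]: flow field used to warp frame i-1's latent into frame i's view.
    denoise_step(z, t): one text-conditioned diffusion update (hypothetical)."""
    for t in reversed(range(num_steps)):
        # 1) Denoise every frame synchronously at the same timestep t.
        latents = [denoise_step(z, t) for z in latents]
        # 2) Share information across frames at every step, including the
        #    earliest ones: blend each latent with its flow-warped neighbor
        #    so frames reach consensus on structure and color distribution.
        fused = [latents[0]]
        for i in range(1, len(latents)):
            prev_warped = warp(fused[i - 1], flows[i])
            fused.append(alpha * latents[i] + (1 - alpha) * prev_warped)
        latents = fused
    return latents

The design point mirrored here is step 2: because the flow-based fusion runs at every denoising step rather than only at the end, frames commit to a shared overall layout while the latents are still coarse, before fine details are fixed.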