Synchronized Multi-Frame Diffusion for Temporally Consistent Video Stylization

Date
2025
Journal Title
Computer Graphics Forum
Journal ISSN
1467-8659
Publisher
The Eurographics Association and John Wiley & Sons Ltd.
Abstract
Text-guided video-to-video stylization transforms the visual appearance of a source video into a different appearance guided by textual prompts. Existing text-guided image diffusion models can be extended to stylized video synthesis, but they struggle to generate videos with both highly detailed appearance and temporal consistency. In this paper, we propose a synchronized multi-frame diffusion framework that maintains both visual detail and temporal consistency. Frames are denoised in a synchronous fashion and, more importantly, information is shared across frames from the very beginning of the denoising process. This information sharing ensures that a consensus among frames, in terms of overall structure and color distribution, is reached early in the denoising process, before the frames diverge irrecoverably. The optical flow of the original video serves as the connection among frames, and hence as the venue for information sharing. Extensive experiments demonstrate that our method generates high-quality and diverse results, and it shows superior qualitative and quantitative performance compared to state-of-the-art video editing methods.
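To make the mechanism concrete, the sketch below shows one way such a synchronized, flow-guided denoising loop could be organized. It is a minimal illustration under stated assumptions, not the authors' implementation: the names warp, denoise_step, synchronized_denoise, and fuse_weight are hypothetical, optical flows are assumed to be precomputed from the source video, and the per-frame denoiser is reduced to a stub standing in for a diffusion-model update.

# Minimal sketch (not the authors' code) of synchronized multi-frame denoising
# with optical-flow-guided information sharing between neighbouring frames.
import torch
import torch.nn.functional as F

def warp(latent, flow):
    """Warp a latent (1, C, H, W) with a backward pixel-space flow (1, 2, H, W),
    where channel 0 is the x-displacement and channel 1 the y-displacement."""
    _, _, h, w = latent.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float() + flow[0].permute(1, 2, 0)
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0  # normalize x to [-1, 1]
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0  # normalize y to [-1, 1]
    return F.grid_sample(latent, grid.unsqueeze(0), align_corners=True)

def denoise_step(latent, t):
    """Placeholder for one reverse-diffusion step on a single frame's latent."""
    return latent - 0.01 * torch.randn_like(latent)  # stands in for the model update

def synchronized_denoise(latents, flows, num_steps=50, fuse_weight=0.5):
    """Denoise all frames in lockstep, sharing information between neighbours
    through flow-guided warping at every timestep, starting from the first one."""
    for t in range(num_steps, 0, -1):
        # 1) one synchronous denoising step per frame
        latents = [denoise_step(z, t) for z in latents]
        # 2) fuse each frame with its flow-warped predecessor so a consensus on
        #    structure and colour can emerge early in the denoising process
        fused = [latents[0]]
        for i in range(1, len(latents)):
            warped_prev = warp(fused[i - 1], flows[i - 1])
            fused.append((1 - fuse_weight) * latents[i] + fuse_weight * warped_prev)
        latents = fused
    return latents

# Toy usage: four 64x64 latent frames with zero flow between consecutive frames.
frames = [torch.randn(1, 4, 64, 64) for _ in range(4)]
zero_flows = [torch.zeros(1, 2, 64, 64) for _ in range(3)]
stylized = synchronized_denoise(frames, zero_flows)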
CCS Concepts: Computing methodologies → Computer vision tasks; Image processing

        
Citation

@article{10.1111:cgf.70095,
  journal   = {Computer Graphics Forum},
  title     = {{Synchronized Multi-Frame Diffusion for Temporally Consistent Video Stylization}},
  author    = {Xie, Minshan and Liu, Hanyuan and Li, Chengze and Wong, Tien-Tsin},
  year      = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.70095}
}