MoStGAN-V: Video Generation with Temporal Motion Styles

King Abdullah University of Science and Technology (KAUST), Saudi Arabia
CVPR 2023

Abstract

Video generation remains a challenging task due to spatiotemporal complexity and the need to synthesize diverse motions with temporal consistency. Previous works attempt to generate videos of arbitrary length either autoregressively or by treating time as a continuous signal. However, they struggle to synthesize detailed and diverse motions with temporal coherence and tend to produce repetitive scenes after a few time steps. In this work, we argue that a single time-agnostic latent vector in a style-based generator is insufficient to model diverse and temporally consistent motions. We therefore introduce additional time-dependent motion styles to model diverse motion patterns. In addition, we propose a Motion Style Attention modulation mechanism, dubbed MoStAtt, which augments frames with vivid dynamics at each scale (i.e., synthesis layer): it assigns an attention score to each motion style with respect to the deconvolution filter weights of the target synthesis layer and softly attends to the motion styles for weight modulation. Experimental results show that our model achieves state-of-the-art performance on four unconditional 256x256 video synthesis benchmarks while trained with only 3 frames per clip, and produces qualitatively better results with respect to dynamic motions.
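To make the MoStAtt description above concrete, below is a minimal, hypothetical PyTorch sketch of attention-based weight modulation with multiple motion styles. It is not the authors' implementation: the module name MoStAttModulationSketch, the query/key/value projections, the additive combination with the content style, and the use of a standard (rather than transposed) convolution are assumptions made for illustration; only the overall pattern, per-filter attention over motion styles followed by StyleGAN2-style weight modulation and demodulation, follows the abstract.

```python
# Hypothetical sketch (not the official MoStGAN-V code): attention over time-dependent
# motion styles used to modulate the filter weights of one synthesis layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoStAttModulationSketch(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int, style_dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        # Queries come from the flattened filter weights; keys/values from motion styles.
        self.to_q = nn.Linear(in_ch * kernel_size * kernel_size, style_dim)
        self.to_k = nn.Linear(style_dim, style_dim)
        self.to_v = nn.Linear(style_dim, in_ch)             # values live in modulation space
        self.content_affine = nn.Linear(style_dim, in_ch)   # usual style-to-modulation affine

    def forward(self, x, content_style, motion_styles):
        # x: (B, in_ch, H, W), content_style: (B, style_dim), motion_styles: (B, K, style_dim)
        B, _, H, W = x.shape
        out_ch, in_ch, k, _ = self.weight.shape

        # Attention score of every output filter w.r.t. every motion style.
        q = self.to_q(self.weight.view(out_ch, -1))               # (out_ch, style_dim)
        keys = self.to_k(motion_styles)                           # (B, K, style_dim)
        attn = torch.einsum('od,bkd->bok', q, keys) / q.shape[-1] ** 0.5
        attn = attn.softmax(dim=-1)                               # soft attention over K motion styles

        # Per-filter motion modulation, added on top of the content-style modulation.
        values = self.to_v(motion_styles)                         # (B, K, in_ch)
        motion_mod = torch.einsum('bok,bki->boi', attn, values)   # (B, out_ch, in_ch)
        mod = self.content_affine(content_style)[:, None, :] + motion_mod

        # StyleGAN2-style weight modulation + demodulation, then per-sample grouped conv.
        w = self.weight[None] * mod[..., None, None]              # (B, out_ch, in_ch, k, k)
        demod = torch.rsqrt((w ** 2).sum(dim=(2, 3, 4)) + 1e-8)
        w = (w * demod[..., None, None, None]).view(B * out_ch, in_ch, k, k)
        out = F.conv2d(x.reshape(1, B * in_ch, H, W), w, padding=k // 2, groups=B)
        return out.view(B, out_ch, H, W)


# Usage: one 64->64 layer at 3x3, attending over 4 motion styles for the current frame.
layer = MoStAttModulationSketch(in_ch=64, out_ch=64, kernel_size=3, style_dim=128)
y = layer(torch.randn(2, 64, 16, 16), torch.randn(2, 128), torch.randn(2, 4, 128))
```

In this sketch each output filter attends independently over the K motion styles of the current frame, so different filters can pick up different motion patterns at the same time step; a real implementation would additionally handle the temporal sampling of motion styles and the transposed convolutions used in the synthesis network.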

Architecture Overview

CelebV-HQ 256x256

Video samples: Real Videos, MoCoGAN-HD, DIGAN, StyleGAN-V, MoStGAN-V (ours)

FaceForensics 256x256

Video samples: Real Videos, MoCoGAN-HD, DIGAN, StyleGAN-V, MoStGAN-V (ours)

SkyTimelapse 256x256

Video samples: Real Videos, MoCoGAN-HD, DIGAN, StyleGAN-V, MoStGAN-V (ours)

RainbowJelly 256x256

Video samples: Real Videos, MoCoGAN-HD, DIGAN, StyleGAN-V, MoStGAN-V (ours)

Horseback 256x256

Video samples: Real Videos, StyleGAN-V, MoStGAN-V (ours)

Diverse Videos

BibTeX

@inproceedings{shen2023mostgan,
  title={MoStGAN-V: Video Generation with Temporal Motion Styles},
  author={Shen, Xiaoqian and Li, Xiang and Elhoseiny, Mohamed},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5652--5661},
  year={2023}
}