
Overview Video

Please watch the video here if YouTube doesn't work for you.

Abstract

We present an algorithm for generating novel views at arbitrary viewpoints and at any input time step given a monocular video of a dynamic scene. Our work builds upon recent advances in neural implicit representation and uses continuous and differentiable functions for modeling the time-varying structure and appearance of the scene. We jointly train a time-invariant static NeRF and a time-varying dynamic NeRF, and learn how to blend the results in an unsupervised manner. However, learning this implicit function from a single video is highly ill-posed (with infinitely many solutions that match the input video). To resolve the ambiguity, we introduce regularization losses to encourage a more physically plausible solution. We show extensive quantitative and qualitative results of dynamic view synthesis from casually captured videos.
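To make the blending idea above concrete, the sketch below shows one plausible way to composite a single ray from the per-sample colors and densities of a static and a dynamic NeRF using a per-sample blending weight. This is a minimal, illustrative example, not the authors' released code: the function name, tensor shapes, and the exact blending formula are assumptions made for illustration.

# Minimal sketch (illustrative, not the authors' implementation) of blending a
# static (s) and a dynamic (d) NeRF along one ray during volume rendering.
# The per-sample blending weight `b` is assumed to be predicted by the dynamic branch.
import torch

def blended_volume_render(rgb_s, sigma_s, rgb_d, sigma_d, b, deltas):
    """Composite one ray from static and dynamic per-sample predictions.

    rgb_s, rgb_d     : (N, 3) per-sample colors
    sigma_s, sigma_d : (N,)   per-sample densities
    b                : (N,)   blending weights in [0, 1] (1 = fully dynamic)
    deltas           : (N,)   distances between consecutive samples along the ray
    """
    # Blend densities, and blend density-weighted colors before compositing.
    sigma = b * sigma_d + (1.0 - b) * sigma_s
    rgb = (b[:, None] * sigma_d[:, None] * rgb_d
           + (1.0 - b)[:, None] * sigma_s[:, None] * rgb_s) / (sigma[:, None] + 1e-10)

    alpha = 1.0 - torch.exp(-sigma * deltas)                # per-sample opacity
    trans = torch.cumprod(                                  # transmittance up to each sample
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                 # compositing weights
    return (weights[:, None] * rgb).sum(dim=0)              # final ray color, shape (3,)

# Toy usage: random per-sample predictions stand in for the two networks' outputs.
N = 64
color = blended_volume_render(
    rgb_s=torch.rand(N, 3), sigma_s=torch.rand(N),
    rgb_d=torch.rand(N, 3), sigma_d=torch.rand(N),
    b=torch.rand(N), deltas=torch.full((N,), 0.03),
)
print(color)

In practice the blending weight would come from the dynamic network and, as the abstract notes, be learned without direct supervision; the random inputs here only demonstrate the compositing step.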

Paper


arXiv


Code


BibTex

@inproceedings{Gao-ICCV-DynNeRF,
  author    = {Gao, Chen and Saraf, Ayush and Kopf, Johannes and Huang, Jia-Bin},
  title     = {Dynamic View Synthesis from Dynamic Monocular Video},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
  year      = {2021}
}