Robotics: Science and Systems XVIII
FaDIV-Syn: Fast Depth-Independent View Synthesis using Soft Masks and Implicit Blending
Andre Rochow, Max Schwarz, Michael Weinmann, Sven Behnke

Abstract:
Novel view synthesis is required in many robotic applications, such as VR teleoperation and scene reconstruction. Existing methods are often too slow for these contexts, cannot handle dynamic scenes, and are limited by their explicit depth estimation stage, where incorrect depth predictions can lead to large projection errors. Our proposed method runs in real time on live streaming data and avoids explicit depth estimation by efficiently warping input images into the target frame for a range of assumed depth planes. The resulting plane sweep volume (PSV) is directly fed into our network, which first estimates soft PSV masks in a self-supervised manner and then directly produces the novel output view. This improves efficiency and performance on transparent, reflective, thin, and featureless scene parts. FaDIV-Syn can perform both interpolation and extrapolation tasks at 540p in real time and outperforms state-of-the-art extrapolation methods on the large-scale RealEstate10k dataset. We thoroughly evaluate ablations, such as removing the Soft-Masking network and training from fewer examples, as well as generalization to higher resolutions and stronger depth discretization. Our implementation is available.
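The plane sweep volume described above is built by inverse-warping each input image into the target camera once per assumed depth plane. The following is a minimal sketch of that standard construction, not the authors' implementation: the function name, the fronto-parallel plane assumption, and the inverse-depth sampling in the usage comment are illustrative assumptions.

```python
# Sketch: plane sweep volume (PSV) construction via plane-induced homographies.
# Assumptions (not from the paper's code): fronto-parallel depth planes defined
# in the target frame, OpenCV-style intrinsics, and inverse warping per plane.
import numpy as np
import cv2


def plane_sweep_volume(src_img, K_src, K_tgt, R_ts, t_ts, depths):
    """Warp src_img into the target view once per assumed depth plane.

    R_ts, t_ts: rigid transform taking target-camera coordinates to
    source-camera coordinates. depths: assumed plane depths in the target frame.
    Returns an array of shape (len(depths), H, W, C).
    """
    h, w = src_img.shape[:2]
    n = np.array([[0.0, 0.0, 1.0]])  # fronto-parallel plane normal in target frame
    slices = []
    for d in depths:
        # Homography induced by the plane z = d, mapping target pixels to source pixels.
        H_t2s = K_src @ (R_ts + (t_ts.reshape(3, 1) @ n) / d) @ np.linalg.inv(K_tgt)
        # WARP_INVERSE_MAP tells OpenCV to treat H_t2s as the dst->src mapping,
        # so the output is the source image as seen from the target camera
        # under the assumed depth plane.
        warped = cv2.warpPerspective(
            src_img, H_t2s, (w, h),
            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        slices.append(warped)
    return np.stack(slices, axis=0)


# Usage (illustrative): 32 planes sampled uniformly in inverse depth.
# depths = 1.0 / np.linspace(1.0, 1.0 / 100.0, 32)
# psv = plane_sweep_volume(img, K_src, K_tgt, R_ts, t_ts, depths)
```

In a pipeline like the one the abstract describes, the PSV slices from all input views would be concatenated and passed to the network, which weights them via soft masks and blends them into the novel view instead of committing to a single explicit depth estimate.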
Bibtex:
@INPROCEEDINGS{Rochow-RSS-22,
  AUTHOR    = {Andre Rochow AND Max Schwarz AND Michael Weinmann AND Sven Behnke},
  TITLE     = {{FaDIV-Syn: Fast Depth-Independent View Synthesis using Soft Masks and Implicit Blending}},
  BOOKTITLE = {Proceedings of Robotics: Science and Systems},
  YEAR      = {2022},
  ADDRESS   = {New York City, NY, USA},
  MONTH     = {June},
  DOI       = {10.15607/RSS.2022.XVIII.054}
}