Robotics: Science and Systems XIV

View Selection with Geometric Uncertainty Modeling

Cheng Peng, Volkan Isler


Estimating the positions of world points from features observed in images is a key problem in 3D reconstruction, image mosaicking, simultaneous localization and mapping (SLAM), and structure from motion. We consider a special instance in which there is a dominant ground plane G viewed from a parallel viewing plane S above it. Such instances commonly arise, for example, in aerial photography. Consider a world point g in G and its worst-case reconstruction uncertainty epsilon(g, S) obtained by merging all possible views of g chosen from S. We first show that one can pick two views s_p and s_q such that the uncertainty epsilon(g, {s_p, s_q}) obtained using only these two views is almost as good as (i.e., within a small constant factor of) epsilon(g, S). Next, we extend this result to the entire ground plane G and show that one can pick a small subset S' of S (which grows only linearly with the area of G) and still obtain, for every point g in G, a constant-factor approximation to the minimum worst-case estimate obtained by merging all views in S. Finally, we present a multi-resolution view selection method which extends our techniques to non-planar scenes. We show that the method can produce rich and accurate dense reconstructions with a small number of views. Our results provide a view selection mechanism with provable performance guarantees which can drastically increase the speed of scene reconstruction algorithms. In addition to the theoretical results, we demonstrate their effectiveness in an application where aerial imagery is used for monitoring farms and orchards.
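The two-view claim can be illustrated with a simple Monte Carlo sketch. This is not the paper's actual uncertainty model; it is a hypothetical 2D stand-in in which each view constrains the ground point g to a bearing cone of half-angle DELTA (an assumed angular error) around its viewing ray, and the worst-case uncertainty of a view set is the farthest admissible point from g in the intersection of those cones. Merging the two best views then lands within a small constant factor of merging every view on the viewing line:

```python
import itertools

import numpy as np

# Hypothetical 2D model: ground point g observed from candidate views
# on a line S at height h. A view s admits a point p if the angle
# between rays s->p and s->g is at most DELTA; the worst-case
# uncertainty of a view subset is the farthest admitted sample from g.

DELTA = 0.01  # assumed angular measurement error (radians)

def worst_case_uncertainty(views, g, pts):
    """Farthest sample point from g that every view's cone admits."""
    ok = np.ones(len(pts), dtype=bool)
    for s in views:
        ray = g - s
        to_pts = pts - s
        cos = (to_pts @ ray) / (np.linalg.norm(to_pts, axis=1)
                                * np.linalg.norm(ray))
        ok &= cos >= np.cos(DELTA)
    return np.linalg.norm(pts[ok] - g, axis=1).max()

h = 10.0
g = np.array([0.0, 0.0])
S = [np.array([x, h]) for x in np.linspace(-20.0, 20.0, 21)]

# Monte Carlo samples around g, shared by all evaluations so that
# every pair's admissible set is a superset of the full set's.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(40000, 2))

eps_all = worst_case_uncertainty(S, g, pts)        # merge every view
eps_pair = min(worst_case_uncertainty(p, g, pts)   # best single pair
               for p in itertools.combinations(S, 2))
print(eps_pair / eps_all)  # a small constant, not something growing with |S|
```

Because the cone intersection for any pair contains the intersection for all of S, the ratio is at least 1; the point of the experiment is that it stays a small constant rather than growing with the number of views.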



@INPROCEEDINGS{Peng-RSS-18, 
    AUTHOR    = {Cheng Peng AND Volkan Isler}, 
    TITLE     = {View Selection with Geometric Uncertainty Modeling}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2018}, 
    ADDRESS   = {Pittsburgh, Pennsylvania}, 
    MONTH     = {June}, 
    DOI       = {10.15607/RSS.2018.XIV.025} 
}