Thin Lens-Based Geometric Surface Inversion for Multiview Stereo
Friedlander, Robert Daniel
Current state-of-the-art multiview reconstruction methods are founded on a pinhole camera model that assumes perfectly focused images, and they therefore fail when given defocused image data. To overcome this, a fully generative algorithm for reconstructing dense three-dimensional shapes under varying viewpoints and levels of focus is developed using a thin lens model, which accurately captures defocus blur. While easily stated, this goal requires a significant mathematical reformulation from the bottom up, as the simple perspective projection assumed by the pinhole model and used by current methods no longer applies under the more general thin lens model.

New expressions are developed for both the forward modeling of image formation and model inversion. For the former, image irradiance is related to scene radiance through energy conservation, and the resulting integral expression has a closed-form solution for in-focus points that is shown to be more general and accurate than the one used in current methods. For the latter, the sensitivities of image irradiance to perturbations in both the scene radiance and the geometry are analyzed, and the necessary gradient descent evolution equations are extracted from these sensitivities.

A variational surface evolution algorithm is then formed in which image estimates generated by the thin lens forward model are compared to the actual measured images, and the resulting pixel-wise error drives the evolution equations that update the surface shape and scene radiance estimates. The algorithm is experimentally validated for the case of piecewise-constant scene radiance on both computer-generated and real images; the new method accurately reconstructs sharp object features from even severely defocused images and is more robust to noise than pinhole-based methods.
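The analysis-by-synthesis loop described above can be illustrated with a deliberately simplified sketch. This is not the thesis's algorithm: the exact irradiance integral is replaced by a separable box blur, the evolving surface is frozen at a single known depth, and the estimate being refined is one piecewise-constant radiance value. The thin-lens geometry (image distance v = fz/(z - f), blur disc proportional to |v - v_focus|) is standard; all numeric parameter values, the pixel scale factor, and the function names are hypothetical.

```python
import numpy as np

def blur_radius(z, z_focus, f=0.05, aperture=0.02):
    """Thin-lens circle-of-confusion radius for a point at depth z when
    the camera is focused at z_focus (toy parameter values)."""
    v_focus = f * z_focus / (z_focus - f)  # sensor distance for the focus plane
    v = f * z / (z - f)                    # image distance for the point
    return 0.5 * aperture * abs(v - v_focus) / v_focus

def box_blur(img, radius_px):
    """Crude separable box blur; a stand-in for the exact thin-lens
    irradiance integral derived in the text."""
    k = 2 * int(radius_px) + 1
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'same'), 0, out)

# Analysis-by-synthesis: render an image estimate with the forward model,
# compare it to the measured image, and let the pixel-wise error drive a
# gradient descent update of the radiance estimate.
rng = np.random.default_rng(0)
true_radiance = 0.7
sigma_px = 1e5 * blur_radius(z=3.0, z_focus=2.0)  # blur in pixels (toy scale)
measured = box_blur(np.full((32, 32), true_radiance), sigma_px)
measured += 0.01 * rng.standard_normal(measured.shape)  # sensor noise

r = 0.2  # initial radiance estimate
for _ in range(50):
    rendered = box_blur(np.full((32, 32), r), sigma_px)  # forward model
    residual = rendered - measured                       # pixel-wise error
    r -= 0.5 * residual.mean()                           # descent step
print(round(r, 2))  # recovers a value close to 0.7
```

Because the same (linear) forward operator is applied to both the estimate and the ground truth, the residual shrinks geometrically and the estimate converges despite the severe blur, mirroring the robustness argument made for the full method.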