An image-based approach for 3D reconstruction of urban scenes using architectural symmetries
In this dissertation, I focus on an important, generalizable, and freely available sub-category of semantic information for addressing modern reconstruction challenges: the notion of symmetry. The emphasis in the 3D modeling of urban scenes has shifted in the past decade: the current goal of the reconstruction community is to provide dense, CAD-like representations of the 3D built environment. My thesis makes five main contributions that exploit symmetry in 3D scenes to advance this agenda.

First, I provide a framework for modeling complex symmetries in 1D, 2D, and 3D, built upon the mathematical theory of symmetry and lattices, with an emphasis on how this theory applies to urban scenes. Drawing largely on crystallographic theory, but also considering lattice-free symmetries, my intent was to create a firm basis for computer vision applications in my thesis and beyond.

Second, I develop a probabilistic modeling framework based on Bayesian networks that provides a set of generative models for symmetric scenes. Doing so allows us to exploit the physical meaning of the variables generating the symmetries to form probabilistic priors. It also serves as the foundation of a factor-graph framework for optimizing the parameters of symmetry and for their subsequent use in 3D reconstruction methods.

Third, I provide a novel voting scheme in a polar transformation space to determine the lattice parameters of symmetric scenes that have a lattice-like structure. I demonstrate that my algorithm is more robust to variations in the quality of the point clouds generated by Structure from Motion (SfM) algorithms than state-of-the-art techniques that vote in a Cartesian coordinate space.

Fourth, by exploiting the full generative model introduced earlier, I show that one can obtain a detailed 3D reconstruction from a single image of a building or structure while simultaneously determining the camera parameters.
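To make the lattice-parameter voting idea concrete, the following is a minimal hypothetical sketch, not the dissertation's actual algorithm: it recovers the period of a 1D repetition (e.g. facade elements along one axis) from noisy feature positions by scoring candidate spacings with a voting-style criterion. The function name `estimate_period` and the clustering score are illustrative assumptions.

```python
import numpy as np

def estimate_period(positions, periods):
    """Score each candidate period by how tightly the positions
    cluster modulo that period; return the best-scoring candidate.
    A Hough-like vote: each position contributes a unit phasor."""
    positions = np.asarray(positions, dtype=float)
    best_period, best_score = None, -np.inf
    for p in periods:
        phase = 2 * np.pi * positions / p
        # Resultant length of the phases on the unit circle: near 1
        # when all positions agree modulo p, near 0 otherwise.
        score = np.abs(np.mean(np.exp(1j * phase)))
        if score > best_score:
            best_period, best_score = p, score
    return best_period

# Facade elements repeating every 2.5 units, perturbed by noise
# (a stand-in for a noisy SfM point cloud projected onto one axis).
rng = np.random.default_rng(0)
pts = 2.5 * np.arange(10) + rng.normal(0.0, 0.05, 10)
p_hat = estimate_period(pts, np.linspace(1.0, 5.0, 401))
```

The vote is over a transformed (phase) space rather than raw Cartesian offsets, which is what gives this family of methods its robustness to position noise.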
I then discuss how the generative model of symmetry interacts with the image projection space, and show how one can infer the complex 3D symmetry of a scene from 2D measurements in only a single image, extending previous work in this area to more general symmetries.

Finally, I address joint SfM and symmetry detection. SfM is currently an indispensable tool in computer vision, but it is still not sufficiently agnostic to the type of environment being reconstructed, and highly symmetric scenes present one such problem for modern SfM methods. I show that, instead of being a hindrance, symmetry can be a powerful constraint for producing dense and photo-realistic 3D models of the scene. In particular, I discuss three cases in which I use the other contributions of this thesis to advance the state of the art in 3D reconstruction to much more general symmetries than those hitherto considered.
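The constraint side of this argument can be illustrated with a small hypothetical sketch (again, not the thesis pipeline): once a lattice translation has been detected, a sparse SfM cloud covering one repetition of a motif can be densified by copying each point across all repetitions. The helper `densify_by_lattice` and the example translation vector are assumptions for illustration.

```python
import numpy as np

def densify_by_lattice(points, translation, n_repeats):
    """Replicate an N x 3 point cloud along a lattice translation
    vector, returning the stacked copies as one denser cloud."""
    points = np.asarray(points, dtype=float)
    t = np.asarray(translation, dtype=float)
    copies = [points + k * t for k in range(n_repeats)]
    return np.vstack(copies)

# Ten points reconstructed on one facade bay; the detected lattice
# says the bay repeats every 2.5 units along x, four times.
sparse = np.random.default_rng(1).uniform(0.0, 1.0, (10, 3))
dense = densify_by_lattice(sparse, translation=[2.5, 0.0, 0.0],
                           n_repeats=4)
# dense holds 40 points: the originals plus three translated copies.
```

In this reading, symmetry supplies "virtual observations" of parts of the structure the cameras never saw directly, which is exactly what lets a sparse reconstruction become dense.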