dc.contributor.author: Roberts, Richard
dc.contributor.author: Potthast, Christian
dc.contributor.author: Dellaert, Frank
dc.date.accessioned: 2011-03-30T20:49:23Z
dc.date.available: 2011-03-30T20:49:23Z
dc.date.issued: 2009
dc.identifier.citation: Roberts, R., Potthast, C., & Dellaert, F. (2009). "Learning General Optical Flow Subspaces for Egomotion Estimation and Detection of Motion Anomalies." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, 57-64.
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/1853/38341
dc.description: ©2009 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
dc.description: Presented at the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 20-25 June 2009, Miami, FL.
dc.description: DOI: 10.1109/CVPR.2009.5206538
dc.description.abstract: This paper deals with estimation of dense optical flow and ego-motion in a generalized imaging system by exploiting probabilistic linear subspace constraints on the flow. We deal with the extended motion of the imaging system through an environment that we assume to have some degree of statistical regularity. For example, in autonomous ground vehicles the structure of the environment around the vehicle is far from arbitrary, and the depth at each pixel is often approximately constant. The subspace constraints hold not only for perspective cameras, but in fact for a very general class of imaging systems, including catadioptric and multiple-view systems. Using minimal assumptions about the imaging system, we learn a probabilistic subspace constraint that captures the statistical regularity of the scene geometry relative to an imaging system. We propose an extension to probabilistic PCA (Tipping and Bishop, 1999) as a way to robustly learn this subspace from recorded imagery, and demonstrate its use in conjunction with a sparse optical flow algorithm. To deal with the sparseness of the input flow, we use a generative model to estimate the subspace using only the observed flow measurements. Additionally, to identify and cope with image regions that violate subspace constraints, such as moving objects, objects that violate the depth regularity, or gross flow estimation errors, we employ a per-pixel Gaussian mixture outlier process. We demonstrate results of finding the optical flow subspaces and employing them to estimate dense flow and to recover camera motion for a variety of imaging systems in several different environments.
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Egomotion estimation
dc.subject: Images
dc.subject: Linear subspace
dc.subject: Optical flow estimation
dc.subject: Probabilistic PCA
dc.title: Learning General Optical Flow Subspaces for Egomotion Estimation and Detection of Motion Anomalies
dc.type: Post-print
dc.type: Proceedings
dc.contributor.corporatename: Georgia Institute of Technology. Center for Robotics and Intelligent Machines
dc.contributor.corporatename: Georgia Institute of Technology. College of Computing
dc.publisher.original: Institute of Electrical and Electronics Engineers
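
The core idea in the abstract, a linear subspace constraint on stacked optical flow fields learned in a probabilistic-PCA style and then used to recover dense flow from sparse measurements, can be sketched briefly. The code below is an illustrative approximation rather than the authors' implementation: it uses plain SVD-based PCA in place of the paper's robust probabilistic PCA extension, omits the per-pixel Gaussian mixture outlier process, and the function names, array shapes, and least-squares densification step are assumptions of this sketch.

    # Minimal sketch of a flow-subspace constraint (not the authors' method).
    import numpy as np

    def learn_flow_subspace(flows, k):
        """flows: (n_frames, 2*n_pixels) array of stacked (u, v) flow fields.
        Returns the mean flow and a basis of the top-k principal directions."""
        mean = flows.mean(axis=0)
        # Plain SVD-based PCA; the paper instead extends probabilistic PCA
        # (Tipping and Bishop, 1999) to learn the subspace robustly from
        # sparse, outlier-contaminated flow measurements.
        _, _, vt = np.linalg.svd(flows - mean, full_matrices=False)
        return mean, vt[:k].T          # basis W has shape (2*n_pixels, k)

    def densify_flow(sparse_flow, observed, mean, W):
        """Recover a dense flow field from measurements at the `observed`
        indices by least-squares projection onto the learned subspace."""
        z, *_ = np.linalg.lstsq(W[observed], sparse_flow - mean[observed],
                                rcond=None)
        return mean + W @ z            # dense flow consistent with the subspace

    # Hypothetical usage:
    #   mean, W = learn_flow_subspace(training_flows, k=9)
    #   dense   = densify_flow(flow_at_tracked_pixels, tracked_idx, mean, W)

In the paper's full model, the subspace is instead estimated with a generative model that uses only the observed flow measurements, and regions that violate the subspace constraint (moving objects, depth irregularities, gross flow errors) are handled by the per-pixel Gaussian mixture outlier process rather than ordinary least squares.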

