
    Visual Odometry Priors for robust EKF-SLAM

    File
    Alcantarilla10icra1.pdf (769.6 KB)
    Date
    2010
    Author
    Alcantarilla, Pablo F.
    Bergasa, Luis Miguel
    Dellaert, Frank
    Abstract
    One of the main drawbacks of standard visual EKF-SLAM techniques is the assumption of a general camera motion model, usually implemented in the literature as a constant linear and angular velocity model. Because of this, most approaches cannot deal with sudden camera movements: they lose an accurate estimate of the camera pose, which in turn corrupts the 3D scene map. In this work we propose increasing the robustness of EKF-SLAM techniques by replacing this general motion model with a visual odometry prior, which provides a real-time relative pose estimate by tracking many hundreds of features from frame to frame. We perform fast pose estimation using the two-stage RANSAC-based approach from [1]: a two-point algorithm for rotation followed by a one-point algorithm for translation. We then integrate the estimated relative pose into the prediction step of the EKF. In the measurement update step, we incorporate only a much smaller number of landmarks into the 3D map in order to maintain real-time operation. Incorporating the visual odometry prior into the EKF process yields better and more robust localization and mapping results than the constant linear and angular velocity model. Our experimental results, using a hand-held stereo camera as the only sensor, clearly show the benefits of our method over the standard constant velocity model.
    URI
    http://hdl.handle.net/1853/38308
    Collections
    • Computational Perception & Robotics [213]
    • Computational Perception & Robotics Publications [213]
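
    Editor's note: the abstract describes feeding a visual-odometry relative pose into the prediction step of the EKF in place of a constant-velocity motion model. The NumPy sketch below illustrates that idea for a simplified planar (x, y, theta) camera pose rather than the paper's full 6-DoF stereo formulation; the function name, state layout, and noise model are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def ekf_predict_with_vo_prior(x, P, delta_pose, Q_vo):
        """EKF prediction using a visual-odometry relative pose prior.

        x          : state vector [cam_x, cam_y, cam_theta, landmarks...]
        P          : state covariance (n x n)
        delta_pose : (dx, dy, dtheta) relative motion from visual odometry,
                     expressed in the current camera frame
        Q_vo       : 3x3 covariance of the visual-odometry estimate
        """
        cx, cy, ct = x[0], x[1], x[2]
        dx, dy, dt = delta_pose

        # Compose the current camera pose with the VO relative motion;
        # landmarks are static, so they are unchanged by prediction.
        x_pred = x.copy()
        x_pred[0] = cx + np.cos(ct) * dx - np.sin(ct) * dy
        x_pred[1] = cy + np.sin(ct) * dx + np.cos(ct) * dy
        x_pred[2] = (ct + dt + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)

        n = x.shape[0]

        # Jacobian of the motion model w.r.t. the full state
        # (identity for the landmark block).
        F = np.eye(n)
        F[0, 2] = -np.sin(ct) * dx - np.cos(ct) * dy
        F[1, 2] =  np.cos(ct) * dx - np.sin(ct) * dy

        # Jacobian w.r.t. the VO measurement; maps Q_vo into state space.
        G = np.zeros((n, 3))
        G[0, 0], G[0, 1] = np.cos(ct), -np.sin(ct)
        G[1, 0], G[1, 1] = np.sin(ct),  np.cos(ct)
        G[2, 2] = 1.0

        P_pred = F @ P @ F.T + G @ Q_vo @ G.T
        return x_pred, P_pred

    Because the VO prior reflects the camera's actual frame-to-frame motion, the predicted covariance grows only by the (typically small) odometry uncertainty Q_vo, rather than by the large process noise a constant-velocity model needs in order to absorb sudden movements; this is the intuition behind the improved robustness described in the abstract.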
