Vision-based navigation and mapping for flight in GPS-denied environments
Wu, Allen David
Traditionally, the task of determining aircraft position and attitude for automatic control has been handled by the combination of an inertial measurement unit (IMU) with a Global Positioning System (GPS) receiver. In this configuration, accelerations and angular rates from the IMU can be integrated forward in time, and position updates from the GPS can be used to bound the errors that result from this integration. However, reliance on the reception of GPS signals places artificial constraints on aircraft such as small unmanned aerial vehicles (UAVs) that are otherwise physically capable of operation in indoor, cluttered, or adversarial environments. Therefore, this work investigates methods for incorporating a monocular vision sensor into a standard avionics suite. Vision sensors possess the potential to extract information about the surrounding environment and determine the locations of features or points of interest. Having mapped out landmarks in an unknown environment, subsequent observations by the vision sensor can in turn be used to resolve aircraft position and orientation while continuing to map out new features. An extended Kalman filter framework for performing the tasks of vision-based mapping and navigation is presented. Feature points are detected in each image using a Harris corner detector, and these feature measurements are corresponded from frame to frame using a statistical Z-test. When GPS is available, sequential observations of a single landmark point allow the point's location in inertial space to be estimated. When GPS is not available, landmarks that have been sufficiently triangulated can be used for estimating vehicle position and attitude. Simulation and real-time flight test results for vision-based mapping and navigation are presented to demonstrate feasibility in real-time applications. 
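The frame-to-frame correspondence step described above gates each candidate measurement against the filter's prediction. A minimal sketch of such a statistical gate, assuming a Mahalanobis-distance form with a chi-square acceptance threshold (the threshold value and function names here are illustrative, not taken from the thesis):

```python
import numpy as np

def z_test_gate(predicted, measured, S, threshold=9.21):
    """Statistical gate for frame-to-frame feature correspondence.

    predicted: predicted 2-D image location of a tracked feature
    measured:  candidate Harris-corner measurement in the new frame
    S:         2x2 innovation covariance from the filter
    threshold: chi-square value (2 dof, ~99%) -- an assumed setting
    """
    r = np.asarray(measured) - np.asarray(predicted)  # innovation
    d2 = r @ np.linalg.solve(S, r)                    # squared Mahalanobis distance
    return d2 < threshold
```

A candidate whose normalized innovation falls inside the gate is accepted as the same landmark; others are rejected or spawn new tracks.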
These methods are then integrated into a practical framework for flight in GPS-denied environments and verified through the autonomous flight of a UAV during a loss-of-GPS scenario. The methodology is also extended to the application of vehicles equipped with stereo vision systems. This framework enables aircraft capable of hovering in place to maintain a bounded pose estimate indefinitely without drift during a GPS outage.
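Estimating a landmark's inertial position from sequential bearing observations, as described above, amounts to intersecting the rays cast from two (or more) known camera poses. A minimal two-view sketch, assuming known camera positions and unit bearing vectors (a simplification of the filter-based estimation in the thesis):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest approach between two observation rays.

    p1, p2: 3-D camera positions at the two observation times
    d1, d2: unit bearing vectors toward the landmark
    """
    # Solve min over (t1, t2) of |(p1 + t1*d1) - (p2 + t2*d2)|^2
    A = np.column_stack((d1, -np.asarray(d2)))
    b = np.asarray(p2) - np.asarray(p1)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    q1 = np.asarray(p1) + t[0] * np.asarray(d1)
    q2 = np.asarray(p2) + t[1] * np.asarray(d2)
    return 0.5 * (q1 + q2)
```

Once a landmark's position is fixed this way, later observations of it can be inverted to update the vehicle's own pose estimate when GPS drops out.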
Showing items related by title, author, creator and subject.
Wu, Allen D.; Johnson, Eric N.; Kaess, Michael; Dellaert, Frank; Chowdhary, Girish (Georgia Institute of Technology; American Institute of Aeronautics and Astronautics, 2013-04)A vision-aided inertial navigation system that enables autonomous flight of an aerial vehicle in GPS-denied environments is presented. Particularly, feature point information from a monocular vision sensor is used to ...
De Wagter, Christophe; Proctor, Alison A.; Johnson, Eric N. (Georgia Institute of Technology, 2003-10)Building aircraft with navigation and control systems that can complete flight tasks is complex, and often involves integrating information from multiple sensors to estimate the state of the vehicle. This paper describes ...
Wu, Allen D.; Johnson, Eric N.; Proctor, Alison A. (Georgia Institute of Technology; American Institute of Aeronautics and Astronautics, Inc., 2005-09)Many onboard navigation systems use the Global Positioning System to bound the errors that result from integrating inertial sensors over time. Global Positioning System information, however, is not always accessible since ...