Simple item record

dc.contributor.author: Sattigeri, Ramachandra Jayant [en_US]
dc.date.accessioned: 2007-08-16T17:57:20Z
dc.date.available: 2007-08-16T17:57:20Z
dc.date.issued: 2007-05-17 [en_US]
dc.identifier.uri: http://hdl.handle.net/1853/16272
dc.description.abstract: The role of vision as an additional sensing mechanism has received considerable attention in recent years in the context of autonomous flight applications. Modern Unmanned Aerial Vehicles (UAVs) are equipped with vision sensors because they are lightweight and low-cost and provide a rich variety of information about the environment in which the UAVs navigate. Vision-based autonomous flight is a difficult and challenging problem because it requires bringing together concepts from image processing and computer vision, target tracking and state estimation, and flight guidance and control. This thesis focuses on the adaptive state estimation, guidance, and control problems involved in vision-based formation flight. Specifically, the thesis presents a composite adaptation approach to the partial state estimation of a class of nonlinear systems with unmodeled dynamics. In this approach, a linear time-varying Kalman filter serves as the nominal state estimator and is augmented by the output of an adaptive neural network (NN) trained with two error signals. The benefit of the proposed approach is faster and more accurate adaptation to modeling errors than a conventional approach. The thesis also presents two approaches to the design of adaptive guidance and control (G&C) laws for line-of-sight formation flight. In the first approach, the guidance and autopilot systems are designed separately and then combined by assuming time-scale separation. The second approach integrates the guidance and autopilot design process. The G&C laws developed with both approaches are adaptive to unmodeled leader-aircraft acceleration and to own-aircraft aerodynamic uncertainties. The thesis also presents theoretical justification, based on Lyapunov-like stability analysis, for integrating the adaptive state estimation and adaptive G&C designs. All the developed designs are validated in nonlinear, 6DOF fixed-wing aircraft simulations. Finally, the thesis presents a decentralized coordination strategy for vision-based multiple-aircraft formation control, in which each aircraft regulates its range to up to two nearest neighboring aircraft while simultaneously tracking nominal desired trajectories common to all aircraft and avoiding static obstacles. [en_US]
dc.publisher: Georgia Institute of Technology [en_US]
dc.subject: Neural networks [en_US]
dc.subject: Target tracking [en_US]
dc.subject: Adaptive estimation [en_US]
dc.subject: Adaptive Kalman filters [en_US]
dc.subject: Integrated guidance and control [en_US]
dc.subject: Adaptive guidance and control [en_US]
dc.subject: Adaptive control [en_US]
dc.subject: Unmanned aerial vehicles [en_US]
dc.subject: Multiple-vehicle formation [en_US]
dc.subject.lcsh: Guidance systems (Flight) [en_US]
dc.subject.lcsh: Robot vision [en_US]
dc.subject.lcsh: Adaptive control systems [en_US]
dc.subject.lcsh: Drone aircraft [en_US]
dc.subject.lcsh: Flight control [en_US]
dc.title: Adaptive Estimation and Control with Application to Vision-based Autonomous Formation Flight [en_US]
dc.type: Dissertation [en_US]
dc.description.degree: Ph.D. [en_US]
dc.contributor.department: Aerospace Engineering [en_US]
dc.description.advisor: Committee Chair: Calise, Anthony; Committee Member: Johnson, Eric; Committee Member: Kim, Byoung Soo; Committee Member: Prasad, J.V.R.; Committee Member: Tannenbaum, Allen [en_US]
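
The abstract above describes a composite adaptation scheme in which a linear time-varying Kalman filter, acting as the nominal state estimator, is augmented by an adaptive neural network trained with two error signals. The following Python sketch illustrates that general idea only; it is not the thesis's actual formulation. The system matrices, radial-basis features, training-signal definitions, and gains are all hypothetical placeholders chosen so the example runs end to end.

import numpy as np

# Illustrative sketch only: a discrete-time linear time-varying Kalman filter
# whose process model is augmented by the output of a small adaptive network.
# The network weights W are adjusted with a composite law that blends two
# error signals: the filter's measurement residual and a low-pass-filtered
# prediction error. All matrices, gains, and basis functions are assumed
# placeholders, not the formulation used in the thesis.

rng = np.random.default_rng(0)

n, m, k = 2, 1, 8          # state dim, measurement dim, number of basis functions
dt = 0.02
Q = 1e-4 * np.eye(n)       # process noise covariance (assumed)
R = 1e-2 * np.eye(m)       # measurement noise covariance (assumed)

def A(t):                  # time-varying state transition (placeholder dynamics)
    return np.array([[1.0, dt], [-dt * (1.0 + 0.1 * np.sin(t)), 1.0]])

H = np.array([[1.0, 0.0]]) # measurement matrix: only position is measured

def phi(x):                # radial-basis features for the adaptive element
    centers = np.linspace(-2.0, 2.0, k)
    return np.exp(-(x[0] - centers) ** 2)

x_hat = np.zeros(n)        # state estimate
P = np.eye(n)              # estimate covariance
W = np.zeros((n, k))       # adaptive network weights
e_f = np.zeros(n)          # low-pass-filtered prediction error (second training signal)

gamma_r, gamma_p, lam = 0.5, 0.5, 0.95   # learning rates and filter pole (assumed)

def step(t, x_hat, P, W, e_f, z):
    """One filter cycle: predict with the NN-augmented model, update, adapt W."""
    At = A(t)
    feat = phi(x_hat)
    # Prediction augmented by the adaptive element's estimate of unmodeled dynamics
    x_pred = At @ x_hat + W @ feat
    P_pred = At @ P @ At.T + Q

    # Standard Kalman measurement update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    resid = z - H @ x_pred                     # first training signal (residual)
    x_new = x_pred + K @ resid
    P_new = (np.eye(n) - K @ H) @ P_pred

    # Second training signal: filtered difference between corrected and predicted state
    e_f_new = lam * e_f + (1.0 - lam) * (x_new - x_pred)

    # Composite weight update driven by both error signals
    W_new = W + (gamma_r * (K @ resid)[:, None] + gamma_p * e_f_new[:, None]) @ feat[None, :]
    return x_new, P_new, W_new, e_f_new

# Run against synthetic measurements of a lightly perturbed oscillator
x_true = np.array([1.0, 0.0])
for i in range(500):
    t = i * dt
    x_true = A(t) @ x_true + np.array([0.0, 0.02 * np.tanh(x_true[0])])  # unmodeled term
    z = H @ x_true + 0.1 * rng.standard_normal(m)
    x_hat, P, W, e_f = step(t, x_hat, P, W, e_f, z)

print("final estimation error:", np.linalg.norm(x_true - x_hat))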

