Show simple item record

dc.contributor.advisor: Rehg, James M.
dc.contributor.author: Fathi, Alireza
dc.date.accessioned: 2013-08-29T14:12:06Z
dc.date.available: 2013-08-29T14:12:06Z
dc.date.created: 2013-08
dc.date.issued: 2013-06-13
dc.date.submitted: August 2013
dc.identifier.uri: http://hdl.handle.net/1853/48738
dc.description.abstract: Recent advances in camera technology have made it possible to build a comfortable, wearable system which can capture the scene in front of the user throughout the day. Products based on this technology, such as GoPro and Google Glass, have generated substantial interest. In this thesis, I present my work on egocentric vision, which leverages wearable camera technology and provides a new line of attack on classical computer vision problems such as object categorization and activity recognition. The dominant paradigm for object and activity recognition over the last decade has been based on using the web. In this paradigm, in order to learn a model for an object category like coffee jar, various images of that object type are fetched from the web (e.g. through Google image search), features are extracted and then classifiers are learned. This paradigm has led to great advances in the field and has produced state-of-the-art results for object recognition. However, it has two main shortcomings: a) objects on the web appear in isolation and they miss the context of daily usage; and b) web data does not represent what we see every day. In this thesis, I demonstrate that egocentric vision can address these limitations as an alternative paradigm. I will demonstrate that contextual cues and the actions of a user can be exploited in an egocentric vision system to learn models of objects under very weak supervision. In addition, I will show that measurements of a subject's gaze during object manipulation tasks can provide novel feature representations to support activity recognition. Moving beyond surface-level categorization, I will showcase a method for automatically discovering object state changes during actions, and an approach to building descriptive models of social interactions between groups of individuals. These new capabilities for egocentric video analysis will enable new applications in life logging, elder care, human-robot interaction, developmental screening, augmented reality and social media.
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Gaze
dc.subject: Segmentation
dc.subject: Egocentric vision
dc.subject: Activity recognition
dc.subject: Object recognition
dc.subject: Attentional cues
dc.subject: First-person vision
dc.subject: Descriptive models
dc.subject: Weakly supervised learning
dc.subject: Social interactions
dc.subject: Human object interaction
dc.subject: Wearable camera
dc.subject.lcsh: Computer vision
dc.subject.lcsh: Wearable video devices
dc.title: Learning descriptive models of objects and activities from egocentric video
dc.type: Dissertation
dc.description.degree: Ph.D.
dc.contributor.department: Computer Science
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Bobick, Aaron
dc.contributor.committeeMember: Abowd, Gregory D.
dc.contributor.committeeMember: Starner, Thad
dc.contributor.committeeMember: Hebert, Martial
dc.contributor.committeeMember: Torralba, Antonio
dc.date.updated: 2013-08-29T14:12:07Z

