Show simple item record

dc.contributor.author	Fathi, Alireza
dc.contributor.author	Li, Yin
dc.contributor.author	Rehg, James M.
dc.date.accessioned	2013-03-06T19:30:35Z
dc.date.available	2013-03-06T19:30:35Z
dc.date.issued	2012-10
dc.identifier.citation	Fathi, A., Li, Y., & Rehg, J. M. (2012). "Learning to Recognize Daily Actions Using Gaze." In A. Fitzgibbon et al. (Eds.), Computer Vision - ECCV 2012: 12th European Conference on Computer Vision, 7-13 October 2012, Proceedings, Part I. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 7572, pp. 314-327.	en_US
dc.identifier.isbn	978-3-642-33717-8 (Print)
dc.identifier.isbn	978-3-642-33718-5 (Online)
dc.identifier.issn	0302-9743
dc.identifier.uri	http://hdl.handle.net/1853/46311
dc.description	©2012 Springer-Verlag Berlin Heidelberg. The original publication is available at www.springerlink.com	en_US
dc.description	DOI: 10.1007/978-3-642-33718-5_23
dc.description.abstract	We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze.	en_US
dc.language.iso	en_US	en_US
dc.publisher	Georgia Institute of Technology	en_US
dc.subject	Daily activities	en_US
dc.subject	Eye movement	en_US
dc.subject	Eye tracking	en_US
dc.subject	Gaze measurements	en_US
dc.subject	Hand-eye coordination	en_US
dc.title	Learning to Recognize Daily Actions using Gaze	en_US
dc.type	Article	en_US
dc.type	Proceedings
dc.contributor.corporatename	Georgia Institute of Technology. Center for Robotics and Intelligent Machines	en_US
dc.contributor.corporatename	Georgia Institute of Technology. College of Computing	en_US
dc.identifier.doi	10.1007/978-3-642-33718-5_23
dc.embargo.terms	null	en_US

