
dc.contributor.author | Ciptadi, Arridhana
dc.contributor.author | Hermans, Tucker
dc.contributor.author | Rehg, James M.
dc.date.accessioned | 2014-04-11T17:42:06Z
dc.date.available | 2014-04-11T17:42:06Z
dc.date.issued | 2013-09
dc.identifier.citation | Ciptadi, A.; Hermans, T.; & Rehg, J. M. (2013). "An In Depth View of Saliency". In T. Burghardt, D. Damen, W. Mayol-Cuevas, and M. Mirmehdi (Eds.), Proceedings of the British Machine Vision Conference (BMVC 2013), 9-13 September 2013, pp. 112.1-112.11. BMVA Press. | en_US
dc.identifier.uri | http://hdl.handle.net/1853/51587
dc.description | Presented at the 24th British Machine Vision Conference (BMVC 2013), 9-13 September 2013, Bristol, UK. | en_US
dc.description | © 2013. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.
dc.description | DOI: http://dx.doi.org/10.5244/C.27.112
dc.description.abstract | Visual saliency is a computational process that identifies important locations and structure in the visual field. Most current methods for saliency rely on cues such as color and texture while ignoring depth information, which is known to be an important saliency cue in the human cognitive system. We propose a novel computational model of visual saliency that incorporates depth information. We compare our approach to several state-of-the-art visual saliency methods, and we introduce a method for saliency-based segmentation of generic objects. We demonstrate that by explicitly constructing 3D layout and shape features from depth measurements, we can obtain better performance than methods that treat the depth map as just another image channel. Our method requires no learning and can operate on scenes for which the system has no previous knowledge. We conduct object segmentation experiments on a new dataset of registered RGB-D images captured on a mobile-manipulator robot. | en_US
dc.language.iso | en_US | en_US
dc.publisher | Georgia Institute of Technology | en_US
dc.subject | Color | en_US
dc.subject | Depth information | en_US
dc.subject | Human cognitive system | en_US
dc.subject | Mobile-manipulator robot | en_US
dc.subject | Object segmentation | en_US
dc.subject | Robot manipulation | en_US
dc.subject | Saliency map | en_US
dc.subject | Shape features | en_US
dc.subject | Texture | en_US
dc.subject | 3D layout | en_US
dc.subject | Visual saliency | en_US
dc.title | An In Depth View of Saliency | en_US
dc.type | Proceedings | en_US
dc.contributor.corporatename | Georgia Institute of Technology. College of Computing | en_US
dc.contributor.corporatename | Georgia Institute of Technology. School of Interactive Computing | en_US
dc.contributor.corporatename | Georgia Institute of Technology. Center for Robotics and Intelligent Machines | en_US
dc.identifier.doi | 10.5244/C.27.112
dc.embargo.terms | null | en_US
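
Illustrative note: the abstract contrasts treating a depth map as "just another image channel" with explicitly constructing 3D layout and shape features from depth measurements. The code below is a minimal, hypothetical Python/NumPy sketch of that distinction only; it is not the authors' implementation, and the camera intrinsics (fx, fy, cx, cy) and the particular geometric cues chosen here (surface normals and camera-frame point height) are illustrative assumptions.

import numpy as np

def backproject(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    # Back-project an HxW depth map (meters) into an HxWx3 point cloud
    # using assumed pinhole intrinsics (typical RGB-D sensor values).
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])

def depth_as_channel(rgb, depth):
    # Baseline treatment: normalize depth and append it to RGB as a
    # fourth image channel, ignoring scene geometry.
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    return np.dstack([rgb, d])

def geometric_features(depth):
    # Explicit 3D shape/layout cues: approximate surface normals from
    # local depth gradients, plus each point's height (camera-frame y).
    points = backproject(depth)
    dzdu = np.gradient(points[..., 2], axis=1)
    dzdv = np.gradient(points[..., 2], axis=0)
    normals = np.dstack([-dzdu, -dzdv, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True) + 1e-8
    height = points[..., 1:2]
    return np.dstack([normals, height])

if __name__ == "__main__":
    rgb = np.random.rand(480, 640, 3)          # stand-in RGB image
    depth = 1.0 + np.random.rand(480, 640)     # stand-in depth map (meters)
    print(depth_as_channel(rgb, depth).shape)  # (480, 640, 4)
    print(geometric_features(depth).shape)     # (480, 640, 4)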

