dc.contributor.author | Ciptadi, Arridhana | |
dc.contributor.author | Hermans, Tucker | |
dc.contributor.author | Rehg, James M. | |
dc.date.accessioned | 2014-04-11T17:42:06Z | |
dc.date.available | 2014-04-11T17:42:06Z | |
dc.date.issued | 2013-09 | |
dc.identifier.citation | Ciptadi, A., Hermans, T., & Rehg, J. M. (2013). "An In Depth View of Saliency". In T. Burghardt, D. Damen, W. Mayol-Cuevas, & M. Mirmehdi (Eds.), Proceedings of the British Machine Vision Conference (BMVC 2013), 9-13 September 2013, pp. 112.1-112.11. BMVA Press. | en_US |
dc.identifier.uri | http://hdl.handle.net/1853/51587 | |
dc.description | Presented at the 24th British Machine Vision Conference (BMVC 2013), 9-13 September 2013, Bristol, UK. | en_US |
dc.description | © 2013. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. | |
dc.description | DOI: http://dx.doi.org/10.5244/C.27.112 | |
dc.description.abstract | Visual saliency is a computational process that identifies important locations and
structure in the visual field. Most current methods for saliency rely on cues such as
color and texture while ignoring depth information, which is known to be an important
saliency cue in the human cognitive system. We propose a novel computational model of
visual saliency which incorporates depth information. We compare our approach to several state-of-the-art visual saliency methods, and we introduce a method for saliency-based
segmentation of generic objects. We demonstrate that by explicitly constructing 3D layout and shape features from depth measurements, we can obtain better performance than
methods which treat the depth map as just another image channel. Our method requires
no learning and can operate on scenes for which the system has no previous knowledge.
We conduct object segmentation experiments on a new dataset of registered RGB-D images captured on a mobile-manipulator robot. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Georgia Institute of Technology | en_US |
dc.subject | Color | en_US |
dc.subject | Depth information | en_US |
dc.subject | Human cognitive system | en_US |
dc.subject | Mobile-manipulator robot | en_US |
dc.subject | Object segmentation | en_US |
dc.subject | Robot manipulation | en_US |
dc.subject | Saliency map | en_US |
dc.subject | Shape features | en_US |
dc.subject | Texture | en_US |
dc.subject | 3D layout | en_US |
dc.subject | Visual saliency | en_US |
dc.title | An In Depth View of Saliency | en_US |
dc.type | Proceedings | en_US |
dc.contributor.corporatename | Georgia Institute of Technology. College of Computing | en_US |
dc.contributor.corporatename | Georgia Institute of Technology. School of Interactive Computing | en_US |
dc.contributor.corporatename | Georgia Institute of Technology. Center for Robotics and Intelligent Machines | en_US |
dc.identifier.doi | 10.5244/C.27.112 | |
dc.embargo.terms | null | en_US |