Spotty: Image sonification based on spot-mapping and tonal volume
Abstract
A basic question in image sonification is how to segment the image. A cognitive model of visual processing could, to a large degree, define possible ways of mapping the image to sound. For instance, scanpath theory suggests that a top-down internal cognitive model of what we see drives the sequences of rapid eye movements and fixations, or glances, that travel so efficiently over a scene or picture of interest. Scanpath theory may be applied to the sonification of visual images. However, it is necessary to determine what matters more at each stage of the image recognition process: the scan trajectory itself, or the optical characteristics at its fixation points? That is to say, which is dominant: the scanpath or the spot of glance? I hope that answering these questions will make it possible to develop new tools for VR applications, as well as to continue designing a visualization system for blind people based on blind-eye tracking.
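To make the spot-mapping idea concrete, the following is a minimal sketch under assumed conventions: the mean brightness of the glance spot sets a tone's volume, and the spot's vertical position sets its pitch. The function names, constants, and the brightness-to-volume rule are illustrative assumptions, not the mapping actually used by Spotty.

```python
# Minimal sketch of spot-based sonification (hypothetical mapping, not the
# paper's actual method): at each fixation, a small image patch ("spot")
# is reduced to a mean brightness, which controls the volume of a short
# tone; the spot's vertical position controls the pitch.
import numpy as np

SAMPLE_RATE = 44100          # audio samples per second
TONE_SECONDS = 0.25          # duration of the tone emitted per fixation
SPOT_RADIUS = 8              # half-size of the square spot, in pixels

def spot_to_tone(image, fixation, f_low=220.0, f_high=880.0):
    """Map one fixation spot of a grayscale image (values in [0, 1])
    to a mono audio snippet: brightness -> amplitude, row -> frequency."""
    h, w = image.shape
    y, x = fixation
    # Clip the spot to the image borders and take its mean brightness.
    patch = image[max(0, y - SPOT_RADIUS):min(h, y + SPOT_RADIUS + 1),
                  max(0, x - SPOT_RADIUS):min(w, x + SPOT_RADIUS + 1)]
    amplitude = float(patch.mean())                  # tonal volume
    freq = f_low + (f_high - f_low) * (1.0 - y / h)  # higher rows -> higher pitch
    t = np.arange(int(SAMPLE_RATE * TONE_SECONDS)) / SAMPLE_RATE
    return amplitude * np.sin(2.0 * np.pi * freq * t)

def sonify_scanpath(image, scanpath):
    """Concatenate the tones along a scanpath (a list of (row, col) fixations)."""
    return np.concatenate([spot_to_tone(image, fx) for fx in scanpath])

if __name__ == "__main__":
    img = np.random.rand(240, 320)             # stand-in grayscale image
    path = [(60, 80), (120, 160), (180, 240)]  # hypothetical scanpath
    audio = sonify_scanpath(img, path)
    print(audio.shape, audio.min(), audio.max())
```

In this sketch the scanpath only orders the tones in time, while all acoustic parameters come from the spot itself; shifting more of the mapping onto the trajectory (e.g. encoding saccade direction or length) would correspond to the opposite answer to the scanpath-versus-spot question raised above.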