Sonification of 3D Scenes Using Personalized Spatial Audio to Aid Visually Impaired Persons
The research presented concerns the development of a sonification algorithm for the representation of 3D scenes, intended for use in an electronic travel aid (ETA) for visually impaired persons. The proposed sonar-like algorithm utilizes segmented 3D scene images, personalized spatial audio, and musical sound patterns. The use of segmented and parametrically described 3D scenes made it possible to overcome the large sensory mismatch between visual and auditory perception. The utilization of individually measured head-related transfer functions (HRTFs) enabled the creation of convincing virtual sound source illusions. The selection of sounds was based on surveys conducted with blind volunteers. A number of sonification schemes, dubbed sound codes, were proposed, each assigning sound parameters to the parameters of segmented objects. The sonification algorithm was first tested in virtual reality using software simulation, along with studies of virtual sound source localization accuracy. Afterwards, trials in controlled real environments were conducted using a portable ETA prototype, with the participation of both blind and sighted volunteers. The successful trials demonstrated that the proposed sonification algorithm can be quickly learned and efficiently used to aid spatial orientation and obstacle avoidance.
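The abstract does not specify the sound codes themselves; as a minimal illustrative sketch only, a mapping from segmented-object parameters to sound parameters could take a form like the following, with hypothetical parameter names and ranges (the actual codes in the work were selected via surveys with blind volunteers):

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    distance_m: float   # distance to the obstacle, metres
    azimuth_deg: float  # horizontal angle, negative = left
    height_m: float     # object height, metres

def sound_code(obj: SceneObject) -> dict:
    """Hypothetical sound code: maps segmented-object parameters
    to sound parameters for spatialized playback."""
    # Closer obstacles pulse faster, in a sonar-like manner.
    pulse_rate_hz = max(1.0, 10.0 / max(obj.distance_m, 0.5))
    # Taller objects are assigned a higher pitch (capped at two octaves).
    pitch_hz = 220.0 * 2 ** min(obj.height_m / 2.0, 2.0)
    # The azimuth selects which individually measured HRTF
    # renders the virtual sound source direction.
    return {
        "pulse_rate_hz": round(pulse_rate_hz, 2),
        "pitch_hz": round(pitch_hz, 1),
        "hrtf_azimuth_deg": obj.azimuth_deg,
    }

# Example: a 1 m tall obstacle, 2 m away, 30 degrees to the left.
params = sound_code(SceneObject(distance_m=2.0, azimuth_deg=-30.0, height_m=1.0))
print(params)
```

This is only a sketch of the general idea of a sound code; the published algorithm's actual parameter assignments, sound palettes, and ranges are not reproduced here.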