Localization of Virtual Sound Created Using Individualized and Non-Individualized HRTF for Direct and Reflected Sound
Accurate sound localization is essential for virtual auditory display (VAD) systems. These systems, especially those based on the Head-Related Transfer Function (HRTF), often suffer from virtual sound images being perceived at locations different from those intended. Given that reflected sound enhances the realism of a virtual space, localization accuracy in a VAD system might be improved by presenting not only the direct sound but also reflections. We therefore investigated the effect of a single reflected sound on the accuracy of azimuthal localization of a virtual sound image. Subjective tests revealed that a reflection created with the listener's own HRTF (individualized) is more effective for sound localization than one created with someone else's HRTF (non-individualized). However, performance in both cases was only comparable to that obtained when the direct sound alone was presented.
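As a rough illustration of the rendering scheme the abstract describes, a binaural signal combining a direct path and one reflected path can be synthesized by convolving the source with an HRIR (the time-domain HRTF) for each arrival direction, then delaying and attenuating the reflection. This is only a minimal sketch under assumed parameters (HRIR arrays, delay in samples, reflection gain), not the authors' actual experimental setup:

```python
import numpy as np

def render_direct_plus_reflection(source, hrir_direct, hrir_reflect,
                                  reflect_delay, reflect_gain=0.5):
    """Binaural rendering: direct sound plus a single reflection.

    source:        mono source signal, shape (N,)
    hrir_direct:   stereo HRIR for the direct-sound direction, shape (M, 2)
    hrir_reflect:  stereo HRIR for the reflection's arrival direction, shape (M, 2)
    reflect_delay: extra propagation delay of the reflection, in samples (assumed)
    reflect_gain:  attenuation of the reflection relative to the direct sound (assumed)
    """
    n_out = len(source) + hrir_direct.shape[0] - 1 + reflect_delay
    out = np.zeros((n_out, 2))
    for ch in range(2):  # left and right ears
        # Direct path: source filtered with the direct-direction HRIR.
        direct = np.convolve(source, hrir_direct[:, ch])
        out[:len(direct), ch] += direct
        # Reflected path: filtered with the reflection-direction HRIR,
        # then attenuated and delayed by the extra path length.
        reflect = reflect_gain * np.convolve(source, hrir_reflect[:, ch])
        out[reflect_delay:reflect_delay + len(reflect), ch] += reflect
    return out
```

Substituting the listener's own measured HRIRs versus another person's HRIRs for `hrir_direct` and `hrir_reflect` corresponds to the individualized and non-individualized conditions compared in the study.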