Show simple item record

dc.contributor.author: Zotkin, Dmitry N.
dc.contributor.author: Duraiswami, R.
dc.contributor.author: Davis, L. S.
dc.date.accessioned: 2014-03-17T19:57:23Z
dc.date.available: 2014-03-17T19:57:23Z
dc.date.issued: 2002-07
dc.identifier.uri: http://hdl.handle.net/1853/51348
dc.description: Presented at the 8th International Conference on Auditory Display (ICAD), Kyoto, Japan, July 2-5, 2002.
dc.description.abstract: High-quality virtual audio scene rendering is a must for emerging virtual and augmented reality applications, perceptual user interfaces, and sonification of data. Personalization of the HRTF is necessary in applications where perceptual realism and correct elevation perception are critical. We describe algorithms for the creation of virtual auditory spaces by rendering cues that arise from anatomical scattering, environmental scattering, and dynamical effects. We use a novel way of personalizing the head-related transfer functions (HRTFs) from a database, based on anatomical measurements. Details of algorithms for HRTF interpolation, room impulse response creation, HRTF selection from a database, and audio scene presentation are presented. Our system runs in real time on an office PC without specialized DSP hardware.
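The abstract's core rendering step, convolving a mono source with a pair of head-related impulse responses (HRIRs) to produce a binaural signal, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the HRIRs here are synthetic placeholders that only mimic interaural time and level differences, whereas the paper selects measured HRTFs from a database.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source with a left/right HRIR pair to place it
    at the HRIRs' measured direction (anechoic case, no room response)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Placeholder HRIRs: a delayed, attenuated impulse per ear, crudely
# simulating a source to the listener's left (assumption, not measured data).
fs = 44100
hrir_l = np.zeros(128)
hrir_l[10] = 1.0                     # nearer ear: arrives earlier, louder
hrir_r = np.zeros(128)
hrir_r[40] = 0.6                     # farther ear: arrives later, quieter

t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 440.0 * t)         # 1 s, 440 Hz test tone
out = render_binaural(source, hrir_l, hrir_r)  # shape (2, fs + 127)
```

In a full system such as the one described, these per-direction convolutions would be combined with interpolated HRTFs, a room impulse response, and head-tracking updates.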
dc.publisher: Georgia Institute of Technology
dc.subject: Auditory display
dc.subject: Customizable
dc.title: Customizable auditory displays
dc.type: Proceedings
dc.contributor.corporatename: University of Maryland
dc.publisher.original: International Community on Auditory Display
dc.embargo.terms: null

