dc.contributor.author         Martens, William L.
dc.contributor.author         Sakamoto, Shuichi
dc.contributor.author         Suzuki, Yoiti
dc.date.accessioned           2013-12-19T16:42:27Z
dc.date.available             2013-12-19T16:42:27Z
dc.date.issued                2008-06
dc.identifier.citation        Proceedings of the 14th International Conference on Auditory Display (ICAD2008), Paris, France, 24-27 June 2008. International Community for Auditory Display, 2008.   en_US
dc.identifier.uri             http://hdl.handle.net/1853/49866
dc.description                Presented at the 14th International Conference on Auditory Display (ICAD2008), June 24-27, 2008, in Paris, France.   en_US
dc.description.abstract       When moving sound sources are displayed for a listener in a manner consistent with that listener's own motion through an environment populated by stationary sound sources, the listener may perceive that the sources are moving relative to a fixed listening position, rather than experiencing self motion (i.e., a change in listening position). The likelihood that auditory cues will produce such perceived self motion (also known as auditory-induced vection) can be greatly increased by coordinated passive movement of the listener's whole body, which can be achieved when listeners are positioned on a multi-axis motion platform controlled in synchrony with a spatial auditory display. In this study, the temporal synchrony between passive whole-body motion and auditory spatial information was investigated via a multimodal time-order judgment task. For the spatial trajectories taken by the sound sources presented here, the observed interaction between passive whole-body motion and sound source motion clearly depended upon the peak velocity reached by the moving sound sources. The results suggest that sensory integration of auditory motion cues with whole-body movement cues can occur over an increasing range of intermodal delays as virtual sound sources are moved more slowly through the space near the listener's position. Furthermore, for the coordinated motion presented in the current study, asynchrony was relatively easy for listeners to tolerate when the peak in whole-body motion occurred earlier in time than the peak in virtual sound source velocity, but quickly became intolerable when the peak in whole-body motion occurred after the sound sources reached their peak velocities.   en_US
dc.language.iso               en_US   en_US
dc.publisher                  International Community for Auditory Display   en_US
dc.subject                    Auditory display   en_US
dc.subject                    Virtual acoustics   en_US
dc.subject                    Multimodal interaction   en_US
dc.title                      Perceived Self Motion in Virtual Acoustic Space Facilitated by Passive Whole-Body Movement   en_US
dc.type                       Proceedings   en_US
dc.contributor.corporatename  McGill University. Schulich School of Music
dc.contributor.corporatename  Center for Interdisciplinary Research in Music Media and Technology
dc.contributor.corporatename  Tohoku University. Research Institute of Electrical Communication and Graduate School of Information Sciences
dc.embargo.terms              null   en_US

