    • Abstract sound objects to expand the vocabulary of sound design for visual and theatrical media 

      Somers, Eric (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      In this design paper the author explores the kinds of sound objects typically used in designing sound for theatre and media, then proposes to expand the "vocabulary" of traditional sound design through the use ...
    • Active sensory tuning for immersive spatialized audio 

      Runkle, Paul; Yendiki, Anastasia; Wakefield, Gregory H. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      Unlike their visual counterparts, immersive spatialized audio displays are highly sensitive to individual differences in the signal processing parameters associated with source placement in azimuth and elevation. We introduce ...
    • An auditory display system for aiding interjoint coordination 

      Ghez, Claude; Dubois, R. Luke; Rikakis, Thanassis; Cook, Perry R. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      Patients who lack proprioception are unable to build and maintain "internal models" of their limbs and monitor their limb movements, because these patients do not receive the appropriate information from muscles and ...
    • Auditory-visual cross-modal perception 

      Storms, Russell L. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      The quality of realism in virtual environments is typically considered to be a function of visual and audio fidelity, each treated independently of the other. However, the virtual environment participant, being human, is multi-modal ...
    • A case study of auditory navigation in virtual acoustic environments 

      Lokki, Tapio; Grohn, Matti; Savioja, Lauri; Takala, Tapio (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      We report results of an auditory navigation experiment. In auditory navigation, sound is employed as a navigational aid in a virtual environment. In our experiment, the test task was to find a sound source in a dynamic ...
    • Designing non-speech sounds to support navigation in mobile phone menus 

      Leplatre, Gregory; Brewster, Stephen A. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      This paper describes a framework for integrating non-speech audio into hierarchical menu structures where the visual feedback is limited. In the first part of this paper, emphasis is put on how to extract sound design ...
    • The effect of earcons on reaction times and error-rates in a dual-task vs. a single-task experiment 

      Lemmens, Paul M. C.; Bussemakers, Myra P.; de Haan, Abraham (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      An experiment with two picture categorization tasks with auditory distracters containing redundant information was carried out to investigate the effects that distracters, in this case earcons, have on categorization. In the ...
    • Experiments in computer-assisted annotation of audio 

      Tzanetakis, George; Cook, Perry R. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      Advances in digital storage technology and the wide use of digital audio compression standards like MPEG have made possible the creation of large archives of audio material. In order to work efficiently with these large ...
    • An extensible toolkit for creating virtual sonic environments 

      Fouad, Hesham; Ballas, James A.; Brock, Derek (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      The Virtual Audio Server (VAS) is a toolkit designed for exploring problems in the creation of realistic Virtual Sonic Environments (VSE). The architecture of VAS ...
    • Guided by voices: An audio augmented reality system 

      Lyons, Kent; Gandy, Maribeth; Starner, Thad (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      This paper presents an application of a low-cost, lightweight audio-only augmented reality infrastructure. The system uses a simple wearable computer and an RF-based location system to play digital sounds corresponding to ...
    • Haptic music: Non-musicians collaboratively creating music 

      Gandy, Maribeth; Quay, Andrew (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      The Haptic Music system allows up to three people to collaboratively create music in a jazz style. The users can have varying musical backgrounds, and even a novice can use his/her creativity to create a novel composition. ...
    • Hear & there: An augmented reality system of linked audio 

      Rozier, Joseph; Karahalios, Karrie; Donath, Judith (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      This paper presents an augmented reality system using audio as the primary interface. Using the authoring component of this system, individuals can leave "audio imprints," consisting of several layers of music, sound ...
    • Learning reverberation: Considerations for spatial auditory displays 

      Shinn-Cunningham, Barbara (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      Reverberation has both beneficial and detrimental effects on auditory localization. This paper reviews evidence that listeners adapt to the reverberation in a room. Results show that reverberation degrades perception of ...
    • A model for interaction in exploratory sonification displays 

      Saue, Sigurd (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      This paper presents a general model for sonification of large spatial data sets (e.g. seismic data, medical data) based on ideas from ecological acoustics. The model incorporates not only what we hear (the sounds), but ...
    • Music monitor: Dynamic data display 

      Tran, Quan T.; Mynatt, Elizabeth D. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      In this demo, we present an interface prototype of Music Monitor, an application targeted for home use that dynamically conducts music in real-time to reflect parallel activities in disparate locations (e.g., preparing ...
    • Musical phrase-structured audio communication 

      Hankinson, John C. K.; Edwards, Alistair D. N. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      It has previously been shown that musical grammars can impose structural constraints upon the design of earcons, thereby providing a grammatical basis to earcon combinations. In this paper, more complex structural combinations ...
    • The NAVE: Design and implementation of a 3D audio system for a low cost spatially immersive display 

      Wilson, Jeff; Pair, Jarrell (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      The NAVE is a low cost spatially immersive display system developed by the Georgia Tech Virtual Environments Group. The NAVE audio environment uses two independent speaker systems driven by a dedicated audio PC with two ...
    • Principal curve sonification 

      Hermann, T.; Meinicke, P.; Ritter, H. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      This paper describes a new approach to render sonifications for high-dimensional data, allowing the user to perceive the "main" structure of the data distribution. This is achieved by computing the principal curve of the ...
    • Psychophysical scaling of sonification mappings 

      Walker, Bruce N.; Kramer, Gregory; Lane, David M. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      We determined preferred data-to-display mappings by asking experiment participants directly and then examined the psychophysical scaling functions relating perceived data values to underlying acoustic parameters. Presently, ...
    • A software-based system for interactive spatial sound synthesis 

      Wenzel, Elizabeth M.; Miller, Joel D.; Abel, Jonathan S. (Georgia Institute of Technology; International Community for Auditory Display, 2000-04)
      This paper discusses development issues for a software-based, real-time virtual audio rendering system, Sound Lab (SLAB), designed to work in the personal computer environment using a standard signal-processing library. ...