1994 Proceedings of the International Conference on Auditory Display. The annual ICAD conference is the premier forum for academia and industry to exchange ideas and discuss developments in the field of auditory display.

Recent Submissions

  • Voice annotation of visual representations in computer-mediated collaborative learning 

    Steeples, Christine (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    New computer-based communications technologies, such as electronic conferencing, electronic mail, communal hypertext, and communal hypermedia databases, make it possible for people to collaborate in their learning, even when ...
  • Using virtual environment technology to present a digital sound library 

    Whitehead, John F (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Digital sound libraries of today can be difficult to navigate. For example, sampling keyboards often display only the currently selected sound, and it is not easy to know the adjacent sounds or how to find desired ones. ...
  • Using audio windows to analyze music 

    Cohen, Michael (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Alternative nonimmersive perspectives enable new paradigms of perception, especially in the context of frames-of-reference for musical audition and groupware. "maw," acronymic for multidimensional audio windows, is an ...
  • Using additive sound synthesis to analyze simplicial complexes 

    Axen, Ulrike; Choi, Insook (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    We present a new technique for traversing simplicial complexes and producing sounds from the output of this traversal. The traversal algorithm was invented in order to extract temporal information from static geometric ...
  • The importance of head movements for localizing virtual auditory display objects 

    Wightman, Frederic L; Kistler, Doris J (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    In most of our research we produce virtual sound sources by filtering stimuli with head-related transfer functions (HRTF's) measured from discrete source positions and present the stimuli to listeners via headphones. With ...
  • Task-oriented quantitative testing for synthesized 3-D auditory displays 

    Julig, Louise Frantzen; Kaiwi, Jerry L. (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Current human-machine interfaces in Navy systems that incorporate headphone listening fail to take full advantage of human binaural sensory processing capabilities. These interfaces can be improved by providing the ...
  • The run-time components of Sonnet 

    Jameson, David H (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Sonnet is an audio-enhanced monitoring and debugging system whose aim is to investigate how sound can be used in a program development environment. Running under AIX, it consists of a visual programming language to design ...
  • The 'GUIB' spatial auditory display - generation of an audio-based interface for blind computer users 

    Crispien, Kai; Petrie, Helen (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    In order to provide access to graphical user interfaces (GUI's) for blind computer users, a screen-reader program system is under development which translates the GUI into auditory and/or tactile form. The spatial ...
  • The Varèse system, and satellite-ground control: Using auditory icons and sonification in a complex, supervisory control system 

    Albers, Michael C (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
  • Sound localization in varying virtual acoustic environments 

    Zahorik, Pavel; Kistler, Doris J; Wightman, Frederic L (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Localization performance was examined in three types of headphone-presented virtual acoustic environments: an anechoic virtual environment, an echoic virtual environment, and an echoic virtual environment for which the ...
  • System for psychometric testing of auditory representations of scientific data 

    Smith, Stuart; Levkowitz, Haim; Pickett, Ronald M.; Torpey, Mark (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    This chapter describes a computing environment with which an investigator can interactively design and test an auditory data display. We describe the components of our system and how they are used, and we give some preliminary ...
  • Sound synthesis and composition applying time scaling to observing chaotic systems 

    Choi, Insook (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    This chapter presents a working model for bringing computational models to cognitive process by designing auditory signals for the models. Auditory structure is defined as an observed structure which is meaningful with ...
  • Perception of virtual auditory shapes 

    Hollander, Ari J.; Furness III, Thomas A. (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Virtual environments may help us to understand and eventually extend the domain of auditory perception. The authors performed two experiments which verified the ability of subjects to recognize geometric shapes and ...
  • On the role of head-related transfer function spectral notches in the judgement of sound source elevation 

    Macpherson, Ewan A. (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Using a simple model of sound source elevation judgment, an attempt was made to predict two aspects of listeners' localization behavior from measurements of the positions of the primary high-frequency notch in their ...
  • Out to lunch: Further adventures monitoring background activity 

    Cohen, Jonathan (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    The sound of keystrokes and mouse clicks generated by a number of computer users gives coworkers a sense of "group awareness", a feeling that other people are nearby and an impression of how busy they are. "OutToLunch," a ...
  • MagicMikes - multiple aerial probes for sonification of spatial datasets 

    Grohn, Matti; Takala, Tapio (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    We present a method for sonification of data sets located in spatial environments in such a way that both individual point sources and overviews of larger data areas can be observed.
  • Measuring HRTFs in a reflective environment 

    Abel, Jonathan S; Foster, Scott (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Virtual audio displays control the apparent location of sound sources by applying left-ear and right-ear filters designed to mimic the acoustic properties of the human torso, head, and pinnae. These filters, called ...
  • LSL: A specification language for program auralization 

    Mathur, Aditya P.; Boardman, David B.; Khandelwal, Vivek (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    The need for specification of sound patterns to be played during program execution arises in contexts where program auralization is useful. We present a language named LSL (Listen Specification Language) designed for ...
  • Integrating speech and nonspeech sounds in interfaces for blind users 

    Pitt, Ian (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Synthetic speech is widely used by blind people to enable them to receive textual output from computer systems. Used appropriately, speech provides a reasonably high level of access to command-line systems such as DOS and ...
  • GUI admission for VIPs: A sound initiative 

    Poll, L.H.D; Eggen, J.H. (Georgia Institute of Technology; International Community on Auditory Display, 1994-11)
    Until now, mostly character-based applications were used on office computers. These applications could be accessed by visually impaired persons (VIPs) using Braille or synthetic speech. Recently, there has been a shift ...