
    Distance-based speech segregation in near-field virtual audio displays

    View/Open
    BrungartSimpson2001.pdf (143.2 KB)
    Date
    2001-07
    Author
    Brungart, Douglas S
    Simpson, Brian D
    Abstract
    In tasks that require listeners to monitor two or more simultaneous talkers, substantial performance benefits can be achieved by spatially separating the competing speech messages with a virtual audio display. Although the advantages of spatial separation in azimuth are well documented, little is known about the performance benefits that can be achieved when competing speech signals are presented at different distances in the near field. In this experiment, head-related transfer functions (HRTFs) measured with a KEMAR manikin were used to simulate competing sound sources at distances ranging from 12 cm to 1 m along the interaural axis of the listener. One of the sound sources (the target) was a phrase from the Coordinate Response Measure (CRM) speech corpus, and the other sound source (the masker) was either a competing speech phrase from the CRM speech corpus or a speech-shaped noise signal. When speech-shaped noise was used as the masker, the intelligibility of the target phrase increased substantially only when the spatial separation in distance resulted in an improvement in signal-to-noise ratio (SNR) at one of the two ears. When a competing speech phrase was used as the masker, spatial separation in distance resulted in substantial improvements in the intelligibility of the target phrase even when the overall levels of the signals were normalized to eliminate any SNR advantages in the better ear, suggesting that binaural processing plays an important role in the segregation of competing speech messages in the near field. The results have important implications for the design of audio displays with multiple speech communication channels.
    URI
    http://hdl.handle.net/1853/50613
    Collections
    • International Conference on Auditory Display, 2001 [46]
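
    The two manipulations described in the abstract (simulating a source at a near-field position by filtering it with measured HRTFs, and removing the better-ear level advantage before comparing conditions) can be illustrated with a short sketch. The code below is not taken from the paper; it assumes NumPy and SciPy, time-domain head-related impulse responses (HRIRs) for the distances of interest, and placeholder signals standing in for the CRM phrases. All function and variable names are illustrative.

```python
# Minimal sketch (not the authors' code): HRIR-based spatialization and
# better-ear SNR normalization, as described in the abstract.
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal at a virtual position via HRIR convolution.
    Returns a (2, N) array of left- and right-ear signals."""
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)])

def per_ear_snr_db(target, masker):
    """SNR in dB at each ear for equal-length binaural target/masker arrays."""
    p_t = np.mean(target ** 2, axis=1)
    p_m = np.mean(masker ** 2, axis=1)
    return 10.0 * np.log10(p_t / p_m)

def normalize_better_ear(target, masker):
    """Scale the target so the SNR at the better (higher-SNR) ear is 0 dB,
    eliminating any level advantage created by the distance separation."""
    gain_db = -per_ear_snr_db(target, masker).max()
    return target * 10.0 ** (gain_db / 20.0)

# Illustrative use with placeholder data; real HRIRs (e.g. KEMAR measurements
# at 12 cm and 1 m) and CRM phrases would be substituted here.
fs = 44100
rng = np.random.default_rng(0)
target_speech = rng.standard_normal(fs)           # stand-in for a CRM target phrase
masker_signal = rng.standard_normal(fs)           # stand-in for the masker
hrir_near = rng.standard_normal((2, 256)) * 0.1   # stand-in for a near (12 cm) HRIR pair
hrir_far = rng.standard_normal((2, 256)) * 0.05   # stand-in for a far (1 m) HRIR pair

tgt = spatialize(target_speech, *hrir_near)
msk = spatialize(masker_signal, *hrir_far)
tgt_eq = normalize_better_ear(tgt, msk)
mix = tgt_eq + msk                                # stimulus with a better-ear SNR of 0 dB
print("per-ear SNR after normalization (dB):", per_ear_snr_db(tgt_eq, msk))
```

    In the experimental logic of the abstract, this normalization step is what isolates binaural processing: once the better-ear SNR is held at 0 dB, any remaining intelligibility gain from the distance separation cannot be explained by a simple level advantage at either ear.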
