
    • Using psychoacoustical models for information sonification 

      Ferguson, S.; Cabrera, D.; Beilharz, K.; Song, H. J. (Georgia Institute of Technology; International Community on Auditory Display, 2006-06)
      Psychoacoustical models provide algorithmic methods of estimating the perceptual sensation that will be caused by a given sound stimulus. Four primary psychoacoustical models are most often used: 'loudness', 'sharpness', ... (see the loudness sketch following this list)
    • Vocal sonification of pathologic EEG features 

      Hermann, T.; Baier, G.; Stephani, U.; Ritter, H. (Georgia Institute of Technology; International Community on Auditory Display, 2006-06)
      We introduce a novel approach to EEG data sonification for process monitoring and exploratory as well as comparative data analysis. The approach uses an excitatory/articulatory speech model and a particularly selected parameter ...
    • Workplace soundscape mapping: A trial of Macaulay and Crerar's Method 

      McGregor, I.; Crerar, A.; Benyon, D.; Leplatre, G. (Georgia Institute of Technology; International Community on Auditory Display, 2006-06)
      This paper describes a trial of Macaulay and Crerar's method of mapping a workplace soundscape [1] to assess its fitness as a basis for an extended soundscape mapping method. Twelve participants took part within 14 separate ...
    • Xsonify sonification tool for space physics 

      Candey, R. M.; Schertenleib, A. M.; Diaz Merced, W. L. (Georgia Institute of Technology; International Community on Auditory Display, 2006-06)
      xSonify is a concentrated project to extend the space physics data capabilities of the NASA Space Physics Data Facility (SPDF) [1] for use by visually-impaired students and researchers, by developing a sonification data ...
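
As a brief illustration of the kind of algorithmic loudness estimate referenced in the Ferguson et al. entry above: full psychoacoustical loudness models (e.g. Zwicker's or Moore's) are considerably more involved, but a first-order approximation is the standard phon-to-sone relation, under which perceived loudness roughly doubles for every 10-phon increase above 40 phon. The Python sketch below is a minimal, generic illustration and is not code from that paper; the function name and example levels are assumptions made for this example only.

    # Minimal sketch of a first-order 'loudness' estimate using the standard
    # phon-to-sone relation. Not taken from Ferguson et al. (2006); full
    # psychoacoustic loudness models (e.g. Zwicker, Moore) are far richer.

    def sones_from_phons(phons: float) -> float:
        """Convert loudness level (phons) to perceived loudness (sones).

        The relation 2 ** ((phons - 40) / 10) is a reasonable approximation
        for levels above roughly 40 phon.
        """
        return 2.0 ** ((phons - 40.0) / 10.0)

    if __name__ == "__main__":
        # 40 phon corresponds to 1 sone; each additional 10 phon doubles it.
        for level in (40, 50, 60, 70, 80):
            print(f"{level} phon -> {sones_from_phons(level):.1f} sone")

In a sonification context, an estimate like this can be used to scale signal levels so that equal steps in the underlying data produce roughly equal steps in perceived loudness rather than in physical amplitude.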