dc.contributor.author | Cassidy, R. J. | |
dc.contributor.author | Berger, J. | |
dc.contributor.author | Lee, K. | |
dc.contributor.author | Maggioni, M. | |
dc.contributor.author | Coifman, R. R. | |
dc.contributor.editor | Brazil, Eoin | en_US |
dc.date.accessioned | 2014-02-02T17:54:25Z | |
dc.date.available | 2014-02-02T17:54:25Z | |
dc.date.issued | 2004-07 | |
dc.identifier.citation | Proceedings of ICAD 04. Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6-9, 2004. Ed. Barrass, S. and Vickers, P. International Community for Auditory Display, 2004. | en_US |
dc.identifier.uri | http://hdl.handle.net/1853/50777 | |
dc.description | Presented at the 10th International Conference on Auditory Display (ICAD2004) | en_US |
dc.description.abstract | The human ability to recognize, identify, and compare sounds based on their approximation of particular vowels provides an intuitive, easily learned representation for complex data. We describe implementations of vocal tract models specifically designed for sonification purposes. The models described are based on classical models including Klatt [1] and Cook [2]. Implementation of these models in MATLAB, STK [3], and PD [4] is presented. Various sonification methods were tested and evaluated using data sets of hyperspectral images of colon cells. | en_US
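The abstract describes mapping data onto vowel-like sounds via vocal tract models. As a rough, hedged illustration of that general idea (not the authors' MATLAB/STK/PD implementations), the following minimal Python sketch maps a normalized data value to a vowel quality by interpolating between textbook formant frequencies for /a/ and /i/ and passing a pulse source through a cascade of second-order (Klatt-style) resonators. The formant values, bandwidths, and mapping are assumptions for illustration only.

```python
"""Minimal vowel-sonification sketch (illustrative only, not the paper's code)."""
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

FS = 16000  # sample rate (Hz)

# Approximate first two formant frequencies (Hz) for two reference vowels.
VOWEL_A = (730.0, 1090.0)   # /a/
VOWEL_I = (270.0, 2290.0)   # /i/


def resonator(x, freq, bw, fs=FS):
    """Second-order digital resonator (Klatt-style formant filter)."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2.0 * np.pi * freq / fs
    b = [1.0 - 2.0 * r * np.cos(theta) + r * r]   # unity gain at the formant peak
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    return lfilter(b, a, x)


def sonify_value(v, dur=0.4, f0=110.0):
    """Map a data value v in [0, 1] to a vowel between /a/ (v=0) and /i/ (v=1)."""
    n = int(dur * FS)
    # Simple glottal source: impulse train at the fundamental frequency f0.
    src = np.zeros(n)
    src[::int(FS / f0)] = 1.0
    # Interpolate formant frequencies between the two reference vowels.
    f1 = (1.0 - v) * VOWEL_A[0] + v * VOWEL_I[0]
    f2 = (1.0 - v) * VOWEL_A[1] + v * VOWEL_I[1]
    y = resonator(resonator(src, f1, 80.0), f2, 120.0)  # two-formant cascade
    return y / (np.max(np.abs(y)) + 1e-12)


if __name__ == "__main__":
    # Hypothetical example: sonify a short sequence of normalized data values.
    data = [0.0, 0.25, 0.5, 0.75, 1.0]
    signal = np.concatenate([sonify_value(v) for v in data])
    wavfile.write("vowel_sonification.wav", FS, (signal * 32767).astype(np.int16))
```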
dc.publisher | Georgia Institute of Technology | en_US |
dc.subject | Auditory display | en_US |
dc.subject | Vocal synthesis | en_US |
dc.title | Auditory display of hyperspectral colon tissue images using vocal synthesis models | en_US |
dc.type | Proceedings | en_US |
dc.contributor.corporatename | Stanford University. The Center for Computer Research in Music and Acoustics | en_US |
dc.contributor.corporatename | Yale University. Department of Mathematics | en_US |
dc.publisher.original | International Community for Auditory Display | en_US |