
dc.contributor.author: Cassidy, R. J.
dc.contributor.author: Berger, J.
dc.contributor.author: Lee, K.
dc.contributor.author: Maggioni, M.
dc.contributor.author: Coifman, R. R.
dc.contributor.editor: Brazil, Eoin (en_US)
dc.date.accessioned: 2014-02-02T17:54:25Z
dc.date.available: 2014-02-02T17:54:25Z
dc.date.issued: 2004-07
dc.identifier.citation: Proceedings of ICAD 04, Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6-9, 2004. Ed. Barrass, S. and Vickers, P. International Community for Auditory Display, 2004. (en_US)
dc.identifier.uri: http://hdl.handle.net/1853/50777
dc.description: Presented at the 10th International Conference on Auditory Display (ICAD2004) (en_US)
dc.description.abstract: The human ability to recognize, identify, and compare sounds based on their approximation of particular vowels provides an intuitive, easily learned representation for complex data. We describe implementations of vocal tract models specifically designed for sonification purposes. The models described are based on classical models including Klatt [1] and Cook [2]. Implementations of these models in MATLAB, STK [3], and PD [4] are presented. Various sonification methods were tested and evaluated using data sets of hyperspectral images of colon cells. (en_US)
dc.publisher: Georgia Institute of Technology (en_US)
dc.subject: Auditory display (en_US)
dc.subject: Vocal synthesis (en_US)
dc.title: Auditory display of hyperspectral colon tissue images using vocal synthesis models (en_US)
dc.type: Proceedings (en_US)
dc.contributor.corporatename: Stanford University. The Center for Computer Research in Music and Acoustics (en_US)
dc.contributor.corporatename: Yale University. Department of Mathematics (en_US)
dc.publisher.original: International Community for Auditory Display (en_US)
dc.embargo.terms: null (en_US)
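
The abstract above describes mapping data to vowel-like timbres via vocal tract (formant) synthesis. As a rough illustration of that idea only, and not the paper's actual implementation, the Python sketch below drives a Klatt-style cascade of two formant resonators with an impulse-train source and interpolates the formant frequencies between two reference vowels as a scalar data value varies. The sample rate, formant and bandwidth values, and all function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter

FS = 16000  # sample rate in Hz (an arbitrary choice for this sketch)

# Rough average first/second formant frequencies (Hz) for two reference
# vowels; illustrative textbook values, not figures from the paper.
VOWELS = {"a": (730, 1090), "i": (270, 2290)}

def formant_filter(x, freq, bw, fs=FS):
    """Apply one second-order resonator (a single formant) to signal x."""
    r = np.exp(-np.pi * bw / fs)          # pole radius from bandwidth
    theta = 2 * np.pi * freq / fs         # pole angle from center frequency
    b = [1.0 - r]                         # crude gain normalization
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    return lfilter(b, a, x)

def sonify_value(v, dur=0.3, f0=110.0):
    """Map a data value v in [0, 1] to a vowel between /a/ (v=0) and /i/ (v=1)."""
    n = int(dur * FS)
    src = np.zeros(n)
    src[:: int(FS / f0)] = 1.0            # impulse-train glottal source at pitch f0
    # Interpolate formant frequencies between the two reference vowels.
    f1 = (1 - v) * VOWELS["a"][0] + v * VOWELS["i"][0]
    f2 = (1 - v) * VOWELS["a"][1] + v * VOWELS["i"][1]
    y = formant_filter(formant_filter(src, f1, bw=90), f2, bw=110)
    return y / (np.abs(y).max() + 1e-12)  # normalize to [-1, 1]

# Example: render three tones for three (hypothetical) pixel intensities.
tones = [sonify_value(v) for v in (0.1, 0.5, 0.9)]
```

In the paper itself, features derived from the hyperspectral colon images would drive richer Klatt- or Cook-style vocal tract parameters; this sketch only illustrates the formant-interpolation principle behind vowel-based sonification.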

