dc.contributor.author | Beilharz, Kirsty | |
dc.date.accessioned | 2014-01-13T01:41:07Z | |
dc.date.available | 2014-01-13T01:41:07Z | |
dc.date.issued | 2005-07 | |
dc.identifier.citation | Proceedings of ICAD 05-Eleventh Meeting of the International Conference on Auditory Display, Limerick, Ireland, July 6-9, 2005. Ed. Eoin Brazil. International Community for Auditory Display, 2005. | en_US |
dc.identifier.uri | http://hdl.handle.net/1853/50198 | |
dc.description | Presented at the 11th International Conference on Auditory Display (ICAD2005) | en_US |
dc.description.abstract | This paper proposes a framework for gestural interaction with information sonification, in order both to monitor data aurally and to interact with, transform and even modify the source data in a two-way communication model (Figure 1). Typical data sonification uses automatically generated computational modelling of information, represented in parameters of auditory display, to convey data in an informative representation. It is essentially a one-way data-to-display process, and interpretation by users is usually a passive experience. In contrast, gesture controllers, spatial interaction, and gesture-recognition hardware and software are used by musicians and in augmented reality systems to affect, manipulate and perform with sounds, and numerous installation and artistic works arise from motion-generated audio. The framework developed in this paper aims to conflate those technologies into a single environment in which gestural controllers allow interactive participation with the data generating the sonification, making use of the parallel between spatial audio and spatial (gestural) interaction. Converging representation and interaction processes bridges a significant gap in current sonification models. A bi-modal generative sonification and visualisation example from the author's sensate laboratory illustrates mappings between socio-spatial human activity and display. The sensor cow project, which uses wireless gesture controllers fixed to a calf, exemplifies some real-time computation and representation issues in conveying spatial motion in an easily recognised sonification, suitable for ambient display or intuitive interaction. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Georgia Institute of Technology | en_US |
dc.subject | Auditory display | en_US |
dc.subject | Sonification | en_US |
dc.subject | Wireless gesture control | en_US |
dc.title | Wireless gesture controllers to affect information sonification | en_US |
dc.type | Proceedings | en_US |
dc.contributor.corporatename | University of Sydney. Faculty of Architecture, Key Centre of Design Computing and Cognition | en_US |
dc.publisher.original | International Community for Auditory Display | en_US |
dc.embargo.terms | null | en_US |