Wireless gesture controllers to affect information sonification
This paper proposes a framework for gestural interaction with information sonification that allows users both to monitor data aurally and to interact with it, transforming and even modifying the source data in a two-way communication model (Figure 1). Typical data sonification uses automatically generated computational models of information, rendered as parameters of an auditory display, to convey data in an informative representation. It is essentially a one-way data-to-display process, and interpretation by users is usually a passive experience. In contrast, gesture controllers and spatial-interaction hardware and software for gesture recognition are used by musicians and in augmented-reality systems to affect, manipulate and perform with sounds, and numerous installation and artistic works arise from motion-generated audio. The framework developed in this paper aims to conflate those technologies into a single environment in which gestural controllers allow interactive participation with the data that is generating the sonification, making use of the parallel between spatial audio and spatial (gestural) interaction. Converging the representation and interaction processes bridges a significant gap in current sonification models. A bi-modal generative sonification and visualisation example from the author's sensate laboratory illustrates mappings between socio-spatial human activity and display. The sensor cow project, in which wireless gesture controllers are fixed to a calf, exemplifies real-time computation and representation issues in conveying spatial motion as an easily recognised sonification, suitable for ambient display or intuitive interaction.
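To make the kind of mapping described above concrete, the following is a minimal sketch of how readings from a wireless 3-axis gesture sensor might be mapped to sonification parameters (pitch, stereo pan and amplitude). The function name, axis conventions and parameter ranges are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch (not the paper's implementation): map 3-axis
# accelerometer readings from a gesture sensor to sound parameters.

def map_motion_to_sound(ax, ay, az,
                        base_freq=220.0, freq_span=660.0,
                        accel_range=2.0):
    """Map accelerometer axes (in g, clipped to +/- accel_range) to
    a pitch in Hz, a stereo pan in [-1, 1] and an amplitude in [0, 1]."""
    clip = lambda v: max(-accel_range, min(accel_range, v))
    # Vertical acceleration drives pitch: upward motion raises the tone.
    norm_z = (clip(az) + accel_range) / (2 * accel_range)   # 0..1
    freq = base_freq + norm_z * freq_span
    # Lateral acceleration drives stereo pan, paralleling spatial motion.
    pan = clip(ax) / accel_range                            # -1..1
    # Overall motion energy drives loudness.
    amp = min(1.0, (ax**2 + ay**2 + az**2) ** 0.5 / accel_range)
    return freq, pan, amp

# Example: sensor at rest, gravity on the z-axis only.
freq, pan, amp = map_motion_to_sound(0.0, 0.0, 1.0)
```

A real-time system would feed these parameters to a synthesis engine at the sensor's sample rate; the point of the sketch is the spatial-to-auditory parallel the framework exploits, with left/right motion heard as pan and vertical motion heard as pitch.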