    SMARTech Home › International Conference on Auditory Display (ICAD) › International Conference on Auditory Display, 2007

    Modeling and Continuous Sonification of Affordances for Gesture-Based Interfaces

    File: VisellCooperstock2007.pdf (961.0 KB)
    Date: 2007-06
    Authors: Visell, Yon; Cooperstock, Jeremy
    Abstract
    Sonification can play a significant role in facilitating continuous, gesture-based input in closed-loop human-computer interaction, where it offers the potential to improve the user experience by making systems easier to use and rendering their inferences more transparent. The interactive system described here provides a number of gestural affordances that may not be apparent to the user through a visual display or other cues, and provides novel means for navigating them with sound or vibrotactile feedback. The approach combines machine learning techniques for understanding a user's gestures with a method for the real-time auditory display of salient features of the underlying inference process. It uses a particle filter to track multiple hypotheses about a user's input as it unfolds, together with Dynamic Movement Primitives, introduced in work by Schaal et al. [1][2], which model a user's gesture as evidence of a nonlinear dynamical system that gave rise to it. The sonification is based on a presentation of features derived from estimates of the time-varying probability that the user's gesture conforms to state trajectories through the ensemble of dynamical systems. We propose mapping constraints for the sonification of time-dependent sampled probability densities. The system is initially being assessed with trial tasks such as figure reproduction using a multi-degree-of-freedom wireless pointing device, and a handwriting interface.
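The abstract describes tracking multiple hypotheses about an unfolding gesture with a particle filter and sonifying features of the inference in real time. The sketch below is a rough illustration of that idea only, not the authors' implementation: simple linear drift stands in for the Dynamic Movement Primitive dynamics, the gesture is one-dimensional, and all names (`step_particles`, `sonification_feature`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_particles(particles, weights, observation,
                   motion_noise=0.05, obs_noise=0.1):
    """One particle-filter update: propagate each hypothesis,
    reweight against the observed gesture sample, and resample."""
    # Propagate: each particle drifts under trivial dynamics
    # (a stand-in for the paper's Dynamic Movement Primitives).
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Reweight by the Gaussian likelihood of the observation.
    likelihood = np.exp(-0.5 * ((particles - observation) / obs_noise) ** 2)
    weights = weights * likelihood
    weights = weights / weights.sum()
    # Resample so the particle set concentrates on likely hypotheses.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

def sonification_feature(particles):
    """Map the spread of hypotheses to a normalized 'ambiguity' value
    in [0, 1]; low spread means the inference is confident, which could
    drive e.g. the brightness or density of an auditory display."""
    return float(np.tanh(particles.std()))

# Simulate tracking a simple 1-D gesture trajectory.
particles = rng.normal(0.0, 1.0, 200)
weights = np.full(200, 1.0 / 200)
for obs in np.linspace(0.0, 1.0, 30):   # the user's gesture samples
    particles, weights = step_particles(particles, weights, obs)

ambiguity = sonification_feature(particles)
```

As the observations accumulate, the particle cloud collapses around the gesture and the ambiguity feature falls, which is the kind of time-varying quantity the paper proposes to map to sound.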
    URI: http://hdl.handle.net/1853/50016
    Collections
    • International Conference on Auditory Display, 2007 [77]

    • About
    • Terms of Use
    • Contact Us
    • Emergency Information
    • Legal & Privacy Information
    • Accessibility
    • Accountability
    • Accreditation
    • Employment
    • Login
    Georgia Tech

    © Georgia Institute of Technology
