Enabling In Situ and Context-Based Motion Gesture Design
Motion gestures, detected through body-worn inertial sensors, are an expressive, fast-to-access input technique that is ubiquitously supported by mobile and wearable devices. Recent work on gesture authoring tools has shown that interaction designers can create and evaluate gesture recognizers in stationary, controlled environments. However, we still lack a generalized understanding of designers' processes and of how to enable in situ, context-based motion gesture design. This dissertation advances our understanding of these problems in two ways: first, by characterizing the factors that shape a gesture designer's process, gesture designs, and tools; and second, by demonstrating rapid motion gesture design in a variety of new contexts. Specifically, this dissertation presents: (1) a novel triadic framework that deepens our understanding of motion gestures, their designers, and the factors influencing the design of authoring tools; (2) the first explorations of in situ, context-based prototyping of motion gestures, through the development of two generations of a smartphone-based tool, Mogeste, followed by Comoge; and (3) a description of the challenges and advantages of designing motion gestures in situ, based on the first user study with both professional and student interaction designers.