Self-Synchronous Input For Wearables
Wearable devices such as smartwatches and Google Glass commonly employ touch interfaces as input modalities. Performing even simple tasks, such as accepting a call or dismissing a notification, through a touch screen disrupts the natural flow of conversation, while gesture interfaces require users to learn a gesture vocabulary. This work presents SelfSync, a gesture interface that enables users to define and perform actions subtly, through synchronized combinations of hand, head, and leg gestures, without paying attention to the device. Data from the tri-axial gyroscopes of an Android smartphone, an Android smartwatch, and a Google Glass is monitored to recognize the intended gesture with high true-positive accuracy (up to 100% for some gestures) and no false positives at higher detection thresholds. Data was collected through an in-lab user study, and gestures were evaluated for accuracy, task load, user preference, social acceptability, and user feedback. After selecting the two most promising gestures based on these evaluations, we designed an in-the-wild experiment to further test the interface. We conclude by discussing applications and current limitations of the system.
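The abstract describes recognizing a gesture when motion streams from multiple worn devices move in sync, with a threshold tuned to suppress false positives. The exact recognition pipeline is not given here; as a minimal sketch (not the authors' implementation), one could correlate the gyroscope magnitude streams from two devices and fire only when the correlation clears a threshold. The function names, the 50 Hz sampling rate, and the 0.8 threshold below are all illustrative assumptions.

```python
import numpy as np

def sync_score(sig_a, sig_b):
    # Pearson correlation between two gyroscope magnitude streams.
    # A high score suggests the two body parts moved in sync
    # (e.g., hand and head performing the same rhythmic gesture).
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    return float(np.mean(a * b))

def detect_gesture(sig_a, sig_b, threshold=0.8):
    # Fire only when the correlation clears the threshold; raising
    # the threshold trades recall for fewer false positives, matching
    # the abstract's "no false positives at higher thresholds".
    return sync_score(sig_a, sig_b) >= threshold

# Simulated 2 s of gyroscope magnitude at 50 Hz: a shared 2 Hz
# oscillation (the synchronized gesture) plus independent sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 100)
gesture = np.sin(2 * np.pi * 2 * t)
watch = gesture + 0.2 * rng.standard_normal(t.size)
glass = gesture + 0.2 * rng.standard_normal(t.size)
idle = rng.standard_normal(t.size)  # uncorrelated everyday motion

print(detect_gesture(watch, glass))  # synchronized movement detected
print(detect_gesture(watch, idle))   # uncorrelated motion rejected
```

In this sketch the threshold plays the role described in the abstract: everyday uncorrelated motion scores near zero and is rejected, while deliberately synchronized movement scores close to one.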