Enabling one-handed input for wearable computing
A new era of computing is emerging around wearable technologies. Wearable computing has been a topic of research for years; however, we are now beginning to see adoption by consumers and non-researchers due to advances in embedded and mobile software systems, low-power microprocessor design, wireless technologies, and low-cost sensors. A number of open research challenges remain in wearable computing, from providing continuous battery power and simplifying on-body networking to addressing privacy and social issues and designing the interaction experience. Traditional desktop and mobile input technologies, such as mice, keyboards, and in some cases touchscreens, are no longer suitable for wearable computing scenarios. In its most common embodiment, wearable computing today relies on a very restricted set of input and output modalities, making this an exciting research area with opportunities for innovation.

The goal of my work is to envision new user experiences and to enhance the richness and quality of the input modalities available to mobile and wearable computer systems. In this dissertation, I articulate an alternative approach to interaction with computing systems that is specifically focused on wearable, one-handed input techniques. I use the smartwatch as the platform of choice for sensing and computation; however, these techniques may also be embedded into other types of wrist-worn devices, such as bracelets or fitness bands. The interaction techniques I describe in this dissertation are designed purposefully to eliminate the need to interact directly with the on-wrist device. I take advantage of the dexterity of the arm, hand, and fingers around the device for gestural interactions. I also leverage the malleability of the human vocal resonant system to produce non-voice acoustic sounds as input when the device is brought close to the mouth.
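To make the non-voice acoustic idea concrete, the toy sketch below classifies two simulated mouth sounds (a low-frequency "blow" versus a high-frequency "shush") from a single microphone frame. The dissertation's actual system uses machine learning over microphone features; this illustration substitutes a single hand-picked feature, zero-crossing rate, and a threshold. All names, the sample rate, and the threshold are illustrative assumptions, not the system's real pipeline.

```python
import math

def zero_crossing_rate(frame):
    # Fraction of adjacent sample pairs whose signs differ; higher-frequency
    # sounds cross zero more often within a fixed-length frame.
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def classify(frame, threshold=0.2):
    # Hypothetical two-class decision standing in for a trained classifier.
    return "shush" if zero_crossing_rate(frame) > threshold else "blow"

SR = 8000  # assumed sample rate in Hz

def tone(freq, n=800):
    # Pure sine wave as a crude stand-in for a recorded microphone frame.
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

print(classify(tone(100)))   # low-frequency proxy  → blow
print(classify(tone(2000)))  # high-frequency proxy → shush
```

A real implementation would replace the single feature and threshold with a feature vector (e.g., spectral features) and a classifier trained on labeled recordings.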
In summary, I present research work in the design, implementation, and evaluation of three systems: (1) a gloveless, inertial-based technique that combines the smartwatch with sensors mounted on the thumb to sense wrist and thumb movements and enable a broad set of finger-level gestures; (2) a one-handed interaction technique that tracks the synchronous and rhythmic extension and reposition of the user's thumb (augmented with a passive magnetic ring) through correlation with on-screen blinking controls and without requiring calibration; and (3) an interaction technique that allows a person to control their smartwatch through non-voice acoustics detected using the device microphone and machine learning.
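The correlation-based selection idea behind system (2) can be sketched as follows: each on-screen control blinks with its own rhythm, the user's synchronized thumb movements are sensed as a one-dimensional signal, and the control whose blink pattern best correlates with that signal is selected, with no per-user calibration. The function names and the simulated magnetometer signal below are illustrative assumptions, not the dissertation's actual implementation.

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def blink_pattern(period, n):
    # Square wave: the control is "on" for the first half of each period.
    return [1.0 if (i % period) < period / 2 else 0.0 for i in range(n)]

def select_control(sensor, patterns):
    # Pick the blinking control whose pattern best matches the sensed signal.
    scores = [pearson(sensor, p) for p in patterns]
    return max(range(len(scores)), key=scores.__getitem__)

n = 120
patterns = [blink_pattern(20, n), blink_pattern(30, n)]  # two on-screen controls
# Simulated thumb signal synchronized to the second control's rhythm.
sensor = [0.8 * v + 0.1 for v in blink_pattern(30, n)]
print(select_control(sensor, patterns))  # → 1
```

Because selection depends only on relative correlation, not on absolute signal magnitude, this style of technique can work without calibrating the sensor to the individual user.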