Using Multiple Sensors for Mobile Sign Language Recognition

Date
2003-10
Author
Brashear, Helene
Starner, Thad
Lukowicz, Paul
Junker, Holger
Abstract
We build upon a constrained, lab-based sign language recognition system with the goal of making it a mobile assistive technology. We examine the use of multiple sensors to disambiguate noisy data and improve recognition accuracy. Our experiment compares the results of training on a small gesture vocabulary using noisy vision data, accelerometer data, and both data sets combined.
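
The "combined" condition suggests feature-level fusion of the two sensor streams. Below is a minimal sketch of such fusion, assuming time-aligned, per-frame feature vectors; the function name, feature dimensions, and NumPy-based concatenation are illustrative assumptions, not details taken from the paper.

# Minimal sketch of feature-level sensor fusion for gesture recognition.
# Assumes the vision and accelerometer streams are already synchronized
# so that row i of each array describes the same time frame.
import numpy as np

def fuse_features(vision_feats: np.ndarray, accel_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-frame vision and accelerometer features into one vector."""
    # Both streams must have the same number of frames to be fused row-wise.
    assert vision_feats.shape[0] == accel_feats.shape[0], "streams must be time-aligned"
    return np.concatenate([vision_feats, accel_feats], axis=1)

# Usage example with made-up dimensions: 100 frames of 8-D vision features
# and 3-axis accelerometer readings yield 11-D combined feature vectors.
vision = np.random.rand(100, 8)
accel = np.random.rand(100, 3)
combined = fuse_features(vision, accel)  # shape (100, 11)

A recognizer trained on the combined vectors sees both modalities at every frame, which is one straightforward way to let one sensor compensate when the other is noisy.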