
dc.contributor.author: Brashear, Helene
dc.contributor.author: Starner, Thad
dc.contributor.author: Lukowicz, Paul
dc.contributor.author: Junker, Holger
dc.date.accessioned: 2009-07-17T20:29:47Z
dc.date.available: 2009-07-17T20:29:47Z
dc.date.issued: 2003-10
dc.identifier.uri: http://hdl.handle.net/1853/28997
dc.description: Presented at the 7th IEEE International Symposium on Wearable Computers (ISWC 2003), White Plains, New York, October 2003.
dc.description: ©2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE.
dc.description.abstract: We build upon a constrained, lab-based sign language recognition system with the goal of making it a mobile assistive technology. We examine the use of multiple sensors to disambiguate noisy data and improve recognition accuracy. Our experiment compares the results of training a small gesture vocabulary on noisy vision data, accelerometer data, and both data sets combined.
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Formulaic language
dc.subject: Gesture recognition
dc.subject: Noisy sensing
dc.subject: Sign language recognition system
dc.subject: Speech recognition
dc.subject: Wearable computers
dc.title: Using Multiple Sensors for Mobile Sign Language Recognition
dc.type: Proceedings
dc.contributor.corporatename: Georgia Institute of Technology. College of Computing
dc.contributor.corporatename: Georgia Institute of Technology. Graphics, Visualization and Usability Center
dc.contributor.corporatename: ETH - Swiss Federal Institute of Technology. Wearable Computing Laboratory

