    Using Multiple Sensors for Mobile Sign Language Recognition

File: iswc2003-sign.pdf (160.4Kb)
Date: 2003-10
Authors: Brashear, Helene; Starner, Thad; Lukowicz, Paul; Junker, Holger
Abstract
We build upon a constrained, lab-based sign language recognition system with the goal of making it a mobile assistive technology. We examine the use of multiple sensors to disambiguate noisy data and improve recognition accuracy. Our experiment compares the results of training a small gesture vocabulary on noisy vision data, on accelerometer data, and on both data sets combined.
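The abstract does not describe the fusion method in detail; as a loose illustration only, combining two sensor streams at the feature level can be as simple as concatenating per-frame feature vectors before training a recognizer. All array shapes and feature counts below are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of feature-level sensor fusion: per-frame vision
# features and accelerometer features are concatenated into one combined
# feature vector. Shapes and feature counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_frames = 100
vision = rng.normal(size=(n_frames, 8))  # e.g. hand position/shape features per frame
accel = rng.normal(size=(n_frames, 3))   # e.g. 3-axis accelerometer samples per frame

# One combined feature vector per frame, fed to the classifier in place of
# either stream alone.
combined = np.concatenate([vision, accel], axis=1)
print(combined.shape)  # (100, 11)
```

The same frames must be time-aligned across sensors before concatenation; with real hardware the two streams typically sample at different rates and need resampling to a common frame clock first.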
URI: http://hdl.handle.net/1853/28997
    Collections
    • Contextual Computing Group Publications [6]

    Georgia Tech

    © Georgia Institute of Technology
