
dc.contributor.advisor: Starner, Thad
dc.contributor.author: Zafrulla, Zahoor
dc.date.accessioned: 2015-06-08T18:15:26Z
dc.date.available: 2015-06-09T05:30:07Z
dc.date.created: 2014-05
dc.date.issued: 2014-04-14
dc.date.submitted: May 2014
dc.identifier.uri: http://hdl.handle.net/1853/53461
dc.description.abstract: Automatically recognizing classifier-based grammatical structures of American Sign Language (ASL) is a challenging problem. Classifiers in ASL use surrogate hand shapes for people or "classes" of objects and provide information about their location, movement, and appearance. In the past, researchers have focused on recognition of finger spelling, isolated signs, facial expressions, and interrogative words such as WH-questions (e.g., Who, What, Where, and When). Challenging problems such as recognition of ASL sentences and classifier-based grammatical structures remain relatively unexplored in the field of ASL recognition.

One application of classifier recognition is educational games that help young deaf children acquire language skills. Previous work developed CopyCat, an educational ASL game that requires children to engage in a progressively more difficult expressive signing task as they advance through the game.

We have shown that by leveraging context we can use verification, in place of recognition, to boost machine performance in determining whether the signed responses in an expressive signing task, such as in the CopyCat game, are correct or incorrect. We have demonstrated that a machine verifier's ability to identify sign boundaries can be improved by a novel two-pass technique that processes the signed input in both the forward and reverse directions. Additionally, we have shown that we can reduce CopyCat's dependence on custom-manufactured hardware by using an off-the-shelf Microsoft Kinect depth camera to achieve similar verification performance. Finally, we show how we can extend our ability to recognize sign language by leveraging depth maps, developing a method that uses improved hand detection and hand shape classification to recognize selected classifier-based grammatical structures of ASL.
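As a rough illustration of the two-pass boundary idea described in the abstract, the Python sketch below combines a sign's boundary estimates from a forward pass and a time-reversed pass over the same frame sequence. It is not the dissertation's actual implementation: the align callable (standing in for an HMM/Viterbi-style aligner), the index-mapping step, and the averaging fusion rule are all assumptions for illustration, and the accept/reject verification step is omitted.

from typing import Callable, List, Tuple

Frames = List[List[float]]    # one feature vector per video frame
Boundary = Tuple[int, int]    # (start_frame, end_frame) of a sign

def two_pass_boundaries(frames: Frames,
                        align: Callable[[Frames], Boundary]) -> Boundary:
    """Combine forward and reverse alignments of one sequence.

    `align` is an assumed single-pass aligner (e.g., Viterbi against
    a sign model) returning the sign's (start, end) frame indices.
    """
    n = len(frames)

    # Pass 1: align the sequence as signed.
    fwd_start, fwd_end = align(frames)

    # Pass 2: align the time-reversed sequence, then map its indices
    # back into forward time (reverse index j <-> forward index n-1-j).
    rev_start, rev_end = align(frames[::-1])
    rev_start, rev_end = n - 1 - rev_end, n - 1 - rev_start

    # One plausible fusion rule: average the two estimates for each
    # boundary. (The dissertation's actual rule may differ.)
    return (fwd_start + rev_start) // 2, (fwd_end + rev_end) // 2

if __name__ == "__main__":
    # Toy aligner that pretends the sign spans the middle of the clip.
    dummy = lambda seq: (len(seq) // 4, 3 * len(seq) // 4)
    print(two_pass_boundaries([[0.0]] * 100, dummy))  # -> (25, 74)

Because the two passes start their alignments from opposite ends of the input, their errors tend to differ, which is why combining them can sharpen the boundary estimate.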
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: American Sign Language
dc.subject: Hand tracking
dc.subject: Sign language recognition
dc.subject: Verification
dc.subject: Depth maps
dc.subject: Educational games
dc.title: Automatic recognition of American Sign Language classifiers
dc.type: Dissertation
dc.description.degree: Ph.D.
dc.contributor.department: Computer Science
dc.embargo.terms: 2015-05-01
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Essa, Irfan
dc.contributor.committeeMember: Hamilton, Harley
dc.contributor.committeeMember: Athitsos, Vassilis
dc.contributor.committeeMember: Rehg, James M.
dc.date.updated: 2015-06-08T18:15:26Z

