Design and evaluation of a multimodal assistive technology using tongue commands, head movements, and speech recognition for people with tetraplegia
Sahadat, Md Nazmus
People with high-level (C1-C4) spinal cord injury (SCI) cannot use their limbs to perform activities of daily living without assistance. Current assistive technologies (ATs) use their remaining capabilities (tongue, muscle, brain, speech, sniffing) as input methods to help them control devices such as computers and smartphones. However, these ATs are far less efficient than the gold standards (mouse and keyboard, touch interfaces, joysticks, and so forth) used in everyday life. Therefore, in this work, a novel multimodal assistive system is designed to provide better, more intuitive accessibility. The multimodal Tongue Drive System (mTDS) utilizes three key remaining abilities (speech, tongue, and head movements) to help people with tetraplegia interact with their environment, such as accessing computers and smartphones or driving wheelchairs. Tongue commands serve as discrete, switch-like inputs; head movements serve as proportional, continuous inputs; and speech recognition enables faster text entry than keyboard typing, together emulating a combined mouse-and-keyboard interface for computer and smartphone access. Novel signal processing algorithms are developed and implemented in the wearable unit to provide universal access to multiple devices from the wireless mTDS. Non-disabled subjects participated in multiple studies assessing the efficacy of the mTDS against the gold standards, and subjects with tetraplegia evaluated how readily the technology can be learned. Significant improvements in accuracy and speed are observed across different computer access and wheelchair mobility tasks. Thus, with sufficient learning of the mTDS, it is feasible to narrow the performance gap between non-disabled individuals and people with tetraplegia relative to existing ATs.