Simple item record

dc.contributor.author: Ravindran, Sourabh [en_US]
dc.date.accessioned: 2007-03-27T18:20:14Z
dc.date.available: 2007-03-27T18:20:14Z
dc.date.issued: 2006-11-20 [en_US]
dc.identifier.uri: http://hdl.handle.net/1853/14066
dc.description.abstract: Human-like performance by machines on speech and audio processing tasks has remained an elusive goal. In an attempt to bridge the performance gap between humans and machines, there has been an increased effort to study and model physiological processes. However, the widespread use of previously proposed biologically inspired features has been hampered mainly by either a lack of robustness across a range of signal-to-noise ratios or formidable computational costs. In physiological systems, sensory processing occurs in several stages. It is likely that signal features and biological processing techniques evolved together and are complementary or well matched. For precisely this reason, modeling the feature extraction processes should go hand in hand with modeling the processes that use these features. This research presents a front-end feature extraction method for audio signals inspired by the human peripheral auditory system. New developments in the field of machine learning are leveraged to build classifiers that maximize the performance gains afforded by these features. The structure of the classification system is similar to what might be expected in physiological processing. Further, the feature extraction and classification algorithms can be efficiently implemented using the low-power cooperative analog-digital signal processing platform. The usefulness of the features is demonstrated on tasks of audio classification, speech versus non-speech discrimination, and speech recognition. The low-power nature of the classification system makes it ideal for use in applications such as hearing aids, hand-held devices, and surveillance through acoustic scene monitoring. [en_US]
dc.format.extent: 1839761 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology [en_US]
dc.subject: Gain adaptation [en_US]
dc.subject: Speech processing [en_US]
dc.subject: Auditory modeling [en_US]
dc.subject: Machine learning [en_US]
dc.subject: Noise robustness [en_US]
dc.subject.lcsh: Machine learning [en_US]
dc.subject.lcsh: Pattern recognition systems [en_US]
dc.subject.lcsh: Automatic speech recognition [en_US]
dc.title: Physiologically Motivated Methods for Audio Pattern Classification [en_US]
dc.type: Dissertation [en_US]
dc.description.degree: Ph.D. [en_US]
dc.contributor.department: Electrical and Computer Engineering [en_US]
dc.description.advisor: Committee Chair: David V. Anderson; Committee Member: Chin-Hui Lee; Committee Member: James M. Rehg; Committee Member: Paul E. Hasler; Committee Member: Yucel Altunbasak [en_US]
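
The abstract describes, at a high level, an auditory-inspired front end feeding machine-learning classifiers; the dissertation itself (at the handle URI above) specifies the actual model. As a purely illustrative sketch of what such a front end can look like, the Python below builds a bank of log-spaced bandpass filters, rectifies and smooths each band to approximate envelope (hair-cell) processing, then frame-averages and log-compresses the band energies. Every name and parameter here (auditory_features, the band count, filter orders, cutoffs) is an assumption for illustration, not taken from the dissertation.

import numpy as np
from scipy.signal import butter, lfilter

def auditory_features(x, fs, n_bands=16, frame_len=400, hop=160):
    """Return log band energies with shape (n_frames, n_bands)."""
    # Log-spaced band edges roughly mimic the cochlea's frequency resolution.
    edges = np.logspace(np.log10(100.0), np.log10(0.45 * fs), n_bands + 1)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype="band", fs=fs)
        band = lfilter(b, a, x)
        # Half-wave rectification plus lowpass smoothing stands in for
        # hair-cell transduction, yielding a slowly varying band envelope.
        rect = np.maximum(band, 0.0)
        bl, al = butter(2, 50.0, btype="low", fs=fs)
        envelopes.append(lfilter(bl, al, rect))
    env = np.stack(envelopes, axis=1)
    # Frame-average and log-compress, echoing the compressive nonlinearity
    # of the auditory periphery.
    n_frames = 1 + (len(x) - frame_len) // hop
    feats = np.empty((n_frames, n_bands))
    for i in range(n_frames):
        seg = env[i * hop : i * hop + frame_len]
        feats[i] = np.log(seg.mean(axis=0) + 1e-10)
    return feats

# Example: 16-band features for one second of noise sampled at 16 kHz.
feats = auditory_features(np.random.randn(16000), fs=16000)
print(feats.shape)  # (98, 16)

Such features would then feed a classifier for the tasks named in the abstract (audio classification, speech versus non-speech discrimination, speech recognition); the specific classifier architecture is described in the dissertation itself.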

