Show simple item record

dc.contributor.advisor	Sun, Jimeng
dc.contributor.author	Choi, Edward
dc.date.accessioned	2018-08-20T15:35:29Z
dc.date.available	2018-08-20T15:35:29Z
dc.date.created	2018-08
dc.date.issued	2018-05-23
dc.date.submitted	August 2018
dc.identifier.uri	http://hdl.handle.net/1853/60226
dc.description.abstract	Deep learning has recently shown superior performance to traditional statistical methods in complex domains such as computer vision, audio processing, and natural language processing. Naturally, deep learning techniques, combined with the large electronic health record (EHR) data generated by healthcare organizations, have the potential to bring dramatic changes to the healthcare industry. However, typical deep learning models are highly expressive black boxes, and their lack of interpretability makes them difficult to adopt in real-world healthcare applications. For deep learning methods to be readily adopted in real-world clinical practice, they must be interpretable without sacrificing prediction accuracy. In this thesis, we propose interpretable and accurate deep learning methods for modeling EHR, focusing specifically on longitudinal EHR data. We begin with a direct application of a well-known deep learning algorithm, the recurrent neural network (RNN), to capture the temporal nature of longitudinal EHR. Then, building on this initial approach, we develop interpretable deep learning models that address three aspects of computational healthcare: efficient representation learning of medical concepts, code-level interpretation of sequence predictions, and incorporation of domain knowledge into the model. Another important aspect addressed in this thesis is a framework for effectively utilizing multiple data sources (e.g., diagnoses, medications, procedures), which can be extended in the future to incorporate additional data modalities such as lab values and clinical notes.
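The abstract describes applying an RNN to sequences of coded visits to capture the temporal structure of longitudinal EHR. The sketch below is a minimal illustration of that general idea, not the thesis's actual implementation: a GRU over multi-hot visit vectors that predicts the codes of the following visit. The class name, vocabulary size, hidden size, and the random data are hypothetical placeholders.

```python
# Minimal sketch (assumed, illustrative only): GRU over multi-hot visit
# vectors predicting the codes of the next visit in a patient's record.
import torch
import torch.nn as nn

class VisitRNN(nn.Module):
    def __init__(self, num_codes: int, hidden_size: int = 128):
        super().__init__()
        self.embed = nn.Linear(num_codes, hidden_size)  # embed multi-hot visit vectors
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_codes)    # per-code logits for the next visit

    def forward(self, visits: torch.Tensor) -> torch.Tensor:
        # visits: (batch, num_visits, num_codes) multi-hot sequence
        h, _ = self.rnn(torch.relu(self.embed(visits)))
        return self.out(h)

# Toy usage with random data standing in for real EHR sequences.
num_codes, batch, num_visits = 500, 4, 10
model = VisitRNN(num_codes)
x = (torch.rand(batch, num_visits, num_codes) < 0.02).float()
logits = model(x[:, :-1])                               # visits 1..T-1 as input
loss = nn.functional.binary_cross_entropy_with_logits(logits, x[:, 1:])  # predict visits 2..T
loss.backward()
print(loss.item())
```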
dc.format.mimetype	application/pdf
dc.language.iso	en_US
dc.publisher	Georgia Institute of Technology
dc.subject	Deep learning
dc.subject	Healthcare
dc.title	Doctor AI: Interpretable deep learning for modeling electronic health records
dc.type	Dissertation
dc.description.degree	Ph.D.
dc.contributor.department	Computational Science and Engineering
thesis.degree.level	Doctoral
dc.contributor.committeeMember	Duke, Jon
dc.contributor.committeeMember	Eisenstein, Jacob
dc.contributor.committeeMember	Rehg, James
dc.contributor.committeeMember	Stewart, Walter F.
dc.date.updated	2018-08-20T15:35:29Z

