    Semantic representation learning for discourse processing

    View/Open
    JI-DISSERTATION-2016.pdf (2.912Mb)
    Date
    2016-07-21
    Author
    Ji, Yangfeng
    Abstract
    Discourse processing identifies coherence relations, such as contrast and causal relations, in well-organized texts. The outcomes of discourse processing can benefit both research and applications in natural language processing, such as recognizing the major opinion in a product review or evaluating the coherence of student writing. Identifying discourse relations in text is an essential task of discourse processing. Relation identification requires intensive semantic understanding of the text, especially when no word (e.g., "but") signals the relation. Most prior work relies on sparse representations constructed from surface-form features (word pairs, POS tags, etc.), which fail to encode enough semantic information. As an alternative, I propose to use distributed representations of texts, which are dense vectors and flexible enough to share information efficiently.

    The goal of my work is to develop new models with representation learning for discourse processing. Specifically, in this thesis I present a unified framework that learns the distributed representation and the discourse models jointly. The joint training not only learns the discourse models but also shapes the distributed representation for them, so that the learned representation encodes the semantic information needed to facilitate the processing tasks. The evaluation shows that our systems outperform prior work that uses only surface-form representations.

    In this thesis, I also discuss extending the representation learning framework to other problems in discourse processing: (1) How can representation learning build a discourse model with only distant supervision? Investigating this problem helps reduce the dependency of discourse processing on annotated data. (2) How can discourse processing be combined with other NLP tasks, such as language modeling? Exploring this problem is expected to show the value of discourse information and draw more attention to research on discourse processing. At the end of the thesis, I also demonstrate the benefit of using discourse information for document-level machine translation and sentiment analysis.
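    A minimal sketch of the joint-training idea described in the abstract (not the thesis implementation): dense word embeddings and a discourse-relation classifier are trained together, so the relation-identification objective shapes the learned representation rather than relying on sparse surface-form features. The encoder choice, vocabulary size, dimensions, and relation set below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class DiscourseRelationClassifier(nn.Module):
        """Encodes two discourse arguments as dense vectors and predicts their relation."""
        def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128, num_relations=4):
            super().__init__()
            # Embeddings are learned jointly with the classifier, so the
            # discourse objective shapes the distributed representation.
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(2 * hidden_dim, num_relations)

        def encode(self, tokens):
            # tokens: (batch, seq_len) word ids -> (batch, hidden_dim) dense vector
            _, h = self.encoder(self.embed(tokens))
            return h.squeeze(0)

        def forward(self, arg1, arg2):
            v1, v2 = self.encode(arg1), self.encode(arg2)
            return self.classifier(torch.cat([v1, v2], dim=-1))

    # Toy usage: random word ids stand in for two discourse arguments.
    model = DiscourseRelationClassifier()
    arg1 = torch.randint(1, 10000, (8, 20))
    arg2 = torch.randint(1, 10000, (8, 20))
    labels = torch.randint(0, 4, (8,))  # e.g. contrast, cause, elaboration, temporal
    loss = nn.functional.cross_entropy(model(arg1, arg2), labels)
    loss.backward()  # gradients reach the embeddings as well as the classifier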
    URI
    http://hdl.handle.net/1853/55636
    Collections
    • College of Computing Theses and Dissertations [1071]
    • Georgia Tech Theses and Dissertations [22398]
