
    LP and SDP extended formulations: Lower bounds and approximation algorithms

    View/Open
    ROY-DISSERTATION-2017.pdf (2.672Mb)
    Date
    2017-05-24
    Author
    Roy, Aurko
    Abstract
In this thesis we study various aspects of linear and semidefinite programs, including their limitations in approximating combinatorial optimization problems as well as applications of these paradigms to problems in machine learning. In the first part we show inapproximability results for polynomial-sized LPs and SDPs for several problems. For the class of Constraint Satisfaction Problems (CSPs), it is known that general LPs and SDPs are no more powerful than the Sherali-Adams and Lasserre hierarchies, respectively. We show lower bounds for general LPs and SDPs for several non-CSP problems, such as Matching, Matching on 3-regular graphs, Independent Set, Vertex Cover, (non-uniform) Sparsest Cut, and (non-uniform) Balanced Separator. We also obtain the first general SDP inapproximability result for Maximum Cut: no polynomial-sized SDP can achieve an approximation ratio better than 15/16. Moreover, we show that, contrary to the situation with CSPs, there are problems such as Independent Set for which the Lasserre SDP hierarchy is a suboptimal relaxation compared to even linear-sized LP relaxations.

    The second part of the thesis deals with applications of these paradigms to problems in machine learning. In particular, we study a recent cost function proposed for hierarchical clustering and show how to write an integer program describing the convex hull of this problem. We also show that rounding "spreading metric" relaxations leads to improved approximation guarantees for this problem. Finally, we study another classic problem in machine learning, namely reinforcement learning in a robust setting, where the transition matrix corresponding to each action is not known explicitly but may lie in some (convex) uncertainty set and may be chosen adversarially. We give approximation algorithms to compute an (approximate) optimal policy in this setting. The main ingredient of this work is defining "robust" variants of the classical Q-iterations and TD-iterations, in which each update involves an additional linear optimization step. We prove convergence under appropriate choices of step lengths and discount factor.
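    The abstract does not name the hierarchical clustering cost function. A minimal sketch, assuming the Dasgupta-style objective often studied in this line of work: each edge pays its weight times the number of leaves of the smallest subtree (cluster) in which its endpoints are still together. The nested-tuple tree encoding is a hypothetical convenience, not notation from the thesis.

    ```python
    def leaves(tree):
        """Leaf set of a binary tree encoded as nested 2-tuples over int leaves."""
        if isinstance(tree, int):
            return {tree}
        left, right = tree
        return leaves(left) | leaves(right)

    def clustering_cost(tree, weighted_edges):
        """Dasgupta-style cost: an edge split at an internal node pays its
        weight times the number of leaves under that node."""
        if isinstance(tree, int):
            return 0.0
        left, right = tree
        l, r = leaves(left), leaves(right)
        # Total weight of edges whose endpoints are separated at this node.
        split = sum(w for (u, v, w) in weighted_edges
                    if (u in l and v in r) or (u in r and v in l))
        return (len(l) + len(r)) * split \
            + clustering_cost(left, weighted_edges) \
            + clustering_cost(right, weighted_edges)

    # Toy instance: a triangle {0, 1, 2} with a pendant vertex 3.
    edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0), (2, 3, 0.5)]
    tree = (((0, 1), 2), 3)
    print(clustering_cost(tree, edges))  # 2*1 + 3*2 + 4*0.5 = 10.0
    ```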
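    To illustrate the "additional linear optimization step" in the robust updates, here is a generic robust value-iteration sketch on a toy MDP, not the thesis's exact Q- or TD-iteration: the adversary picks the worst transition distribution from an uncertainty set before each backup. The uncertainty set used (an elementwise box around a nominal distribution, intersected with the simplex) and all parameter choices are hypothetical; the inner linear program is solved with SciPy's linprog.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def worst_case_value(p_nominal, v, eps):
        """Inner linear step: min p.v over distributions p with
        |p - p_nominal| <= eps elementwise (a hypothetical box set)."""
        n = len(p_nominal)
        bounds = [(max(0.0, p - eps), min(1.0, p + eps)) for p in p_nominal]
        res = linprog(c=v, A_eq=np.ones((1, n)), b_eq=[1.0], bounds=bounds)
        return res.fun

    def robust_value_iteration(P, R, gamma=0.9, eps=0.1, iters=100):
        """P[a][s] is the nominal next-state distribution for action a in
        state s; R[s][a] is the reward. Returns robust values and a policy."""
        n_states, n_actions = R.shape
        V = np.zeros(n_states)
        for _ in range(iters):
            Q = np.empty((n_states, n_actions))
            for s in range(n_states):
                for a in range(n_actions):
                    # Adversary chooses the worst distribution in the set.
                    Q[s, a] = R[s, a] + gamma * worst_case_value(P[a, s], V, eps)
            V = Q.max(axis=1)  # the agent maximizes over actions
        return V, Q.argmax(axis=1)

    # Toy 2-state, 2-action MDP.
    P = np.array([[[0.8, 0.2], [0.3, 0.7]],
                  [[0.5, 0.5], [0.9, 0.1]]])
    R = np.array([[1.0, 0.0], [0.0, 2.0]])
    V, policy = robust_value_iteration(P, R)
    print(V, policy)
    ```

    The thesis's robust Q- and TD-iterations are stochastic-approximation variants of this backup with explicit step lengths; the full dynamic-programming sweep above is only the simplest setting in which the inner linear optimization appears.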
    URI
    http://hdl.handle.net/1853/58666
    Collections
    • College of Computing Theses and Dissertations [1156]
    • Georgia Tech Theses and Dissertations [23403]
    • School of Computer Science Theses and Dissertations [79]
