Show simple item record

dc.contributor.advisor    Dilkina, Bistra
dc.contributor.author    Khalil, Elias
dc.date.accessioned    2020-05-20T16:57:08Z
dc.date.available    2020-05-20T16:57:08Z
dc.date.created    2019-05
dc.date.issued    2019-03-28
dc.date.submitted    May 2019
dc.identifier.uri    http://hdl.handle.net/1853/62668
dc.description.abstract    Discrete optimization algorithms underlie intelligent decision-making in a wide variety of domains. From airline fleet scheduling to data center resource management and matching in ride-sharing services, decisions are often modeled with binary on/off variables that are subject to operational and financial constraints. Branch-and-bound algorithms, as well as heuristics, have been developed to tackle hard discrete optimization problems. Typically, the algorithm designer first identifies structural properties of the problem, then exploits them to solve it. This standard paradigm in algorithm design suffers from two main limitations. On the one hand, a good algorithm may be very complex, and thus hard to design manually. On the other hand, in many real-world applications, the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. Without much trial-and-error and domain expertise, it is difficult to tailor optimization algorithms to such a distribution of instances. We show how Machine Learning (ML) can be used to overcome these limitations. In MIP branch-and-bound, we propose to use ML to make data-driven, input-specific decisions during tree search. Our experimental results show that, for both variable selection and primal heuristic selection, ML approaches can significantly improve the performance of a solver on a variety of instance sets. For settings where optimality guarantees are not of concern, we design Deep Learning approaches for automatically deriving new heuristics that are parametrized as recurrent neural networks. These learned heuristics exploit the properties of the instance distribution, resulting in effective algorithms for various graph optimization problems and general integer programs.
This dissertation establishes Machine Learning as a central component of the algorithm design process for discrete optimization, one that complements human ingenuity rather than replacing it. This effort has given rise to a variety of theoretical, modeling, and practical research questions in ML as it pertains to algorithm design. We also discuss the potential of discrete optimization methods in ML, particularly in the context of Adversarial Attacks on a class of widely used discrete neural networks. As ML models become more pervasive in software systems and automated decision-making, enforcing constraints on their behavior or discovering vulnerabilities therein will necessitate the development of new, scalable constraint reasoning approaches.
dc.format.mimetype    application/pdf
dc.language.iso    en_US
dc.publisher    Georgia Institute of Technology
dc.subject    Discrete optimization
dc.subject    Integer programming
dc.subject    Machine learning
dc.subject    Deep learning
dc.subject    Adversarial machine learning
dc.subject    Reinforcement learning
dc.title    Towards tighter integration of machine learning and discrete optimization
dc.type    Dissertation
dc.description.degree    Ph.D.
dc.contributor.department    Computational Science and Engineering
thesis.degree.level    Doctoral
dc.contributor.committeeMember    Nemhauser, George L.
dc.contributor.committeeMember    Song, Le
dc.contributor.committeeMember    Ahmed, Shabbir
dc.contributor.committeeMember    Sandholm, Tuomas
dc.date.updated    2020-05-20T16:57:08Z

