Towards tighter integration of machine learning and discrete optimization
Discrete optimization algorithms underlie intelligent decision-making in a wide variety of domains. From airline fleet scheduling to data center resource management and matching in ride-sharing services, decisions are often modeled with binary on/off variables subject to operational and financial constraints. Branch-and-bound algorithms, as well as heuristics, have been developed to tackle hard discrete optimization problems. Typically, the algorithm designer first identifies structural properties of the problem, then exploits them to solve it. This standard paradigm in algorithm design suffers from two main limitations. On the one hand, a good algorithm may be very complex, and thus hard to design manually. On the other hand, in many real-world applications, the same optimization problem is solved repeatedly: the problem structure stays the same while the data varies. Tailoring optimization algorithms to such a distribution of instances requires substantial trial-and-error and domain expertise. We show how Machine Learning (ML) can be used to overcome these limitations. In Mixed Integer Programming (MIP) branch-and-bound, we propose to use ML to make data-driven, input-specific decisions during tree search. Our experimental results show that, for both variable selection and primal heuristic selection, ML approaches can significantly improve solver performance on a variety of instance sets. For settings where optimality guarantees are not a concern, we design Deep Learning approaches for automatically deriving new heuristics that are parametrized as recurrent neural networks. These learned heuristics exploit the properties of the instance distribution, yielding effective algorithms for various graph optimization problems and general integer programs.
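The variable-selection idea can be sketched in miniature: replace a hand-crafted branching rule with a scoring function over per-variable features, where the score ranks candidates for branching. The feature set, the linear model standing in for a trained ranker, and all function names below are illustrative assumptions, not the dissertation's actual method.

```python
import math

def variable_features(lp_value, obj_coef, num_constraints):
    # Hypothetical per-variable features; a real system would use many more.
    frac = lp_value - math.floor(lp_value)
    return [
        min(frac, 1.0 - frac),  # fractionality of the LP relaxation value
        abs(obj_coef),          # objective coefficient magnitude
        num_constraints,        # number of constraints the variable appears in
    ]

def learned_score(features, weights):
    # A linear model stands in for a trained ML ranker.
    return sum(w * f for w, f in zip(weights, features))

def select_branching_variable(candidates, weights):
    # candidates: list of (var_index, lp_value, obj_coef, num_constraints)
    best_var, best_score = None, float("-inf")
    for idx, lp_value, obj_coef, n_cons in candidates:
        frac = lp_value - math.floor(lp_value)
        if frac == 0.0:
            continue  # integral variables need no branching
        score = learned_score(variable_features(lp_value, obj_coef, n_cons), weights)
        if score > best_score:
            best_var, best_score = idx, score
    return best_var
```

In a real solver, the weights would be learned offline from a distribution of instances (e.g., by imitating an expensive expert rule such as strong branching) and the scoring function would be queried at every node of the search tree.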
This dissertation establishes Machine Learning as a central component of the algorithm design process for discrete optimization, one that complements human ingenuity rather than replacing it. This effort has given rise to a variety of theoretical, modeling, and practical research questions in ML as it pertains to algorithm design. We also discuss the potential of discrete optimization methods in ML, particularly in the context of adversarial attacks on a class of widely used discrete neural networks. As ML models become more pervasive in software systems and automated decision-making, enforcing constraints on their behavior or discovering vulnerabilities therein will necessitate the development of new, scalable constraint reasoning approaches.
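To make the adversarial-attack setting concrete, consider a toy binarized neuron with ±1 inputs and weights: an attack searches for a small number of input bit flips that change the prediction. The brute-force search below is only an illustrative sketch of the discrete search space; scalable approaches formulate this search as a constraint reasoning or integer programming problem rather than enumerating it.

```python
import itertools

def sign(x):
    return 1 if x >= 0 else -1

def bnn_predict(x, w):
    # Single binarized neuron: prediction = sign(w . x), with x, w in {-1, +1}^n
    return sign(sum(wi * xi for wi, xi in zip(w, x)))

def find_adversarial_flip(x, w, budget):
    """Brute-force search for at most `budget` bit flips (+1 <-> -1)
    that change the prediction. Returns the flipped input, or None."""
    original = bnn_predict(x, w)
    n = len(x)
    for k in range(1, budget + 1):
        for flip_set in itertools.combinations(range(n), k):
            x_adv = list(x)
            for i in flip_set:
                x_adv[i] = -x_adv[i]
            if bnn_predict(x_adv, w) != original:
                return x_adv
    return None
```

The enumeration grows exponentially with the flip budget and input dimension, which is precisely why exact attacks on discrete networks call for dedicated constraint reasoning machinery.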
Showing items related by title, author, creator and subject.
Mehta, Nishant A. (Georgia Institute of Technology, 2013-05-15) Given the "right" representation, learning is easy. This thesis studies representation learning and meta-learning, with a special focus on sparse representations. Meta-learning is fundamental to machine learning, and it ...
Berlind, Christopher (Georgia Institute of Technology, 2015-07-22) Traditional supervised machine learning algorithms are expected to have access to a large corpus of labeled examples, but the massive amount of data available in the modern world has made unlabeled data much easier to ...
Schroecker, Yannick Karl Daniel (Georgia Institute of Technology, 2020-03-16) Imitation learning has emerged as one of the most effective approaches to train agents to act intelligently in unstructured and unknown domains. On its own or in combination with reinforcement learning, it enables agents ...