Learning to Optimize from Data: Faster, Better, and Guaranteed
Learning and optimization are closely related: state-of-the-art learning problems hinge on the sophisticated design of optimizers. Conversely, optimization cannot be considered independent of data, since data may implicitly contain important information that guides optimization, as seen in the recent waves of meta-learning and learning to optimize. This talk will discuss Learning Augmented Optimization (LAO), a nascent area that bridges classical optimization with the latest data-driven learning by augmenting classical model-based optimization with learning-based components. By adapting their behavior to the properties of the input distribution, the "augmented" algorithms may reduce their complexity by orders of magnitude and/or improve their accuracy, while still preserving favorable theoretical guarantees such as convergence. I will start by diving into a case study on exploiting deep learning to solve the convex LASSO problem, showing its linear convergence in addition to superior parameter efficiency. Our discussion will then extend to applying LAO approaches to plug-and-play (PnP) optimization and population-based optimization. I will next present our recent results on ensuring the robustness of LAO, that is, how well the algorithm continues to perform when the test problem instances deviate from the training problem distribution. The talk will conclude with a few thoughts and reflections, as well as pointers to potential future directions.
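To make the LASSO case study concrete, the sketch below shows the classical ISTA iteration that learning-to-optimize methods build on. In the learned setting (e.g., LISTA-style approaches discussed in the talk), the step size and threshold used at each iteration would be trained from a distribution of problem instances; here they are set analytically, purely for illustration, and the problem instance is synthetic.

```python
import numpy as np

# Minimal sketch of ISTA for the LASSO problem
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
# In learning-to-optimize, the per-iteration step sizes and thresholds
# below would be learned from data rather than fixed analytically.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iters=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)           # gradient of 0.5 * ||A x - b||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny synthetic instance: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

A learned variant would replace the fixed `1/L` step and `lam/L` threshold with trainable per-layer parameters, which is one route to the faster convergence and parameter efficiency highlighted in the talk.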