
    Learning to Optimize from Data: Faster, Better, and Guaranteed

    View/Open
    zwang.mp4 (541.9 MB)
    zwang_videostream.html (1.323 KB)
    transcript.txt (59.59 KB)
    thumbnail.jpg (63.59 KB)
    Date
    2019-11-20
    Author
    Wang, Zhangyang
    Abstract
    Learning and optimization are closely related: state-of-the-art learning problems hinge on the sophisticated design of optimizers. Conversely, optimization cannot be treated as independent of data, since data may implicitly contain important information that guides optimization, as seen in the recent waves of meta-learning and learning to optimize. This talk will discuss Learning Augmented Optimization (LAO), a nascent area that bridges classical optimization with the latest data-driven learning by augmenting classical model-based optimization with learning-based components. By adapting their behavior to the properties of the input distribution, the "augmented" algorithms may reduce their complexity by orders of magnitude and/or improve their accuracy, while still preserving favorable theoretical guarantees such as convergence. I will start with a case study on exploiting deep learning to solve the convex LASSO problem, showing linear convergence in addition to superior parameter efficiency. The discussion will then extend to applying LAO approaches to plug-and-play (PnP) optimization and population-based optimization. I will next present our recent results on ensuring the robustness of LAO, that is, how applicable the algorithm remains when the test problem instances deviate from the training problem distribution. The talk will conclude with a few thoughts and reflections, as well as pointers to potential future directions.
    URI
    http://hdl.handle.net/1853/62075
    Collections
    • Machine Learning@Georgia Tech Seminars [52]
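
    The abstract's first case study is learning a fast solver for the convex LASSO problem. As an illustration only (not the speaker's method or code), the sketch below unrolls classical ISTA for a fixed number of iterations and treats the per-iteration step sizes and thresholds as parameters that a learning-to-optimize approach would fit from training data, in the spirit of LISTA-style methods. All names, shapes, and parameter values are assumptions.

    # Minimal learning-to-optimize sketch for LASSO (illustrative, hand-set parameters
    # stand in for learned ones): min_x 0.5*||Ax - y||^2 + lambda*||x||_1.
    import numpy as np

    def soft_threshold(v, theta):
        """Proximal operator of the l1 norm."""
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def ista_step(x, A, y, step, theta):
        """One ISTA iteration: gradient step on the data term, then soft-thresholding."""
        grad = A.T @ (A @ x - y)
        return soft_threshold(x - step * grad, theta)

    def unrolled_ista(A, y, steps, thetas):
        """Run K unrolled iterations; (steps, thetas) are the per-layer parameters
        a learning-to-optimize method would fit from a distribution of problems."""
        x = np.zeros(A.shape[1])
        for step, theta in zip(steps, thetas):
            x = ista_step(x, A, y, step, theta)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50)) / np.sqrt(20)
    x_true = np.zeros(50)
    x_true[:3] = [1.0, -2.0, 0.5]
    y = A @ x_true
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth term's gradient
    K = 16
    steps = [1.0 / L] * K                # would be learned per layer in LAO
    thetas = [0.1 / L] * K               # would be learned per layer in LAO
    x_hat = unrolled_ista(A, y, steps, thetas)
    print(np.linalg.norm(x_hat - x_true))

    In a learned variant, the loop above is treated as a fixed-depth network and (steps, thetas) are trained on sample problems, which is what lets the unrolled solver reach a given accuracy in far fewer iterations than its classical counterpart.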
