Variations of Submodularity and Diversity: from Robust Optimization to Markov Chains
The combinatorial concept of submodular set functions has proved to be a very useful discrete structure for optimization in machine learning and its applications. In this talk, I will show recent work on generalizations and specializations of this structure, and its connections to robustness and efficiency in machine learning. First, generalizations to integer and continuous functions lead to algorithms for solving a special class of nonconvex optimization problems. We show how, with further work, this generalization can be leveraged to introduce robustness to uncertainty in budget allocation and bipartite influence maximization problems. The resulting algorithm solves a nonconvex minimax game. Second, log-submodular discrete probability measures that induce diversity, repulsion, and strong notions of negative dependence find applications ranging from randomized matrix approximations and model sketching for large-scale learning to experiment design and interpretable unsupervised learning. But practical sampling methods have lagged behind. I will outline how connections to real stable polynomials lead to fast-mixing Markov chains for practical sampling and to solving an open problem posed by Avron and Boutsidis (2013). This talk is based on joint work with Matthew Staib, Chengtao Li and Suvrit Sra.
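To make the sampling setting concrete, the sketch below is a generic Metropolis swap chain targeting a determinantal (log-submodular) measure P(S) ∝ det(L_S) over size-k subsets, a standard example of a diversity-inducing distribution. It is a minimal illustrative stand-in, not the specific fast-mixing chains analyzed in the talk; the kernel `L` and function names are assumptions for the example.

```python
import numpy as np

def det_submatrix(L, S):
    """Determinant of the principal submatrix of L indexed by the set S."""
    idx = sorted(S)
    return np.linalg.det(L[np.ix_(idx, idx)])

def swap_chain_kdpp(L, k, steps, rng):
    """Metropolis swap chain over size-k subsets targeting P(S) ∝ det(L_S).

    L    : positive semidefinite kernel matrix (n x n)
    k    : subset size
    steps: number of proposed swap moves
    rng  : numpy random Generator
    """
    n = L.shape[0]
    # Start from a uniformly random size-k subset.
    S = set(rng.choice(n, size=k, replace=False).tolist())
    pS = det_submatrix(L, S)
    for _ in range(steps):
        # Propose swapping one element inside S for one outside it.
        i = rng.choice(sorted(S))
        j = rng.choice(sorted(set(range(n)) - S))
        T = (S - {i}) | {j}
        pT = det_submatrix(L, T)
        # Accept with the Metropolis ratio; symmetric proposal.
        if rng.random() < min(1.0, pT / max(pS, 1e-300)):
            S, pS = T, pT
    return S
```

Because the proposal is symmetric, the acceptance ratio det(L_T)/det(L_S) makes the chain reversible with the k-DPP as its stationary distribution; the talk's contribution concerns when such chains mix rapidly, which this sketch does not address.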
- ARC Talks and Events