
dc.contributor.author: Rabbat, Mike
dc.date.accessioned: 2018-04-09T19:36:24Z
dc.date.available: 2018-04-09T19:36:24Z
dc.date.issued: 2018-04-04
dc.identifier.uri: http://hdl.handle.net/1853/59516
dc.description: Presented on April 4, 2018 at 12:00 p.m. in the Marcus Nanotechnology Building, Room 1116. [en_US]
dc.description: Mike Rabbat is a Research Scientist in the Facebook AI Research group. He is currently on leave from McGill University, where he is an Associate Professor of Electrical and Computer Engineering. Mike’s research interests are in the areas of networks, statistical signal processing, and machine learning. Currently, he is working on gossip algorithms for distributed processing, distributed tracking, and algorithms and theory for signal processing on graphs. [en_US]
dc.description: Runtime: 61:51 minutes [en_US]
dc.description.abstract: We consider a multi-agent framework for distributed optimization where each agent in the network has access to a local convex function and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents' local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents in the network. When the local functions are strongly convex with Lipschitz-continuous gradients, we show that a subsequence of the iterates at each agent converges to a neighbourhood of the global minimum, where the size of the neighbourhood depends on the degree of asynchrony in the multi-agent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that Asynchronous Subgradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size. This is joint work with Mahmoud Assran. [en_US]
dc.format.extent: 61:51 minutes
dc.language.iso: en_US [en_US]
dc.relation.ispartofseries: Machine Learning @ Georgia Tech (ML@GT) Seminar [en_US]
dc.subject: Distributed optimization [en_US]
dc.subject: Gossip algorithms [en_US]
dc.subject: Multi-agent optimization [en_US]
dc.title: Asynchronous (Sub)gradient-Push [en_US]
dc.type: Lecture [en_US]
dc.type: Video [en_US]
dc.contributor.corporatename: Georgia Institute of Technology. Machine Learning [en_US]
dc.contributor.corporatename: Facebook AI Research (FAIR) [en_US]
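
For context on the abstract above: the talk's method extends subgradient-push (Nedić and Olshevsky), in which agents communicating over a directed graph mix push-sum numerators and weights with their out-neighbors and take local (sub)gradient steps on the de-biased ratios. Below is a minimal synchronous sketch of that baseline, not the asynchronous variant presented in the talk; the quadratic objectives, directed-ring topology, and 1/t step size are illustrative assumptions, not details from the talk.

```python
import numpy as np

# Minimal synchronous subgradient-push sketch (after Nedic & Olshevsky).
# Toy problem (assumed): n agents, agent i holds f_i(x) = 0.5*(x - c[i])**2,
# so the minimizer of sum_i f_i is the mean of c.

n, T = 8, 2000
rng = np.random.default_rng(0)
c = rng.normal(size=n)  # each agent's local target

# Directed ring with self-loops. A is column-stochastic, as push-sum
# requires: column j splits agent j's mass over out-neighbors {j, j+1 mod n}.
A = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=0))

x = np.zeros(n)  # push-sum numerators, one scalar iterate per agent
y = np.ones(n)   # push-sum weights
for t in range(1, T + 1):
    w = A @ x                 # mix numerators along out-edges
    y = A @ y                 # mix weights identically
    z = w / y                 # de-biased estimate at each agent
    grad = z - c              # gradient of f_i at z_i
    x = w - (1.0 / t) * grad  # local gradient step, diminishing step size

print("agent estimates:", np.round(z, 3))  # each agent approaches the mean
print("global minimizer:", round(float(c.mean()), 3))
```

Every agent's estimate z_i converges to the same global minimizer even though each agent only ever sees its own f_i; the column-stochastic mixing is what lets push-sum work on directed graphs where doubly-stochastic averaging is unavailable.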

