dc.contributor.author: Recht, Benjamin
dc.date.accessioned: 2018-11-26T20:15:04Z
dc.date.available: 2018-11-26T20:15:04Z
dc.date.issued: 2018-11-14
dc.identifier.uri: http://hdl.handle.net/1853/60560
dc.description: Presented on November 14, 2018 at 12:15 p.m. in the Marcus Nanotechnology Building, Room 1116.
dc.description: Benjamin Recht is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He was previously an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison. Ben received his B.S. in Mathematics from the University of Chicago, and an M.S. and PhD from the MIT Media Laboratory. After completing his doctoral work, he was a postdoctoral fellow in the Center for the Mathematics of Information at Caltech.
dc.description: Runtime: 59:52 minutes
dc.description.abstract: Given the dramatic successes in machine learning and reinforcement learning over the past half decade, there has been a surge of interest in applying these techniques to continuous control problems in robotics and autonomous vehicles. Though such control applications appear to be straightforward generalizations of standard reinforcement learning, few fundamental baselines have been established prescribing how well one must know a system in order to control it. In this talk, I will discuss how one might merge techniques from statistical learning theory with robust control to derive such baselines for continuous control. I will explore several examples that balance parameter identification against controller design and demonstrate finite-sample tradeoffs between estimation fidelity and desired control performance. I will describe how these simple baselines give us insight into the shortcomings of existing reinforcement learning methodology. I will close by listing several exciting open problems that must be solved before we can build robust, safe learning systems that interact with an uncertain physical environment.
dc.format.extent: 59:52 minutes
dc.language.iso: en_US
dc.relation.ispartofseries: Machine Learning@Georgia Tech Seminar Series
dc.subject: Control
dc.subject: Machine learning
dc.subject: Reinforcement learning
dc.title: The Statistical Foundations of Learning to Control
dc.type: Lecture
dc.type: Video
dc.contributor.corporatename: Georgia Institute of Technology. Machine Learning
dc.contributor.corporatename: University of California, Berkeley. Dept. of Electrical Engineering and Computer Sciences

