Simple item record

dc.contributor.author: van de Geer, Sara
dc.date.accessioned: 2018-09-11T19:06:19Z
dc.date.available: 2018-09-11T19:06:19Z
dc.date.issued: 2018-08-31
dc.identifier.uri: http://hdl.handle.net/1853/60424
dc.identifier.uri: http://hdl.handle.net/1853/60426
dc.identifier.uri: http://hdl.handle.net/1853/60427
dc.description: Presented on August 31, 2018, from 2:00 p.m. to 3:00 p.m. at the Georgia Institute of Technology (Georgia Tech).
dc.description: Transdisciplinary Research Institute for Advancing Data Science (TRIAD) Distinguished Lecture Series: Sparsity, Oracles and Inference in High Dimensional Statistics - Part 1.
dc.description: Part 2: http://hdl.handle.net/1853/60426; Part 3: http://hdl.handle.net/1853/60427
dc.description: Sara van de Geer has been Full Professor at the Seminar for Statistics at ETH Zurich since September 2005. Her main field of research is mathematical statistics, with special interest in high-dimensional problems. Focus points are empirical processes, curve estimation, machine learning, model selection, and non- and semiparametric statistics. She is an associate editor of Probability Theory and Related Fields, the Journal of the European Mathematical Society, the Scandinavian Journal of Statistics, the Journal of Machine Learning Research, Statistical Surveys, and the Journal of Statistical Planning and Inference. She is a member of the Research Council of the Swiss National Science Foundation, a member of the International Statistical Institute, and a fellow of the Institute of Mathematical Statistics. She is a correspondent of the Royal Dutch Academy of Sciences, a member of the Leopoldina, the German National Academy of Sciences, and President of the Bernoulli Society.
dc.description: Runtime: 58:18 minutes
dc.description.abstract: There will be three lectures, which in principle will be independent units. Their common theme is exploiting sparsity in high-dimensional statistics. Sparsity means that the statistical model is allowed to have quite a few parameters, but that it is believed that most of these parameters are actually not relevant. We let the data themselves decide which parameters to keep by applying a regularization method. The aim is then to derive so-called sparsity oracle inequalities.

In the first lecture, we consider a statistical procedure called M-estimation. "M" stands here for "minimum": one tries to minimize a risk function in order to obtain the best fit to the data. Least squares is a prominent example. Regularization is done by adding a sparsity-inducing penalty that discourages too good a fit to the data. An example is the l₁-penalty, which together with least squares gives rise to an estimation procedure called the Lasso. We address the question: why does the l₁-penalty lead to sparsity oracle inequalities, and how does this generalize to other norms? We will see in the first lecture that one needs conditions which relate the penalty to the risk function: in a certain sense, they have to be “compatible”.

We discuss these compatibility conditions in the second lecture in the context of the Lasso, where the l₁-penalty needs to be compatible with the least squares risk, i.e. with the l₂-norm. We give as example the total variation penalty. For D := {x₁, …, xₙ} ⊂ ℝ an increasing sequence, the total variation of a function f : D → ℝ is the sum of the absolute values of its jump sizes. We derive compatibility and, as a consequence, a sparsity oracle inequality which shows adaptation to the number of jumps.

In the third lecture we use sparsity to establish confidence intervals for a parameter of interest. The idea is to use the penalized estimator as an initial estimator in a one-step Newton-Raphson procedure. Functionals of this new estimator can, under certain conditions, be shown to be asymptotically normally distributed. We show that in the high-dimensional case, one may further profit from sparsity conditions if the inverse Hessian of the problem is not sparse.
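The main objects named in the abstract can be written out as follows. This is a minimal sketch in assumed notation, not taken from the lectures themselves: the tuning parameter λ, the design matrix X, and the surrogate inverse Hessian Θ̂ are illustrative choices for the standard linear-model setting.

% Lasso: least squares plus an l1-penalty with (assumed) tuning parameter \lambda.
\[
  \hat\beta := \arg\min_{\beta \in \mathbb{R}^p}
    \Bigl\{ \tfrac{1}{n}\,\|Y - X\beta\|_2^2 + \lambda\,\|\beta\|_1 \Bigr\}.
\]
% Total variation of f : D -> R on the increasing sequence D = {x_1 < ... < x_n}:
% the sum of the absolute values of the jump sizes.
\[
  \mathrm{TV}(f) := \sum_{i=2}^{n} \bigl|\, f(x_i) - f(x_{i-1}) \,\bigr|.
\]
% One-step Newton-Raphson correction of the initial penalized estimator
% (in the linear model this is the de-sparsified Lasso), where \hat\Theta is an
% approximate inverse of the Hessian \hat\Sigma = X^\top X / n:
\[
  \hat b := \hat\beta + \hat\Theta\, \tfrac{1}{n}\, X^\top \bigl(Y - X\hat\beta\bigr).
\]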
dc.format.extent: 58:18 minutes
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.relation.ispartofseries: TRIAD Distinguished Lecture Series
dc.subject: M-estimation
dc.subject: Oracle inequalities
dc.subject: Sparsity
dc.title: Sharp Oracle Inequalities for Non-Convex Loss
dc.title.alternative: Sparsity, Oracles and Inference in High-Dimensional Statistics
dc.type: Lecture
dc.type: Video
dc.contributor.corporatename: Georgia Institute of Technology. Transdisciplinary Research Institute for Advancing Data Science
dc.contributor.corporatename: Eidgenössische Technische Hochschule Zürich
dc.contributor.corporatename: ETH Zürich

