Show simple item record

dc.contributor.author: Minsker, Stanislav (en_US)
dc.date.accessioned: 2012-09-20T18:20:29Z
dc.date.available: 2012-09-20T18:20:29Z
dc.date.issued: 2012-07-05 (en_US)
dc.identifier.uri: http://hdl.handle.net/1853/44808
dc.description.abstract: This dissertation investigates learning scenarios in which a high-dimensional parameter has to be estimated from a sample of fixed size, often smaller than the dimension of the problem. The first part answers some open questions for the binary classification problem in the framework of active learning. Given a random couple (X, Y) with unknown distribution P, the goal of binary classification is to predict a label Y based on the observation X. The prediction rule is constructed from a sequence of observations sampled from P. The concept of active learning can be informally characterized as follows: on every iteration, the algorithm is allowed to request a label Y for any instance X that it considers to be the most informative. The contribution of this work consists of two parts: first, we provide minimax lower bounds for the performance of active learning methods; second, we propose an active learning algorithm that attains nearly optimal rates over a broad class of underlying distributions and is adaptive with respect to the unknown parameters of the problem. The second part of this thesis is related to sparse recovery in the framework of dictionary learning. Let (X, Y) be a random couple with unknown distribution P. Given a collection of functions H, the goal of dictionary learning is to construct a prediction rule for Y given by a linear combination of the elements of H. The problem is sparse if there exists a good prediction rule that depends on a small number of functions from H. We propose an estimator of the unknown optimal prediction rule based on a penalized empirical risk minimization algorithm. We show that the proposed estimator is able to take advantage of the possible sparse structure of the problem by providing probabilistic bounds for its performance. (en_US)
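The active-learning idea sketched in the abstract (on each iteration, request the label of the instance the algorithm considers most informative) can be illustrated on a toy one-dimensional threshold problem. Everything below is an illustrative assumption, not the adaptive algorithm proposed in the dissertation: the pool of instances, the noiseless labels sign(x), and the bisection-style query rule that asks for the unlabeled point closest to the midpoint of the interval of thresholds still consistent with the labels seen so far.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool: 200 one-dimensional instances with noiseless labels sign(x),
# so the Bayes classifier is a threshold at 0 (illustrative assumption).
pool = np.sort(rng.uniform(-1.0, 1.0, size=200))

def true_label(x):
    return 1 if x >= 0 else -1

labels = {}  # index -> label obtained by querying

# Seed with two passively sampled labels, then query actively.
for i in rng.choice(len(pool), size=2, replace=False):
    labels[int(i)] = true_label(pool[int(i)])

for _ in range(20):
    # Interval of thresholds consistent with all labels seen so far:
    # (largest queried negative, smallest queried positive).
    neg = [pool[i] for i, y in labels.items() if y == -1]
    pos = [pool[i] for i, y in labels.items() if y == 1]
    lo = max(neg) if neg else -1.0
    hi = min(pos) if pos else 1.0
    mid = (lo + hi) / 2.0
    # Query the unlabeled instance closest to the midpoint: the point
    # whose label the current hypothesis class is least certain about.
    unlabeled = [i for i in range(len(pool)) if i not in labels]
    q = min(unlabeled, key=lambda i: abs(pool[i] - mid))
    labels[q] = true_label(pool[q])

# Final classifier: threshold at the smallest queried positive instance.
pos = [pool[i] for i, y in labels.items() if y == 1]
t_hat = min(pos) if pos else 1.0
err = np.mean([(1 if x >= t_hat else -1) != true_label(x) for x in pool])
print(len(labels), err)  # 22 labels requested in total
```

The point of the sketch is label efficiency: the bisection-style rule concentrates its 20 queries near the decision boundary instead of spreading them uniformly over the pool, which is the informal advantage of active over passive learning that the thesis quantifies with minimax rates.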
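The sparse-recovery part can likewise be illustrated with a minimal penalized empirical risk minimization sketch: squared loss plus an l1 penalty (a lasso), minimized by proximal gradient descent (ISTA). The finite linear dictionary H, the sparsity level, the noise scale, and the penalty parameter lam are all illustrative assumptions; the thesis treats a much more general setting (possibly infinite dictionaries, with oracle inequalities for the resulting estimator).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: Y depends on only 3 of 50 dictionary elements (here, linear
# features), i.e. the optimal prediction rule is sparse. All parameters
# below are illustrative assumptions.
n, p = 100, 50
H = rng.normal(size=(n, p))          # dictionary evaluated on the sample
beta_true = np.zeros(p)
beta_true[[0, 1, 2]] = [2.0, -1.5, 1.0]
Y = H @ beta_true + 0.1 * rng.normal(size=n)

# Penalized empirical risk minimization: (1/2n)||H b - Y||^2 + lam ||b||_1,
# solved by ISTA (gradient step followed by soft thresholding).
lam = 0.1
L = np.linalg.norm(H, 2) ** 2 / n    # Lipschitz constant of the gradient
beta = np.zeros(p)
for _ in range(500):
    grad = H.T @ (H @ beta - Y) / n
    z = beta - grad / L
    beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

support = np.flatnonzero(np.abs(beta) > 1e-3)
print(support)  # indices of the recovered nonzero coefficients
```

The l1 penalty is what lets the estimator exploit sparsity: coefficients of irrelevant dictionary elements are driven exactly to zero, so the recovered support concentrates on the few functions that actually matter.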
dc.publisher: Georgia Institute of Technology (en_US)
dc.subject: Active learning (en_US)
dc.subject: Sparse recovery (en_US)
dc.subject: Oracle inequality (en_US)
dc.subject: Confidence bands (en_US)
dc.subject: Infinite dictionary (en_US)
dc.subject.lcsh: Estimation theory -- Asymptotic theory
dc.subject.lcsh: Estimation theory
dc.subject.lcsh: Distribution (Probability theory)
dc.subject.lcsh: Prediction theory
dc.subject.lcsh: Active learning
dc.subject.lcsh: Algorithms
dc.subject.lcsh: Mathematical optimization
dc.subject.lcsh: Chebyshev approximation
dc.title: Non-asymptotic bounds for prediction problems and density estimation. (en_US)
dc.type: Dissertation (en_US)
dc.description.degree: PhD (en_US)
dc.contributor.department: Mathematics (en_US)
dc.description.advisor: Committee Chair: Koltchinskii, Vladimir; Committee Member: Bakhtin, Yuri; Committee Member: Balcan, Maria-Florina; Committee Member: Houdre, Christian; Committee Member: Romberg, Justin (en_US)

