Show simple item record

dc.contributor.author: Saket, Rishi
dc.date.accessioned: 2009-08-26T17:47:37Z
dc.date.available: 2009-08-26T17:47:37Z
dc.date.issued: 2009-06-29
dc.identifier.uri: http://hdl.handle.net/1853/29681
dc.description.abstract: In this thesis we prove intractability results for well-studied problems in computational learning and approximation. Let ε, μ > 0 be arbitrarily small constants and t be an arbitrary constant positive integer. We show an almost optimal hardness factor of d^{1-ε} for computing an equivalent DNF expression with minimum terms for a Boolean function on d variables, given its truth table. In the study of weak learnability, we prove an optimal 1/2 + ε inapproximability for the accuracy of learning an intersection of two halfspaces with an intersection of t halfspaces. Further, we study the learnability of small DNF formulas, and prove optimal 1/2 + ε inapproximability for the accuracy of learning (i) a two-term DNF by a t-term DNF, and (ii) an AND under adversarial μ-noise by a t-CNF. In addition, we show a 1 - 2^{-d} + ε inapproximability for accurately learning parities (over GF(2)), under adversarial μ-noise, by degree-d polynomials, where d is a constant positive integer. We also provide negative answers to the possibility of stronger semi-definite programming (SDP) relaxations yielding much better approximations for graph partitioning problems such as Maximum Cut and Sparsest Cut, by constructing integrality gap examples for them. For Maximum Cut and Sparsest Cut we construct examples -- with gaps α^{-1} - ε (where α is the Goemans-Williamson constant) and Ω((log log log n)^{1/13}) respectively -- for the standard SDP relaxations augmented with O((log log log n)^{1/6}) rounds of Sherali-Adams constraints. The construction for Sparsest Cut also implies that an n-point negative type metric may incur a distortion of Ω((log log log n)^{1/13}) to embed into ℓ_1, even if the induced submetric on every subset of O((log log log n)^{1/6}) points is isometric to ℓ_1. We also construct an integrality gap of Ω(log log n) for the SDP relaxation of the Uniform Sparsest Cut problem augmented with triangle inequalities, disproving a well-known conjecture of Arora, Rao and Vazirani.
dc.publisher: Georgia Institute of Technology
dc.subject: Integrality gaps
dc.subject: Approximation
dc.subject: Hardness
dc.subject: Learning
dc.subject.lcsh: Combinatorial optimization
dc.subject.lcsh: Computational learning theory
dc.subject.lcsh: Machine learning
dc.title: Intractability results for problems in computational learning and approximation
dc.type: Dissertation
dc.description.degree: Ph.D.
dc.contributor.department: Computing
dc.description.advisor: Committee Chair: Khot, Subhash; Committee Member: Tetali, Prasad; Committee Member: Thomas, Robin; Committee Member: Vempala, Santosh; Committee Member: Vigoda, Eric


