Show simple item record

dc.contributor.author: Kong, Seunghyun
dc.date.accessioned: 2007-05-25T17:24:42Z
dc.date.available: 2007-05-25T17:24:42Z
dc.date.issued: 2007-04-04
dc.identifier.uri: http://hdl.handle.net/1853/14529
dc.description.abstract: This thesis is a computational study of recently developed algorithms that aim to overcome degeneracy in the simplex method. We study four algorithms: the non-negative least-squares algorithm, the least-squares primal-dual algorithm, the least-squares network flow algorithm, and the combined-objective least-squares algorithm. All four use least-squares measures to solve their subproblems and therefore do not exhibit degeneracy, but they had never been implemented efficiently, so their practical performance had not been established. In this research we implement these algorithms efficiently and improve their performance over the preliminary results. For the non-negative least-squares algorithm, we develop a basis update technique and data structures suited to our purpose. We also develop a measure that helps find a good ordering of columns and rows, yielding a sparse and concise representation of the QR factors. The least-squares primal-dual algorithm uses the non-negative least-squares problem as its subproblem, minimizing infeasibility while maintaining dual feasibility and complementary slackness. The least-squares network flow algorithm is the least-squares primal-dual algorithm applied to min-cost network flow instances and can efficiently solve much larger instances than the general algorithm. The combined-objective least-squares algorithm is the primal version of the least-squares primal-dual algorithm: each subproblem minimizes the true objective and the infeasibility simultaneously, so that optimality and primal feasibility are attained together, using a big-M weight on the infeasibility. We develop techniques to improve the convergence rate of each algorithm: relaxation of the complementary slackness condition, a special pricing strategy, and a dynamic M value. Our computational results show that the least-squares primal-dual algorithm and the combined-objective least-squares algorithm outperform the CPLEX Primal solver but are slower than the CPLEX Dual solver. The least-squares network flow algorithm is as fast as the CPLEX Network solver.
dc.publisher: Georgia Institute of Technology
dc.subject: Network flow
dc.subject: Linear programming
dc.subject: Least squares
dc.subject: Primal-dual method
dc.title: Linear Programming Algorithms Using Least-Squares Method
dc.type: Dissertation
dc.description.degree: Ph.D.
dc.contributor.department: Industrial and Systems Engineering
dc.description.advisor: Committee Chair: Ellis L. Johnson; Committee Co-Chair: Earl Barnes; Committee Member: Joel Sokol; Committee Member: Martin Savelsbergh; Committee Member: Prasad Tetali
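
The abstract above repeatedly refers to non-negative least-squares subproblems. As a purely illustrative aside, the following minimal sketch solves a generic non-negative least-squares problem, minimize ||Ax - b||_2 subject to x >= 0, using SciPy's off-the-shelf nnls routine; it is not the thesis's specialized QR-based implementation, and the matrix A and vector b are made-up placeholders.

```python
# Minimal illustrative sketch: a generic non-negative least-squares solve,
#     minimize ||A x - b||_2   subject to   x >= 0.
# This uses SciPy's off-the-shelf solver and is NOT the specialized,
# QR-factorization-based implementation developed in the thesis;
# A and b below are made-up placeholder data.
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [2.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
b = np.array([4.0, 5.0, 6.0, 3.0])

x, residual = nnls(A, b)          # x is componentwise non-negative
print("x* =", x)
print("||A x* - b||_2 =", residual)
```

According to the abstract, subproblems of this general form minimize infeasibility while maintaining dual feasibility and complementary slackness, and the thesis's efficiency gains come from tailored basis updates and column/row orderings that keep the QR factors sparse.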

