
dc.contributor.author: Agarwal, Virat
dc.date.accessioned: 2010-09-15T19:10:46Z
dc.date.available: 2010-09-15T19:10:46Z
dc.date.issued: 2010-06-28
dc.identifier.uri: http://hdl.handle.net/1853/34839
dc.description.abstract: Analyzing massive data sets and streams is computationally very challenging. Data sets in systems biology, network analysis, and security use a network abstraction to construct large-scale graphs. Graph algorithms such as traversal and search are memory-intensive and typically require very little computation, with access patterns that are irregular and fine-grained. Increasing streaming-data rates in domains such as security, mining, and finance leave algorithm designers only a handful of clock cycles (with current general-purpose computing technology) to process every incoming byte of data in-core, in real time. This, along with the increasing complexity of mining patterns and other analytics, puts further pressure on an already high computational requirement. Processing streaming data in finance carries the additional constraint of low latency, which prevents the algorithm from using common throughput-oriented techniques such as batching. The primary contributions of this dissertation are novel parallel data-analysis algorithms for graph traversal on large-scale graphs, pattern recognition and keyword scanning on massive streaming data, financial market data feed processing and analytics, and data transformation. These algorithms capture the machine-independent aspects of each problem, to guarantee performance portability to future processors, and are paired with high-performance implementations on multicore processors that embed processor-specific optimizations. Our breadth-first search graph traversal algorithm processes massive graphs with billions of vertices and edges on commodity multicore processors at rates competitive with supercomputing results in the recent literature. We also present high-performance, scalable keyword scanning on streaming data using a novel automaton-compression algorithm, a model of computation based on small software content-addressable memories (CAMs), and a unique data layout that forces data reuse and minimizes memory traffic. Using a high-level algorithmic approach to processing financial feeds, we present a solution that decodes and normalizes options market data at rates an order of magnitude beyond the current needs of the market, yet remains portable and flexible enough for other feeds in this domain. In this dissertation we discuss in detail the algorithm-design challenges of processing massive data, and we present solutions and techniques that we believe can be used and extended to solve future research problems in this domain.
dc.publisher: Georgia Institute of Technology
dc.subject: Graph algorithms
dc.subject: Parallel computing
dc.subject: Massive data
dc.subject: Financial market data
dc.subject: Streaming data
dc.subject: Keyword scanning
dc.subject.lcsh: Data mining
dc.subject.lcsh: Algorithms
dc.subject.lcsh: Pattern recognition systems
dc.subject.lcsh: Computer algorithms
dc.title: Algorithm design on multicore processors for massive-data analysis
dc.type: Dissertation
dc.description.degree: Ph.D.
dc.contributor.department: Computational Science and Engineering
dc.description.advisor: Committee Chair: Bader, David; Committee Member: Biros, George; Committee Member: Fujimoto, Richard; Committee Member: Perrone, Michael; Committee Member: Vuduc, Rich
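
The BFS contribution described in the abstract centers on frontier expansion over a large sparse graph, where neighbor lookups produce the irregular, fine-grained memory accesses the dissertation targets. Below is a minimal, sequential sketch of level-synchronous BFS over a compressed sparse row (CSR) graph; the CSRGraph type, the bfs_levels function, and the tiny example graph are illustrative assumptions, not the dissertation's optimized multicore implementation.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // CSR representation: neighbors of vertex v occupy
    // neighbors[offsets[v] .. offsets[v+1]).
    struct CSRGraph {
        std::vector<int64_t> offsets;    // size = num_vertices + 1
        std::vector<int64_t> neighbors;  // size = num_edges
    };

    // Level-synchronous BFS: returns each vertex's distance from source,
    // or -1 if unreachable.
    std::vector<int64_t> bfs_levels(const CSRGraph& g, int64_t source) {
        const int64_t n = static_cast<int64_t>(g.offsets.size()) - 1;
        std::vector<int64_t> level(n, -1);      // -1 marks unvisited
        std::vector<int64_t> frontier{source};  // vertices in the current level
        level[source] = 0;
        int64_t depth = 0;
        while (!frontier.empty()) {
            std::vector<int64_t> next;
            ++depth;
            for (int64_t u : frontier) {
                // Neighbor lists scatter across memory: these reads are the
                // irregular, fine-grained accesses the abstract refers to.
                for (int64_t i = g.offsets[u]; i < g.offsets[u + 1]; ++i) {
                    const int64_t v = g.neighbors[i];
                    if (level[v] == -1) {
                        level[v] = depth;
                        next.push_back(v);
                    }
                }
            }
            frontier.swap(next);  // advance to the next BFS level
        }
        return level;
    }

    int main() {
        // Tiny undirected example: edges 0-1, 0-2, 1-3 (stored both ways).
        CSRGraph g;
        g.offsets   = {0, 2, 4, 5, 6};
        g.neighbors = {1, 2, 0, 3, 0, 1};
        const std::vector<int64_t> lv = bfs_levels(g, 0);
        for (int64_t v = 0; v < static_cast<int64_t>(lv.size()); ++v)
            std::cout << "vertex " << v << " -> level " << lv[v] << "\n";
        return 0;
    }

Swapping the two frontier vectors each iteration keeps the working set small; a multicore version along the lines the abstract describes would additionally partition the frontier across threads and add processor-specific optimizations not reproduced here.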

