Parallel algorithms for enabling fast and scalable analysis of high-throughput sequencing datasets
The objective of this research is to develop parallel algorithms that enable fast and scalable analysis of large-scale high-throughput sequencing datasets. The genome of an organism consists of one or more long DNA sequences called chromosomes, each a sequence of bases. Depending on the organism, the length of the genome can range from several thousand to several billion bases. Genome sequencing, which involves deciphering the sequence of bases in the genome, is an important tool in genomics research. The sequencing instruments widely deployed today can read only short DNA sequences; however, they can read up to several billion such sequences at a time, and are used to sequence a large number of randomly generated short fragments of the genome. These fragments, commonly referred to as “reads”, are a few hundred bases long. This work specifically tackles three problems associated with high-throughput sequencing short-read datasets: (1) parallel read error correction for large-scale genomics datasets, (2) partitioning of large-scale high-throughput sequencing datasets, and (3) parallel compression of large-scale genomics datasets.
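The abstract does not specify the correction method used, but a common basis for read error correction is the k-mer spectrum: because reads sample the genome redundantly, k-mers that occur frequently across the dataset are likely genuine, while rare k-mers likely contain sequencing errors. The sketch below is a minimal, illustrative serial version of this idea (the function names and the threshold are assumptions, not the author's implementation); the parallel algorithms in the thesis would distribute the counting and correction steps across processors.

```python
from collections import Counter

def kmer_spectrum(reads, k):
    """Count every length-k substring (k-mer) across all reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, threshold=2):
    """Attempt single-base corrections: a position is suspect when some
    k-mer covering it is rare; try the substitution that makes all
    covering k-mers frequent ("solid") again."""
    bases = "ACGT"
    read = list(read)
    for i in range(len(read)):
        # k-mers covering position i start at indices j in [i-k+1, i]
        span = range(max(0, i - k + 1), min(i, len(read) - k) + 1)
        if all(counts[''.join(read[j:j + k])] >= threshold for j in span):
            continue  # all covering k-mers are solid; position looks fine
        original = read[i]
        for b in bases:
            if b == original:
                continue
            read[i] = b
            if all(counts[''.join(read[j:j + k])] >= threshold for j in span):
                break  # substitution fixed every covering k-mer
        else:
            read[i] = original  # no substitution helped; leave as-is
    return ''.join(read)
```

For example, given three error-free copies of `ACGTACGT` and one read with a single substitution (`ACGTACCT`), the rare k-mers `TACC` and `ACCT` pinpoint the error, and the correction step restores `ACGTACGT`.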