Georgia Tech Theses and Dissertations
http://hdl.handle.net/1853/3739
Electronic Theses and Dissertations
Sun, 23 Oct 2016 14:49:35 GMT
http://hdl.handle.net/1853/55692
Parallel wavelet-adaptive direct numerical simulation of multiphase flows with phase-change
Forster, Christopher J.
High-power, high-energy-density electronics are becoming more common with advances in computing, electric vehicles, and modern defense systems. Such applications require efficient, compact, and economical heat exchanger designs capable of extremely large heat fluxes. Phase-change cooling methods offer these characteristics; however, the design and optimization of these devices is extremely challenging. Numerical simulations can assist in this effort by providing details of the flow that are inaccessible to experimental measurements. One such system of interest to this work is acoustically enhanced nucleate boiling, which is capable of dramatic increases in the Critical Heat Flux (CHF). The focus of the present work is the development of a numerical simulation capable of predicting the behavior of acoustically enhanced nucleate boiling up to the CHF. A general-purpose wavelet-adaptive Direct Numerical Simulation (DNS) that runs entirely on the Graphics Processing Unit (GPU) architecture has been developed in this work to allow accurate, error-controlled simulation of a wide range of applications with multiphase flow at all Mach numbers. This work focuses on the development of a high-order simulation framework that can adequately address the challenges posed by acoustically enhanced nucleate boiling. Nucleate boiling in the presence of acoustic fields suffers from a large disparity between important time scales, namely the acoustic time scale and the convective time scale near the incompressible limit. To address this issue, the compressible Navier-Stokes equations are solved using a preconditioned dual-time stepping method, allowing accurate simulation of the flow at all Mach numbers, everywhere in the domain.
The governing equations are solved on a wavelet-adaptive grid that provides a direct measure of local error and is adapted at every time step to follow the evolution of the flow for a significant reduction in computational resources and expense. The use of the wavelet-adaptive grid and the dual-time stepping method together allows for rigorous error control in both space and time. All components of this simulation have been redesigned and optimized for efficient implementation on the GPU architecture to offset the overhead of grid adaptation and further reduce time-to-solution. The development of the high-performance, error-controlled computational framework and its verification and validation is presented.
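The wavelet-adaptive idea described above — using wavelet detail coefficients as a direct measure of local error to decide where the grid must stay fine — can be illustrated with a minimal sketch. This is not the thesis code; the 1-D lifting-style Haar setup, the function names, and the threshold value are all illustrative assumptions.

```python
# Illustrative sketch (not the thesis implementation): flag grid points
# for refinement by thresholding interpolation-error detail coefficients,
# the mechanism behind wavelet-adaptive grids.
import math

def haar_details(f):
    """Detail coefficient at each odd index: the difference between the
    sample and its prediction from the two even-indexed neighbors."""
    details = []
    for i in range(1, len(f) - 1, 2):
        predicted = 0.5 * (f[i - 1] + f[i + 1])
        details.append(abs(f[i] - predicted))
    return details

def flag_for_refinement(f, eps):
    """Return odd indices whose detail coefficient exceeds eps; these
    points carry unresolved structure and must be kept/refined."""
    return [2 * k + 1 for k, d in enumerate(haar_details(f)) if d > eps]

# A smooth field needs no refinement; a sharp front (mimicking a phase
# interface) is flagged only near the discontinuity.
smooth = [math.sin(0.1 * x) for x in range(65)]
front = [0.0] * 32 + [1.0] * 33
print(flag_for_refinement(smooth, 1e-2))  # -> []
print(flag_for_refinement(front, 1e-2))   # flags only the point at the step
```

The same thresholding, applied level by level in a multiresolution hierarchy, yields the error-controlled coarsening/refinement the abstract describes.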
Thu, 04 Aug 2016 00:00:00 GMT
http://hdl.handle.net/1853/55691
Impact of connecting different types of energy simulation models for data center cooling and waste heat re-utilization
Mok, SeungHo
Data centers are facilities that house the information technology (IT) equipment used for our daily digital activities. They are among the largest energy consumers and fastest growing industries in the world. Because data centers consume a very large amount of electricity, which in turn produces a large amount of heat that must be removed, data center cooling has long been an important topic. Given the scale a data center may require, it is critical to investigate cooling strategies meticulously before construction begins. The main purpose of this study is to develop energy simulation models that can be used to estimate overall data center efficiency for a location of interest. This large-scale modeling is accomplished by developing several component-level models, which may be built in different modeling tools and interact with each other. This thesis considers four cooling scenarios for a 400 kW data center: (a) an air-cooled data center with a rotary regenerative heat exchanger and a DX cooling system; (b) a hybrid-cooled data center with an air-cooled chiller and a DX cooling system; (c) a hybrid-cooled data center utilizing warm water through a liquid-to-liquid heat exchanger and a DX cooling system; and (d) a hybrid-cooled data center that uses rear-door heat exchangers and a water-cooled chiller. Because most data centers produce significant streams of waste heat, this study also considers currently available or developmental low-grade waste-heat re-use techniques, including domestic heating, water pre-heating, and direct power generation from thermoelectric generators. Each component used in these scenarios is modeled separately in several modeling tools, and the component-level models are then linked to run annual energy simulations for a selected climate.
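The "overall data center efficiency" the abstract refers to is commonly quantified with Power Usage Effectiveness (PUE), the ratio of total facility power to IT power. The sketch below illustrates that standard metric; the component loads are hypothetical examples, not values from the thesis.

```python
# Illustrative sketch: Power Usage Effectiveness (PUE), the standard
# metric for comparing data center cooling scenarios.
# PUE = total facility power / IT equipment power (always >= 1.0).
# The loads below are hypothetical, not results from this work.

def pue(it_load_kw, cooling_kw, other_overhead_kw=0.0):
    total = it_load_kw + cooling_kw + other_overhead_kw
    return total / it_load_kw

# Hypothetical 400 kW IT load under two cooling options:
print(round(pue(400.0, 160.0, 20.0), 3))  # DX-heavy cooling -> 1.45
print(round(pue(400.0, 60.0, 20.0), 3))   # economizer-assisted -> 1.2
```

Linking component-level models, as the thesis proposes, lets a metric like this be evaluated hour by hour over an annual weather file rather than from a single design point.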
Mon, 01 Aug 2016 00:00:00 GMT
http://hdl.handle.net/1853/55690
Near surface evaluation of structural, electronic and chemical properties of templated Pt monolayers
Vitale, Adam J.
Platinum group metals are the choice catalysts for a wide variety of catalytic reactions, including oxygen reduction. The focus of this study is to explore the dimensional aspect of both electronic and structure-driven surface properties of Pt monolayers grown via templating on Au. Surface limited redox replacement is used to provide precise layer-by-layer growth of Pt to synthesize well-controlled ‘core-shell’ catalyst architectures.
The interaction between core and shell manifests itself through both a structural contribution of epitaxial strain and d-electron orbital mixing. The cumulative effect of the secondary support on the surface Pt and its interaction with adsorbate species is referred to as a ligand effect. The main goal of the research is to investigate how these ligand effects contribute to the structural and electronic properties of Pt monolayer catalysts.
One focus of this study is to explore the incorporation of single-layer graphene into the core-shell catalyst architecture. Fully wetted 4-5 monolayer Pt films can be grown on graphene, maximizing the exposed catalyst surface with high Pt activity and stability. The research also investigates the use of single-layer graphene as an intimate capping sheet to prevent dissolution of electrode metal surfaces into the electrolyte without adversely affecting activity.
X-ray photoelectron spectroscopy and extended x-ray absorption fine structure techniques are used to examine surface composition and local atom-atom correlations (bond distance, strain, coordination) as well as core-shell charge transfer effects. Cyclic voltammetry and the oxygen reduction reaction are used as probes to examine the electrochemically active area of Pt monolayers and catalyst activity, respectively.
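One routine calculation behind the cyclic voltammetry probe mentioned above is converting the hydrogen-adsorption charge into an electrochemically active surface area (ECSA). The sketch below uses the widely cited literature convention of 210 µC/cm² for a hydrogen monolayer on polycrystalline Pt; the input charge is a hypothetical example, not data from this work.

```python
# Illustrative sketch: estimating Pt electrochemically active surface
# area (ECSA) from the hydrogen underpotential-deposition charge in a
# cyclic voltammogram. 210 uC/cm^2 is the standard literature reference
# charge for a full H monolayer on polycrystalline Pt; the input below
# is hypothetical.

Q_H_REF = 210e-6  # C per cm^2 of Pt for one hydrogen monolayer

def ecsa_cm2(q_h_coulombs):
    """ECSA in cm^2 from the integrated H-adsorption charge
    (double-layer contribution assumed already subtracted)."""
    return q_h_coulombs / Q_H_REF

print(round(ecsa_cm2(42e-6), 2))  # 42 uC of H-UPD charge -> 0.2 cm^2
```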
Fri, 29 Jul 2016 00:00:00 GMT
http://hdl.handle.net/1853/55689
Markov chains for weighted lattice structures
Bhakta, Prateek Jayeshbhai
Markov chains are an essential tool for sampling from large sets, and are ubiquitous across many scientific fields, including statistical physics, industrial engineering, and computer science. To be a useful tool for sampling, the number of steps needed for a Markov chain to converge approximately to the target probability distribution, also known as the mixing time, should be a small polynomial in n, the size of a state. We study problems that arise from the design and analysis of Markov chains that sample from configurations of lattice structures. Specifically, we will be interested in settings where each state is sampled with a non-uniform weight that depends on the structure of the configuration. These weighted lattice models arise naturally in many contexts, and are typically more difficult to analyze than their unweighted counterparts. Our focus will be on exploiting these weightings both to develop new efficient algorithms for sampling and to prove new mixing time bounds for existing Markov chains. First, we will present an efficient algorithm for sampling fixed rank elements from a graded poset, which includes sampling integer partitions of n as a special case. Then, we study the problem of sampling weighted perfect matchings on lattices using a natural Markov chain based on "rotations", and provide evidence towards understanding why this Markov chain has empirically been observed to converge slowly. Finally, we present and analyze a generalized version of the Schelling Segregation model, first proposed in 1971 by economist Thomas Schelling to explain possible causes of racial segregation in cities. We identify conditions under which segregation, or clustering, is likely or unlikely to occur. Our analysis techniques for all three problems are drawn from the interface of theoretical computer science with discrete mathematics and statistical physics.
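The general recipe behind sampling configurations with non-uniform weights can be sketched with a minimal Metropolis chain (illustrative only, not one of the chains analyzed in the thesis): propose a local move and accept it with probability min(1, π(new)/π(old)), so the chain converges to the weighted distribution. Here the state space is a path {0, …, n} with weight π(x) ∝ λ^x.

```python
# Illustrative sketch (not the thesis algorithms): a Metropolis chain
# sampling states x in {0,...,n} with non-uniform weight pi(x) ~ lam**x.
# Proposals are +/-1 steps; moves are accepted with probability
# min(1, pi(y)/pi(x)) = min(1, lam**(y - x)).
import random

def metropolis_sample(n, lam, steps, seed=0):
    rng = random.Random(seed)
    x = 0
    counts = [0] * (n + 1)
    for _ in range(steps):
        y = x + rng.choice((-1, 1))
        if 0 <= y <= n:
            if lam ** (y - x) >= rng.random():  # Metropolis acceptance
                x = y
        counts[x] += 1
    return counts

# With lam = 2 the chain should spend roughly twice as long in state
# x+1 as in state x once it has mixed.
counts = metropolis_sample(4, 2.0, 200_000)
print([round(c / counts[0], 1) for c in counts])
```

How quickly such a chain approaches its weighted stationary distribution — the mixing time — is exactly the quantity the thesis bounds for its lattice models.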