Interval Deep Learning for Uncertainty Quantification in Engineering Problems
Deep neural networks are becoming more common in safety-critical real-world applications where reliability of the predictions is paramount. Despite their exceptional prediction capabilities, current deep neural networks have no implicit mechanism to model and quantify significant input data uncertainty. In many cases, this uncertainty is epistemic and can arise from multiple sources such as sensor imprecision, imperfect information, missing data, and model uncertainty inherited from other input models. Recent approaches to uncertainty modeling in deep learning have focused on quantifying model uncertainty in a post hoc fashion with Bayesian approximations (e.g., Monte Carlo Dropout) or classic frequentist approaches (e.g., confidence intervals). However, approaches that quantify uncertainty in the end-to-end training of deep neural networks with epistemic input data uncertainty have not been studied at length. Moreover, traditional frequentist and Bayesian approaches are often not appropriate for modeling epistemic uncertainty. We argue that in many cases we have neither enough knowledge about the epistemic uncertainties nor a way to interact with the world to obtain more data, so traditional frequentist or Bayesian approaches cannot be applied. In this dissertation, we introduce what we term "interval deep learning" algorithms for engineering problems under uncertainty. Engineering problems are conceptualized as inherently interdisciplinary and as consisting of a hierarchy of models, in which data analysis models are a fundamental piece of the solution. In this work, we present interval deep learning algorithms as the data analysis component for engineering problems under uncertainty. This research seeks to demonstrate that, under conditions of epistemic uncertainty, deep interval neural networks provide reliable results for uncertainty quantification in engineering problems.
In particular, we developed novel interval deep learning algorithms, which we call deep interval neural networks (DINNs), capable of quantifying input and model uncertainty through interval analysis and of producing guaranteed uncertainty prediction bounds. These guaranteed bounds are achieved through rigorous interval analysis. We then examine the challenges and advantages of combining interval analysis with gradient-based optimization for deep neural networks. Particular attention is paid to the mathematical, computational, and algorithmic aspects of interval analysis that make interval deep learning challenging, yet an important tool for uncertainty quantification. We illustrate the effectiveness of the developed interval deep learning algorithms on real-world engineering problems under uncertainty. In a monitoring application, the DINN is used to predict air pollution concentrations under sensor drift. The developed methods are then applied to a forward problem in computational mechanics, where the DINN quantifies the spatiotemporal variation of an uncertain input variable for a finite element model, in a method that we call a "supervised interval field." Finally, applications to structural health monitoring (SHM) of large infrastructure systems are presented in the context of this work. We develop a unified damage identification framework for SHM systems under uncertainty using the DINN and test it on a real SHM dataset.
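To make the idea of guaranteed bounds concrete, the following is a minimal sketch of interval propagation through a single dense layer with a ReLU activation. It is not the dissertation's DINN implementation; the function names, weights, and intervals are purely illustrative. It uses the standard interval-arithmetic rule for an affine map: positive weights carry lower bounds to lower bounds, negative weights swap them, so the resulting output interval is guaranteed to enclose every pointwise output for any input drawn from the input interval.

```python
import numpy as np

def interval_dense(W, b, x_lo, x_hi):
    """Propagate the input interval [x_lo, x_hi] through y = W x + b.

    Splitting W into its positive and negative parts gives guaranteed
    (rigorous) interval bounds on the layer output.
    """
    W_pos = np.maximum(W, 0.0)   # positive entries of W
    W_neg = np.minimum(W, 0.0)   # negative entries of W
    y_lo = W_pos @ x_lo + W_neg @ x_hi + b
    y_hi = W_pos @ x_hi + W_neg @ x_lo + b
    return y_lo, y_hi

def interval_relu(y_lo, y_hi):
    # ReLU is monotone, so applying it to each bound preserves enclosure.
    return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)

# Example: a 2-input, 2-output layer with an uncertain (interval) input,
# e.g., a sensor reading known only to within a tolerance band.
W = np.array([[1.0, -2.0],
              [0.5,  0.3]])
b = np.array([0.1, -0.2])
x_lo = np.array([0.0, 1.0])   # lower bounds of the input interval
x_hi = np.array([0.5, 1.5])   # upper bounds of the input interval

lo, hi = interval_relu(*interval_dense(W, b, x_lo, x_hi))
# Any point input x with x_lo <= x <= x_hi yields an output inside [lo, hi].
```

Stacking such layers propagates the input interval through an entire network; the challenge noted above is that the resulting bounds must then interact with gradient-based training, which is where the interval-specific algorithmic care comes in.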