Iowa State University
SIAM CENTRAL 2019

MiniSymposium-07: Efficient Algorithms for Simulation and Quantification of Data-Driven and Uncertain Models

Abstract: Numerous physical processes depend on both continuous and discrete models. Simulating and hence analyzing such a process crucially depends on data-driven inputs that are measured through physical systems and/or modeled in functional form to represent known information. For example, quantities of interest (QoI) such as annualized wind energy production and the identification of unknown shapes are based on discrete data from wind turbines and radars, respectively. However, QoI such as the average pressure or the intensity of light are modeled through partial differential equations (PDE), for which the input information is assumed to be in functional form. Simulation of many inverse problems depends on both continuous PDE models and discrete measured data. In general, such input data and functions are uncertain because of noisy measurements or a lack of knowledge of all parameters involved in the modeled input functionals. The focus of this minisymposium is to discuss recent advances in the development of algorithms to efficiently model, simulate, and quantify such data-driven and uncertain forward and inverse problems.

Organizers:

Saturday, October 19th, 2019 at 10:20 - 11:40 (274 Carver Hall)
10:20 - 10:40 Ramakrishna Tipireddy,
Pacific Northwest National Laboratory
Monte Carlo Methods for Basis Adaptation and Domain Decomposition Methods for High Dimensional SPDEs

In our prior work we developed stochastic basis adaptation and domain decomposition methods for solving high-dimensional stochastic partial differential equations (SPDEs) using polynomial chaos expansion (PCE) based methods with Hermite polynomials. Here, we build on this prior work and propose stochastic basis adaptation and domain decomposition methods using purely Monte Carlo (MC) methods and their variants, such as multilevel Monte Carlo (MLMC), for solving SPDEs with a large number of input random parameters. Although PCE-based methods with basis adaptation work well for most SPDEs, they become intractable if the reduced stochastic dimension is still large. In such cases, Monte Carlo methods offer an alternative to PCE methods: they are flexible in modeling arbitrary uncertainties and easy to implement with legacy software. Recent developments such as MLMC methods reduce the computational cost without compromising accuracy by solving the SPDE on a hierarchy of spatial grids ranging from very coarse to very fine meshes. In our approach we first decompose the spatial domain into a set of non-overlapping subdomains and, in each subdomain, solve the SPDE in a local basis adapted to that subdomain using Monte Carlo methods. The local solutions are computed in each subdomain independently while maintaining continuity of the solution and the flux across subdomain interfaces. We employ a Neumann-Neumann algorithm to compute the solution in the interior and at the interfaces of the subdomains. We present numerical experiments in support of the proposed method.
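
As a rough illustration of the multilevel Monte Carlo ingredient described above, the sketch below shows the standard MLMC telescoping estimator. The function name solve_qoi is a hypothetical placeholder for a level-dependent (e.g., per-subdomain) SPDE solve and is not part of the speakers' software.

```python
import numpy as np

def mlmc_estimate(solve_qoi, levels, samples_per_level, rng=None):
    """Minimal multilevel Monte Carlo estimator (illustrative sketch).

    solve_qoi(level, xi) is assumed to return the quantity of interest
    computed on the spatial grid at `level` for one realization `xi` of
    the random inputs.  The telescoping sum
        E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]
    is approximated level by level with independent samples.
    """
    if rng is None:
        rng = np.random.default_rng()
    estimate = 0.0
    for level, n_samples in zip(levels, samples_per_level):
        corrections = []
        for _ in range(n_samples):
            xi = rng.standard_normal()              # placeholder random input
            q_fine = solve_qoi(level, xi)
            q_coarse = solve_qoi(level - 1, xi) if level > 0 else 0.0
            corrections.append(q_fine - q_coarse)   # coupled level correction
        estimate += np.mean(corrections)
    return estimate
```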

10:40 - 11:00 Brandon Reyes,
Colorado School of Mines
Efficient Algorithms for a Class of Space-Time Stochastic Models

Quantifying uncertainties in a quantity of interest (QoI), arising from the space-time evolution of a non-deterministic physical process, is important for several applications. Practical realizations of these models may become computationally prohibitive using standard low-order methods, such as Monte Carlo (MC). In this talk we consider how high-order quasi Monte Carlo (QMC) stochastic approximations and adaptive multilevel QMC algorithms can address these computational challenges. We demonstrate these techniques by computing statistical moments of a QoI induced by a stochastic order-parameter phase-separation field. The field is modeled by a nonlinear two- and three-space-dimensional Allen-Cahn PDE with a random gradient energy and an uncertain initial state induced by a random field.
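
To make the quasi Monte Carlo ingredient concrete, here is a minimal sketch of randomized QMC moment estimation using SciPy's scrambled Sobol sequences. The callable qoi is a hypothetical stand-in for a solve of the stochastic Allen-Cahn model; this is not the speakers' adaptive multilevel algorithm.

```python
import numpy as np
from scipy.stats import norm, qmc

def qmc_moments(qoi, dim, n_points=2**10, seed=0):
    """Estimate mean and variance of a QoI by randomized quasi Monte Carlo.

    qoi(xi) is assumed to map a vector of standard-normal parameters
    (e.g., truncated random-field coefficients) to a scalar quantity of
    interest obtained from a PDE solve.
    """
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    u = sampler.random(n_points)       # low-discrepancy points in [0, 1)^dim
    xi = norm.ppf(u)                   # map to standard-normal inputs
    values = np.array([qoi(x) for x in xi])
    return values.mean(), values.var(ddof=1)
```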

11:00 - 11:20 Mahadevan Ganesh,
Colorado School of Mines
A Data-Driven Bayesian and Decomposed Offline Algorithm for Uncertain Dielectric Media

We consider the problem of reconstructing a set of uncertain parameters that describe a three-dimensional (3D) dielectric medium. The reconstruction process is driven by radar cross section (RCS) data measured at a few observation directions. The RCS data is modeled through a 3D Maxwell dielectric system and its spatially high-order discrete computational counterpart. We develop a surrogate forward computational model for the stochastic Maxwell system, using a decomposed fast generalized polynomial chaos (gPC) approach. Based on the surrogate model, we develop an efficient, RCS-data-driven, Bayesian model to reconstruct the uncertain medium. Offline construction of the surrogate model facilitates fast online evaluation of the posterior distribution of the dielectric medium parameters. Parallel computational experiments demonstrate the efficiency of our deterministic, forward stochastic, and inverse dielectric computer models. (This work is joint with S.C. Hawkins and D. Volkov.)
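
The offline/online split described above can be illustrated, in a heavily simplified one-parameter setting, by fitting a Hermite polynomial chaos surrogate offline and evaluating a Gaussian log-posterior online. The names forward_model, observed, and noise_std are hypothetical, and this sketch is not the decomposed fast gPC construction of the talk.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def build_gpc_surrogate(forward_model, degree=5, n_train=50, seed=0):
    """Offline: fit a 1D Hermite polynomial chaos surrogate to a forward model
    that maps a standard-normal parameter to (possibly vector-valued) RCS data."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_train)
    data = np.array([forward_model(x) for x in xi])   # training outputs
    basis = hermevander(xi, degree)                   # Hermite design matrix
    coeffs, *_ = np.linalg.lstsq(basis, data, rcond=None)
    return lambda x: hermevander(np.atleast_1d(x), degree) @ coeffs

def log_posterior(xi, surrogate, observed, noise_std):
    """Online: unnormalized Gaussian log-posterior with a standard-normal prior
    on the scalar parameter, using only cheap surrogate evaluations."""
    residual = observed - surrogate(xi).ravel()
    return -0.5 * np.sum((residual / noise_std) ** 2) - 0.5 * xi ** 2
```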

11:20 - 11:40 Darko Volkov,
Worcester Polytechnic Institute
A Well-Posed Surface Currents and Charges System for Electromagnetism in Dielectric Media

The free space Maxwell dielectric problem can be reduced to a system of surface integral equations (SIE). A numerical formulation for the Maxwell dielectric problem using an SIE system presents two key advantages: first, the radiation condition at infinity is exactly satisfied, and second, there is no need to artificially define a truncated domain. Consequently, these SIE systems have generated much interest in physics, electrical engineering, and mathematics, and many SIE formulations have been proposed over time. In this talk we introduce a new SIE formulation which is in the desirable operator form identity plus compact, is well-posed, and remains well-conditioned as the frequency tends to zero. The unknowns in the formulation are three-dimensional vector fields on the boundary of the dielectric body. The SIE discussed in this talk is derived from a formulation developed in earlier work, which used linear constraints to obtain a uniquely solvable system at all frequencies. The new SIE introduced and analyzed in this talk combines the integral equations from that initial formulation with new constraints. We show that the new system is in the operator form identity plus compact in a particular functional space, and we prove well-posedness at all frequencies and low-frequency stability of the new SIE.
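
For readers unfamiliar with why the identity-plus-compact form is desirable, the schematic below recalls the generic Fredholm argument. It shows only the abstract operator form, not the talk's specific surface integral operators or function spaces.

```latex
% Schematic only: generic identity-plus-compact form, not the specific SIE.
\[
  (I + K)\,u = f , \qquad K \ \text{compact on a Banach space } X .
\]
% Fredholm alternative: injectivity already gives bounded invertibility,
% hence well-posedness with a stability estimate.
\[
  \ker(I + K) = \{0\}
  \;\Longrightarrow\;
  (I + K)^{-1} \in \mathcal{L}(X),
  \qquad
  \|u\|_{X} \le \|(I + K)^{-1}\| \, \|f\|_{X} .
\]
```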


Saturday, October 19th, 2019 at 14:40 - 16:00 (274 Carver Hall)
14:40 - 15:00 David Kozak,
Colorado School of Mines
Gradient Free Minimization in the Presence of Noise

The depth of research into gradient-based optimization belies the fact that it is common to model real-world phenomena using complex models for which no gradient is readily available. Zeroth-order optimization, the best-known variant being finite differences, approximates the gradient by querying function values and uses the approximate gradient to descend toward an optimum. This process can be time-consuming in high dimensions, particularly when function evaluations are expensive (in terms of time, memory, or both), as in many physical experiments. Furthermore, finite-difference approximations are heavily susceptible to noisy function evaluations, which may lead to a poor approximation of the gradient. In this work we provide an alternative to finite differences and coordinate descent wherein, at each iteration, we project the gradient onto a low-dimensional random subspace and descend along this subspace. We show analytically that this method is more robust than randomized coordinate descent, and empirically that it is preferable to finite-difference descent. We provide convergence results for strongly convex functions when the function evaluations are noise-free, and also when they are noisy. Empirical results on synthetic data are provided to enhance understanding. We also show results on a high-dimensional PDE-constrained shape optimization problem.
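
A minimal sketch of the random-subspace idea described above follows, assuming noise-free evaluations, forward differences, and a fixed step size. The step-size and subspace-dimension choices here are illustrative and do not reproduce the speakers' analysis or convergence theory.

```python
import numpy as np

def random_subspace_descent(f, x0, n_iters=200, k=5, h=1e-4, step=1e-2, seed=0):
    """Zeroth-order descent along a random low-dimensional subspace.

    At each iteration, draw a random d x k matrix with orthonormal columns,
    approximate the directional derivatives of f along those columns by
    forward differences, and take a gradient step restricted to the subspace.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = x.size
    fx = f(x)
    for _ in range(n_iters):
        P, _ = np.linalg.qr(rng.standard_normal((d, k)))   # random subspace basis
        g = np.array([(f(x + h * P[:, j]) - fx) / h for j in range(k)])
        x = x - step * (P @ g)                             # step within the subspace
        fx = f(x)
    return x, fx
```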

15:00 - 15:20 Jacob Rezac,
National Institute of Standards and Technology
A Sparsity-Constrained Qualitative Method For Parameter Estimation in Inverse Scattering and Direction-of-Arrival Problems

We introduce a method for estimating unknown physical characteristics of a region from measured data. The new technique is qualitative in nature, meaning that it does not require the solution of a forward problem in order to solve the inverse problem. We are interested in two separate problems: estimating the location and shape of a scattering obstacle from waves which have scattered from it, and estimating the direction-of-arrival of a wave impinging on a receiving array. Instead of simulating the measurement process to estimate these parameters, we solve a sparsity-constrained minimization problem at locations inside the region of interest, whose solution is non-zero only when the location corresponds to one of the unknown parameters of interest. We demonstrate the new technique on both measured and simulated data and show that it can outperform some classical techniques, particularly in the case of limited and noisy data.
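
As a generic illustration of a sparsity-constrained minimization of the kind mentioned above, the sketch below solves an l1-regularized least-squares problem by iterative soft-thresholding (ISTA). Reading A as simulated responses at candidate locations and b as measured data is an assumption for illustration, not the speaker's specific qualitative indicator.

```python
import numpy as np

def ista(A, b, lam, n_iters=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding.

    In a qualitative-imaging reading, A collects responses associated with
    candidate locations and b the measured data; candidate locations whose
    recovered coefficients are (near) zero are declared empty.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```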

15:20 - 15:40 Ambuj Pandey,
California Institute of Technology
Fast, Higher-Order Direct/Iterative Hybrid Solver for Scattering by Inhomogeneous Media – with Application to High-Frequency and Discontinuous Refractivity Problems

A fast high-order method for the solution of two-dimensional problems of scattering by penetrable inhomogeneous media will be presented, with application to high-frequency configurations containing a (possibly) discontinuous refractivity. The method relies on a combination of a differential volumetric formulation and a boundary integral formulation. In the proposed method, the entire computational domain is partitioned into a large number of volumetric spectral approximation patches, which are then grouped into sub-domains consisting of adequately chosen groups of patches and, finally, coupled through an overarching integral equation formulation on the overall domain boundary. The resulting algorithm can be quite effective: after a modestly demanding precomputation stage (whose results for a given frequency can be reused for arbitrarily chosen incidence angles), the proposed algorithm can accurately evaluate scattering by very large objects with very high contrasts in the refractive index (including possible refractive-index discontinuities), in single-core computing times of a few seconds.
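
The offline/online structure mentioned above (precompute once per frequency, reuse for arbitrary incidence angles) can be caricatured with a dense LU factorization, as in the sketch below. This toy deliberately ignores the patched spectral and integral-equation machinery of the actual method; the names system_matrix and incident_rhs are placeholders.

```python
from scipy.linalg import lu_factor, lu_solve

def precompute(system_matrix):
    """Offline: factorize the discretized scattering operator once per frequency."""
    return lu_factor(system_matrix)

def scatter(factors, incident_rhs):
    """Online: reuse the stored factorization for each new incidence-angle
    right-hand side at negligible additional cost."""
    return lu_solve(factors, incident_rhs)
```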

15:40 - 16:00 John Luke Lusty,
Colorado School of Mines
An Efficient Data-Driven Wind Energy Computational Model

Outlier detection algorithms are used by operational analysts in the wind energy field to sanitize their datasets before fitting one of the various industry-standard models for wind turbine power curves. These algorithms, typically based on statistical theory, possess varied and intricate parameter spaces, which may be probed using idealized Monte Carlo simulations to understand the uncertainty introduced by their use. However, when a sequence of filters is applied to the dataset rather than just one, the presence of what we refer to as ordering uncertainty makes for a challenging uncertainty quantification problem, one highly applicable to industries making ready use of statistics-based outlier detection algorithms. In this talk, we consider a computational model for such analyses using a high-performance, generic software library capable of interfacing with any outlier detection algorithm and any model fitting procedure, and of invoking the proper calling convention for the fitted model to perform some evaluation of it.
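
A minimal sketch of the ordering-uncertainty study described above follows. The interface names filters, fit_model, and evaluate are hypothetical placeholders and do not correspond to the library discussed in the talk.

```python
import itertools
import numpy as np

def ordering_uncertainty(data, filters, fit_model, evaluate, max_orderings=None):
    """Quantify the spread of a model evaluation over filter orderings.

    filters:   list of callables, each mapping a dataset to a filtered dataset.
    fit_model: callable fitting a power-curve model to a filtered dataset.
    evaluate:  callable mapping a fitted model to a scalar quantity of interest
               (e.g., a predicted energy production figure).
    """
    orderings = list(itertools.permutations(range(len(filters))))
    if max_orderings is not None:
        orderings = orderings[:max_orderings]
    results = []
    for order in orderings:
        filtered = data
        for i in order:
            filtered = filters[i](filtered)       # apply filters in this order
        results.append(evaluate(fit_model(filtered)))
    results = np.array(results)
    return results.mean(), results.std(ddof=1)    # spread due to ordering
```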

Coordinated by
Iowa State University