Stochastic Nonlinear Data-Reduction Methods with Detection & Prediction of Critical Rare Events
Dynamic data-driven application systems (DDDAS) can be simulated reliably on petaflop computers
if the current open issues of uncertainty modeling and propagation, as well as the handling
of petabyte outputs, are satisfactorily resolved. To this end, this proposal will address three specific
current limitations in modeling stochastic systems: (1) the inputs are mostly based on ad hoc models,
(2) the number of independent parameters is very high, and (3) rare and critical events are difficult
to capture with existing algorithms. To overcome these problems, we propose research on the following
three topics: (1) development of certified low-dimensional models for effective reduction of
dimensionality, (2) development of a scalable sensitivity-based hierarchical uncertainty
quantification approach, and (3) development of algorithms for real-time anomaly detection and rare
events prediction. Our aim is to provide a comprehensive new mathematical and computational framework
for data analysis of petaflop stochastic simulations that can be used across many disciplines, although
the application focus is on atmospheric sciences. This work focuses primarily on data and will complement
the ongoing work on the fast solution of stochastic partial differential equations (PDEs) by the
Pacific Northwest National Laboratory-Brown University team.
The main idea in this project is to formulate new petabyte data-reduction techniques based on
fundamental extensions of the proper orthogonal decomposition (POD) to include nonlinearity and
stochasticity. POD has been used extensively in atmospheric and other geophysical sciences as
well as in systems control. However, low-order models based on linear POD dynamics can produce
erroneous results, especially for long-term predictions. We propose a new formulation of stochastic
POD, posed in appropriate function spaces, to handle variability and uncertainty in the stochastic inputs.
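As a baseline for these extensions, classical POD can be sketched as an SVD of a mean-subtracted snapshot matrix, keeping the smallest set of modes that captures a prescribed energy fraction. The function name and the synthetic low-rank data below are illustrative, not part of the proposal:

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Classical POD: dominant modes of a snapshot matrix.

    snapshots : (n_dof, n_snap) array, columns are solution states.
    energy    : fraction of total variance the retained modes must capture.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest rank reaching the target
    return mean, U[:, :r], s[:r]

# Synthetic check: 1000-dof states dominated by 3 coherent modes plus weak noise,
# so only a handful of POD modes should be retained
rng = np.random.default_rng(0)
modes = rng.standard_normal((1000, 3))
coeffs = rng.standard_normal((3, 200)) * np.array([[10.0], [5.0], [2.0]])
X = modes @ coeffs + 0.01 * rng.standard_normal((1000, 200))
mean, basis, sv = pod_basis(X)
```

Projecting new states onto `basis` gives the reduced coordinates; the stochastic and nonlinear formulations above replace this fixed linear subspace.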
Further, we extend POD to make it more efficient and accurate by employing nonlinear kernel
functions. These kernels may be ad hoc or can be constructed based on distance- or topology-preserving
algorithms that have been proposed recently in neuroscience. To represent stochasticity, we will
employ polynomial chaos expansions in conjunction with the analysis of variance (ANOVA) method.
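The kernel route to nonlinear POD can be sketched as kernel PCA on the snapshots; the Gaussian kernel, the function name, and the synthetic circle data below are illustrative assumptions, not the proposal's construction:

```python
import numpy as np

def kernel_pod_coords(snapshots, n_modes=2, sigma=1.0):
    """Nonlinear POD sketch via kernel PCA on snapshot columns.

    A Gaussian kernel is assumed for illustration; distance- or
    topology-preserving kernels could be substituted.
    """
    X = snapshots.T                                  # rows = snapshots
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    K = np.exp(-d2 / (2.0 * sigma**2))
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n              # centering in feature space
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)                        # ascending eigenvalues
    order = np.argsort(w)[::-1][:n_modes]
    return V[:, order] * np.sqrt(np.abs(w[order]))   # nonlinear snapshot coordinates

# Snapshots lying on a circle inside a 10-dimensional ambient space:
# two kernel modes suffice to recover the intrinsic structure
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
plane = np.linalg.qr(rng.standard_normal((10, 2)))[0]  # random 2-D plane
snaps = plane @ np.vstack([np.cos(theta), np.sin(theta)])
coords = kernel_pod_coords(snaps, n_modes=2)
```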
In particular, we propose a two-level hierarchical framework where coarse-level stochastic simulations
and corresponding sensitivity analysis first identify the most-sensitive parameters; subsequently,
different techniques are used to treat the most- and least-sensitive sets of parameters.
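The coarse level of this hierarchy can be illustrated with a pick-freeze Monte Carlo estimate of first-order Sobol (ANOVA) sensitivity indices: parameters the ranking flags would then receive the full high-order polynomial chaos treatment, while the rest are handled more cheaply. The estimator and the toy surrogate below are a sketch under these assumptions, not the proposal's algorithm:

```python
import numpy as np

def first_order_sobol(f, d, n=20000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol (ANOVA)
    indices for f: [0,1]^d -> R with independent uniform inputs."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # resample only input x_i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy surrogate: input 0 dominates the output variance, inputs 1-4 are
# nearly inert, so the coarse level would flag {0} as the sensitive set
f = lambda x: 10.0 * x[:, 0] + 0.1 * x[:, 1:].sum(axis=1)
S = first_order_sobol(f, d=5)
sensitive = np.argsort(S)[::-1]             # parameters ranked by sensitivity
```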
To calibrate the low-dimensional models efficiently for real-time detection of critical rare events,
we will develop a new Bayesian inference approach using appropriate low-dimensional metrics
extracted from noisy experimental data. The Bayesian solution is accelerated by the proposed
sensitivity-based hierarchical uncertainty quantification (UQ) method for the forward model.
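A minimal sketch of such a Bayesian calibration, using a random-walk Metropolis sampler with a Gaussian likelihood on a one-parameter reduced model; the sampler choice, the synthetic data, and all settings are illustrative assumptions:

```python
import numpy as np

def calibrate_mh(model, data, sigma_noise, prior_bounds, n_iter=5000,
                 step=0.1, rng=None):
    """Random-walk Metropolis sketch: calibrate one reduced-model
    parameter theta against noisy observations (Gaussian likelihood,
    uniform prior on prior_bounds)."""
    rng = rng or np.random.default_rng(0)
    lo, hi = prior_bounds
    def log_post(theta):
        if not (lo <= theta <= hi):
            return -np.inf                     # outside the uniform prior
        r = data - model(theta)
        return -0.5 * np.sum(r**2) / sigma_noise**2
    theta = 0.5 * (lo + hi)
    lp = log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

# Synthetic check: recover theta_true = 2.0 from noisy linear observations
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
model = lambda th: th * t
data = model(2.0) + 0.05 * rng.standard_normal(t.size)
chain = calibrate_mh(model, data, 0.05, (0.0, 5.0))
post_mean = chain[1000:].mean()                  # discard burn-in
```

Each likelihood evaluation requires a forward solve, which is where the sensitivity-based hierarchical UQ surrogate provides the acceleration.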
The main difficulty in detecting critical rare events in real time with the calibrated
low-dimensional model is the presence of two widely separated time scales: the short scale of a
transition event and the longer intrinsic time scale of the dynamical system. We propose to develop
an adaptive minimum action method (MAM) and a geometric MAM to detect critical rare events
efficiently on the calibrated low-dimensional models.
The discovered transition mechanism can then guide experiments or simulations for future predictions.
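The action-minimization idea behind MAM can be sketched as gradient descent on a discretized Freidlin-Wentzell action for a one-dimensional double well; the adaptive and geometric variants proposed here would replace this naive fixed-grid descent:

```python
import numpy as np

def mam_path(b, db, x0, x1, T=10.0, n=50, n_iter=10000, lr=0.02):
    """Minimum action method sketch (1-D): gradient descent on the
    discretized Freidlin-Wentzell action
        S[phi] = 1/2 * integral_0^T |phi' - b(phi)|^2 dt
    over paths from x0 to x1 with pinned endpoints (midpoint rule)."""
    dt = T / n
    phi = np.linspace(x0, x1, n + 1)          # straight-line initial path
    for _ in range(n_iter):
        mid = 0.5 * (phi[:-1] + phi[1:])
        d = np.diff(phi) / dt - b(mid)        # per-segment action residual
        grad = np.zeros_like(phi)
        # dS/dphi_j collects contributions from the two adjacent segments
        grad[1:] += dt * d * (1.0 / dt - 0.5 * db(mid))
        grad[:-1] += dt * d * (-1.0 / dt - 0.5 * db(mid))
        grad[0] = grad[-1] = 0.0              # endpoints stay fixed
        phi -= lr * grad
    mid = 0.5 * (phi[:-1] + phi[1:])
    S = 0.5 * dt * np.sum((np.diff(phi) / dt - b(mid))**2)
    return phi, S

# Double well drift b(x) = x - x^3 (b = -V' with V = -x^2/2 + x^4/4):
# minimal transition path between the stable states -1 and +1
b = lambda x: x - x**3
db = lambda x: 1.0 - 3.0 * x**2
path, S = mam_path(b, db, -1.0, 1.0)
```

For this gradient system the minimal action between the wells approaches twice the barrier height, 2*DeltaV = 1/2, as T grows, which the sketch reproduces approximately; the minimizing path itself is the transition mechanism that would guide subsequent experiments or simulations.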
The new framework will be demonstrated for two cases: (1) predicting global warming scenarios using a
global climate model and available experimental data, and (2) studying the transition mechanism and
predicting future transitions in thermohaline circulation. The proposed framework will be applicable
to other areas of environmental sciences, combustion, plasma accelerators, and fusion physics.