Multifidelity Monte Carlo Estimation of Variance and Sensitivity Indices


SIAM/ASA J. Uncertainty Quantification, Vol. 6, No. 2, pp. 683–706. © 2018 Elizabeth Qian, Benjamin Peherstorfer, Daniel O'Malley, Velimir Vesselinov, and Karen Willcox.

Multifidelity Monte Carlo Estimation of Variance and Sensitivity Indices

E. Qian, B. Peherstorfer, D. O'Malley, V. V. Vesselinov, and K. Willcox

Abstract. Variance-based sensitivity analysis provides a quantitative measure of how uncertainty in a model input contributes to uncertainty in the model output. Such sensitivity analyses arise in a wide variety of applications and are typically computed using Monte Carlo estimation, but the many samples required for Monte Carlo to be sufficiently accurate can make these analyses intractable when the model is expensive. This work presents a multifidelity approach for estimating sensitivity indices that leverages cheaper low-fidelity models to reduce the cost of sensitivity analysis while retaining accuracy guarantees via recourse to the original, expensive model. This paper develops new multifidelity estimators for variance and for the Sobol' main and total effect sensitivity indices. We discuss strategies for dividing limited computational resources among models and specify a recommended strategy. Results are presented for the Ishigami function and a convection-diffusion-reaction model that demonstrate up to 10x speedups for fixed convergence levels. For the problems tested, the multifidelity approach allows inputs to be definitively ranked in importance when Monte Carlo alone fails to do so.

Key words. multifidelity, Monte Carlo, global sensitivity analysis

AMS subject classifications. 62P30, 65C05

DOI. 10.1137/17M1151006

1. Introduction. Sensitivity analysis plays a central role in modeling and simulation to support decision making, providing a rigorous basis on which to characterize how input uncertainty contributes to output uncertainty.
This allows identification of the most significant sources of input uncertainty as well as identification of uncertain inputs whose variation contributes minimally to output variability. Such sensitivity analyses arise in a wide variety of applications; for example, for models with uncertain inputs of high dimension, it is often necessary to reduce the input dimension in order to make tractable the tasks of policy optimization, robust design optimization, and reduced-order modeling. Variance-based global sensitivity analysis quantifies the relative effect of input uncertainties on the output uncertainty via the calculation of sensitivity indices, enabling the prioritization of inputs with larger influence on the output (e.g., by fixing relatively unimportant inputs). Estimates of these sensitivity indices are typically obtained via Monte Carlo integration, which often requires many model evaluations to obtain accurate sensitivity estimates. This work presents a multifidelity formulation, leveraging approximate models to accelerate convergence, for Monte Carlo estimation of sensitivity indices. In particular, we develop and analyze new multifidelity estimators for variance and for the Sobol' main and total effect sensitivities.

Footnote (p. 683): Received by the editors October 9, 2017; accepted for publication (in revised form) February 20, 2018; published electronically May 15, 2018. Funding: The work of the first author was supported by the National Science Foundation Graduate Research Fellowship and the Fannie and John Hertz Foundation. This research was supported by the Air Force Center of Excellence on Multi-Fidelity Modeling of Rocket Combustor Dynamics, Award Number FA9550-17-1-0195, as well as the U.S. Department of Energy, Office of Advanced Scientific Computing Research (ASCR), Applied Mathematics Program, awards DE-FG02-08ER2585 and DE-SC0009297, as part of the DiaMonD Multifaceted Mathematics Integrated Capability Center. †Department of Aeronautics and Astronautics, MIT, Cambridge, MA (elizqian@mit.edu, kwillcox@mit.edu). ‡Department of Mechanical Engineering, University of Wisconsin–Madison, Madison, WI (peherstorfer@wisc.edu). §Computational Earth Science Group, Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, NM (omalled@lanl.gov, vvv@lanl.gov).

Sensitivity analysis plays an important role in the development and analysis of numerical models; see [20] for a comprehensive review. In addition to the classical local approach, entailing deterministic calculation of partial derivatives, a multitude of global approaches exist, which seek to characterize how the overall input uncertainty affects the uncertainty of the model output. These include screening methods [30], correlation ratios [16, 24, 25, 46], variance-based methods [11, 18, 36, 40, 45, 50], entropy-based methods [28], and moment-independent methods [3, 4, 8, 41]. Our interest is in the Sobol' variance-based sensitivity indices [50], which attribute portions of output variance to the influence of individual inputs and their interactions and have been used in a variety of applications [1, 7, 14, 34, 47].

Sobol' sensitivity indices are typically estimated via Monte Carlo methods, using "fixing methods" which estimate sensitivity indices by holding one or more inputs constant while varying the others. Rather than using an inner and outer Monte Carlo loop to do so, Saltelli et al. propose a method that allows computation of all sensitivity indices using a single loop [44].
Saltelli's method is commonly used in application, but the number of function evaluations required per Monte Carlo sample scales linearly with the number of uncertain input parameters. This, combined with the sublinear convergence rate of the root mean squared error (RMSE) of Monte Carlo estimators, means that computation of Sobol' sensitivities quickly becomes computationally prohibitive when the model is expensive and the dimension of the input is high. To address this, several authors have proposed estimators with improved precision, including [22, 29, 35]. These estimators have reduced variance relative to those initially proposed by Sobol', although some of them are biased. To achieve further acceleration, other work [1, 2, 19, 49] replaces the expensive model with cheaper low-fidelity models (sometimes referred to as metamodels or surrogates). One drawback of these approaches is that estimation based on the surrogate model introduces bias relative to the high-fidelity model, and procedures for error estimation exist only in limited settings [15, 23, 51]. We develop a multifidelity approach which combines the high-fidelity model with low-fidelity models, resulting in computational speedups while retaining accuracy.

Multifidelity formulations have in recent years been shown to provide significant computational gains in Monte Carlo estimation for uncertainty propagation and for optimization under uncertainty [39]. For uncertainty propagation, multifidelity formulations use surrogate models to reduce the cost of Monte Carlo estimators. The multilevel Monte Carlo method employs a hierarchy of coarse grids and exploits the known relationships between error and cost at each grid level [13]; it has been used to accelerate the convergence of variance estimation [2]. In stochastic collocation, the outputs of a low-fidelity model are corrected with a discrepancy model that accounts for the difference between the high- and the low-fidelity model [12].
Multifidelity stochastic collocation approaches have been shown to have bounded error and fast convergence [31, 52]. The multifidelity Monte Carlo (MFMC) method [32, 38] accelerates the estimation of model statistics by using general surrogate models as control variates. In the particular case of optimization under uncertainty, the control variate can be formed using the high-fidelity model's autocorrelation across the design space, i.e., using model evaluations from previous optimization iterates at nearby design points as a so-called information reuse control variate [33].

Here, we build on the MFMC method [38] and present new multifidelity estimators for the variance and main effect sensitivity indices. Multifidelity formulations which target estimation of Sobol' indices have been presented in [15, 26, 27, 37]. The work in [15] considers a setting where, in addition to having random inputs, the model itself is stochastic. We do not consider that setting here. In [27], the Sobol' indices are sampled from a Gaussian process which approximates a high-fidelity computer code. In this approach, lower-fidelity models can be introduced via a cokriging model, thus increasing the quality of the Gaussian process approximation without incurring additional evaluations of the expensive high-fidelity model. In [37], the Sobol' indices are obtained from a polynomial chaos expansion derived from a low-fidelity model with a correction polynomial chaos expansion derived from the difference between the low- and the high-fidelity model at some inputs. In both [27, 37], the Sobol' indices are obtained for a surrogate model, and the multifidelity formulation serves to efficiently increase the quality of the surrogate employed. This means that the result will be biased relative to the original high-fidelity model. In contrast, our approach samples directly from the high-fidelity model to ensure that our estimates are unbiased.
Our approach is most closely related to the control variate formulation of [26], which uses the first-order terms of the analysis-of-variance (ANOVA) decomposition as the control variate. Our framework is also based on control variates but does not restrict the type or number of surrogate models used. Additionally, the work in this paper presents a strategy for distributing work among the available models given a limited computational budget.

Section 2 introduces the Sobol' variance-based sensitivity indices and the Monte Carlo estimation procedure. Section 3 introduces our multifidelity formulations for variance and sensitivity index estimation, presents their accuracy guarantees, and discusses model management strategies. Results are presented for an analytical example in section 4 and for a numerical example in section 5. Conclusions are presented in section 6.

2. Setting. In this section, we introduce Sobol' global sensitivity analysis. Subsection 2.1 presents the underlying mathematical theory and defines the Sobol' main and total effect sensitivity indices. Subsection 2.2 introduces the corresponding Monte Carlo estimators.

2.1. Variance-based global sensitivity analysis. Consider a model $f : \mathcal{Z} \to \mathcal{Y}$ that maps a $d$-dimensional input $z \in \mathcal{Z} \subseteq \mathbb{R}^d$ to a scalar output $y \in \mathcal{Y} \subseteq \mathbb{R}$ of our system of interest. The input domain $\mathcal{Z} = \mathcal{Z}_1 \times \cdots \times \mathcal{Z}_d$ is the product of the domains $\mathcal{Z}_1, \ldots, \mathcal{Z}_d \subseteq \mathbb{R}$. Let $(\Omega, \mathcal{F}, P)$ be a probability space with sample space $\Omega$, $\sigma$-algebra $\mathcal{F}$, and probability measure $P$, and let $Z : \Omega \to \mathcal{Z}$ be a random vector $Z = (Z(1), \ldots, Z(d))^T$ with independent components $Z(i) : \Omega \to \mathcal{Z}_i$ for $i = 1, \ldots, d$. Because the components of $Z$ are independent, the probability density function $\mu(z)$ is the product of its marginals, $\mu(z) = \mu_1(z(1))\,\mu_2(z(2)) \cdots \mu_d(z(d))$. We now consider $Z$ as an uncertain input to model $f$ and $f(Z)$ as the uncertain output. If the mean $E[f(Z)] = \int f(z)\,\mu(dz)$ and variance $\mathrm{Var}[f(Z)] = \int (f(z) - E[f(Z)])^2\,\mu(dz)$ of $f$ are finite, then $f(z)$ may be expressed as the sum of functions of subsets of its inputs [17],

(1)  $f(z) = f_0 + \sum_{i=1}^{d} f_i(z(i)) + \sum_{1 \le i < j \le d} f_{i,j}(z(i), z(j)) + \cdots + f_{1,2,\ldots,d}(z) = \sum_{u \subseteq I} f_u(z(u)),$

with $I = \{1, \ldots, d\}$, $z(u) = \{z(i) : i \in u\}$, and the component functions $f_u : \prod_{i \in u} \mathcal{Z}_i \to \mathcal{Y}$ for $u \subseteq I$. This expression is unique if we enforce the following orthogonality condition:

(2)  $\int f_u(z(u))\,\mu_j(dz(j)) = 0 \quad \forall\, j \in u,\ u \subseteq I.$

The decomposition given by (1) satisfying (2) is known as the ANOVA high-dimensional model representation (ANOVA HDMR) because the orthogonality ensures $f_0 = E[f(Z)]$ and $E[f_u(Z)] = 0$ for $u \subseteq I$, $u \ne \emptyset$, which allows the variance $V \equiv \sigma^2 \equiv \mathrm{Var}[f(Z)]$ to then be decomposed as [50]

$\mathrm{Var}[f(Z)] = \int f^2(z)\,\mu(dz) - f_0^2 = \sum_{i=1}^{d} \mathrm{Var}[f_i(Z(i))] + \sum_{1 \le i < j \le d} \mathrm{Var}[f_{i,j}(Z(i), Z(j))] + \cdots + \mathrm{Var}[f_{1,2,\ldots,d}(Z)].$

Sobol' defined the sensitivity indices $s_u = \mathrm{Var}[f_u(Z(u))]/V$ for $u \subseteq I$ [50]. Of particular interest are the Sobol' indices for subsets $u \subseteq I$ with cardinality $|u| = 1$, which are the portions of the output variance that can be attributed to the influence of a single input alone, denoted

(3)  $V_j = \mathrm{Var}_{\mu_j}[f_j(Z(j))].$

We are also interested in the total variance contributed by the input $Z(j)$, denoted

(4)  $T_j = \sum_{\{u : j \in u\}} \mathrm{Var}_{\mu_u}[f_u(Z(u))].$

This allows us to define the Sobol' main and total effect sensitivity indices for input $j$ as the fraction of variance contributed by $Z(j)$ alone and by the sum of contributions influenced by $Z(j)$, respectively.

Definition 2.1. The Sobol' main effect sensitivity index for input $j$ is given by

(5)  $s_j = \frac{V_j}{V} = \frac{\mathrm{Var}_{\mu_j}[f_j(Z(j))]}{\mathrm{Var}[f(Z)]}, \quad j = 1, \ldots, d.$

Definition 2.2.
The Sobol' total effect sensitivity index for input $j$ is given by

(6)  $s_j^t = \frac{T_j}{V} = \frac{\sum_{\{u : j \in u\}} \mathrm{Var}_{\mu_u}[f_u(Z(u))]}{\mathrm{Var}[f(Z)]} = \frac{E_{\mu_{\bar{j}}}\!\left[\mathrm{Var}_{\mu_j}[f(Z) \mid Z(\bar{j})]\right]}{\mathrm{Var}[f(Z)]},$

where $\bar{j}$ denotes the set of all inputs excluding the $j$th input, i.e., $\bar{j} = I \setminus \{j\}$.

2.2. Monte Carlo estimation of variance and sensitivity indices.

Variance estimation. Let $\{z_1, \ldots, z_n\}$ denote $n \in \mathbb{N}$ independent realizations of the input $Z$. The sample mean and variance are given by

(7)  $\hat{E} = \frac{1}{n} \sum_{i=1}^{n} f(z_i) \quad \text{and} \quad \hat{V} = \frac{1}{n-1} \sum_{i=1}^{n} \left( f(z_i) - \hat{E} \right)^2,$

respectively. The variance estimator has expected value $E[\hat{V}] = \mathrm{Var}[f(Z)]$ and variance $\mathrm{Var}[\hat{V}] = \frac{1}{n}\left( \delta - \frac{n-3}{n-1}\sigma^4 \right)$, where $\delta = E[(f(Z) - E[f(Z)])^4]$ is the fourth central moment of $f$ [9].

Sensitivity index estimation. The sensitivity indices in Definitions 2.1 and 2.2 are typically estimated via Monte Carlo integration, using "fixing methods" to estimate $V_j$ and $T_j$. These methods use a second set of $n$ independent realizations of $Z$, denoted $\{z_1', \ldots, z_n'\}$. Define

(8)  $y_i^{(j)} = \left( z_i'(1), \ldots, z_i'(j-1),\ z_i(j),\ z_i'(j+1), \ldots, z_i'(d) \right).$

The estimator for $V_j$ (resp., $T_j$) is the empirical covariance of the data set of $f(y_i^{(j)})$ and $f(z_i)$ (resp., $f(z_i')$) pairs, and $\hat{s}_j$ and $\hat{s}_j^t$ are obtained by normalizing by $\hat{V}$ given by (7). The main effect Sobol' estimator is given by [50]

(9)  $\hat{V}_{j,\mathrm{sobol}} = \frac{1}{n} \sum_{i=1}^{n} f(z_i)\, f\!\left( y_i^{(j)} \right) - \hat{E}^2,$

with $\hat{E}$ given by (7). To estimate main effect sensitivities in $d$ inputs using (9), we require $(d+1)$ function evaluations per Monte Carlo sample, i.e., a total of $n(d+1)$ evaluations of $f$.

Variants on the Sobol' estimator exist, including the Saltelli estimator [43], which modifies the Sobol' estimator (9) by dividing the sum by $(n-1)$ rather than $n$, and the work by Janon et al. [22], which shows that replacing $\hat{E}$ with the alternative sample mean estimator $\tilde{E} = \frac{1}{2n} \sum_{i=1}^{n} \left( f(z_i) + f(y_i^{(j)}) \right)$ lowers the variance of the $s_j$ estimator in the asymptotic limit. The bias of these estimators is of order $O(1/n)$.
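The fixing method of (7)-(9) can be sketched in a few lines of NumPy; the function names, the sampler interface, and the additive test model used below are our own illustration conventions, not from the paper:

```python
import numpy as np

def sobol_main_effect(f, sampler, n, seed=0):
    """Main-effect indices via the Sobol' pick-freeze estimator (9):
    Vhat_j = (1/n) sum_i f(z_i) f(y_i^(j)) - Ehat^2, normalized by Vhat of (7)."""
    rng = np.random.default_rng(seed)
    z, zp = sampler(rng, n), sampler(rng, n)   # two independent n x d sample sets
    fz = f(z)
    e_hat = fz.mean()                          # Ehat of (7)
    v_hat = fz.var(ddof=1)                     # unbiased Vhat of (7)
    s = np.empty(z.shape[1])
    for j in range(z.shape[1]):
        y = zp.copy()
        y[:, j] = z[:, j]                      # y_i^(j) of (8): freeze input j
        s[j] = (np.mean(fz * f(y)) - e_hat**2) / v_hat
    return s, v_hat
```

For $d$ inputs this uses $n(d+1)$ evaluations of $f$, matching the count noted above.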
In [36], Owen introduces a bias-corrected version of the Janon estimator, given by

(10)  $\hat{V}_{j,\mathrm{owen}} = \frac{2n}{2n-1} \left( \frac{1}{n} \sum_{i=1}^{n} f(z_i)\, f\!\left( y_i^{(j)} \right) - \left( \frac{\hat{E} + \hat{E}'}{2} \right)^2 + \frac{\hat{V} + \hat{V}'}{4n} \right),$

where $\hat{E}, \hat{E}'$ and $\hat{V}, \hat{V}'$ are the sample means and variances, respectively, estimated using $z_1, \ldots, z_n$ and $z_1', \ldots, z_n'$, respectively.

To estimate $T_j$, the estimator introduced by Homma and Saltelli [18] is given by

(11)  $\hat{T}_{j,\mathrm{homma}} = \hat{V} - \left( \frac{1}{n-1} \sum_{i=1}^{n} f(z_i')\, f\!\left( y_i^{(j)} \right) - \hat{E}^2 \right).$

This estimator has an $O(1/n)$ bias. Owen suggests an alternative estimator,

(12)  $\hat{T}_{j,\mathrm{owen}} = \frac{1}{2n} \sum_{i=1}^{n} \left( f(z_i') - f\!\left( y_i^{(j)} \right) \right)^2,$

which is an unbiased estimator of $T_j$ [36].
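The Owen estimators (10) and (12) can be sketched as follows, under the same assumed conventions (NumPy arrays of shape n x d; function and variable names are ours):

```python
import numpy as np

def owen_estimators(f, z, zp):
    """Owen's bias-corrected main-effect estimator (10) and unbiased
    total-effect estimator (12), given two independent n x d sample sets."""
    n, d = z.shape
    fz, fzp = f(z), f(zp)
    e1, e2 = fz.mean(), fzp.mean()             # Ehat, Ehat'
    v1, v2 = fz.var(ddof=1), fzp.var(ddof=1)   # Vhat, Vhat'
    Vj, Tj = np.empty(d), np.empty(d)
    for j in range(d):
        y = zp.copy()
        y[:, j] = z[:, j]                      # y_i^(j) of (8)
        fy = f(y)
        # (10): (2n/(2n-1)) [ mean(fz*fy) - ((E+E')/2)^2 + (V+V')/(4n) ]
        Vj[j] = (2*n/(2*n - 1)) * (np.mean(fz*fy)
                                   - ((e1 + e2)/2)**2 + (v1 + v2)/(4*n))
        # (12): (1/(2n)) sum_i (f(z'_i) - f(y_i^(j)))^2
        Tj[j] = np.mean((fzp - fy)**2) / 2
    return Vj, Tj
```

For an additive test model the main and total effects coincide, which gives a simple sanity check.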

Although the Saltelli estimators are the most widely cited in the literature, the unbiased alternatives are better suited to the multifidelity theory we develop in section 3. Given the popularity of the Saltelli estimators, however, we will present results for multifidelity estimators based both on the Saltelli estimators as well as on their unbiased alternatives.

3. Multifidelity global sensitivity analysis approach. We now consider the multifidelity setting where we have $K$ models: In addition to our high-fidelity model $f$, which we will hence denote $f^{(1)}$, we also have $K-1$ surrogate models $f^{(k)}$ for $k = 2, \ldots, K$. In contrast to multilevel methods, the surrogates $f^{(k)}$ are not limited to hierarchical discretizations but may include projection-based reduced models, support vector machines, data-fit interpolation and regression, and simplified-physics models. This section introduces multifidelity estimators that leverage all available models to provide efficient estimators for the variance and the Sobol' main and total effect indices.

3.1. MFMC variance estimation. Let $m = [m_1, \ldots, m_K]^T \in \mathbb{N}^K$ be a vector with $m_1 > 0$ and $m_l \le m_k$ for $l \le k$. We draw $m_K$ realizations of $Z$, denoted $z_1, \ldots, z_{m_K} \in \mathcal{Z}$. Let $\hat{V}_n^{(k)}$ denote the unbiased Monte Carlo sample estimate of $\mathrm{Var}[f^{(k)}(Z)]$ evaluated using the first $n$ realizations of $z$, given by (7). We can now introduce our multifidelity variance estimator.

Proposition 3.1. The multifidelity variance estimator given by

(13)  $\hat{V}_{\mathrm{mf}} = \hat{V}_{m_1}^{(1)} + \sum_{k=2}^{K} \alpha_k \left( \hat{V}_{m_k}^{(k)} - \hat{V}_{m_{k-1}}^{(k)} \right)$

is an unbiased estimator of $\mathrm{Var}[f^{(1)}(Z)]$. In (13), $\alpha_2, \ldots, \alpha_K$ are control variate coefficients which will be determined by the model management strategy (see subsection 3.3).

Proof.
It follows from linearity of expectation that

$E[\hat{V}_{\mathrm{mf}}] = E\!\left[ \hat{V}_{m_1}^{(1)} + \sum_{k=2}^{K} \alpha_k \left( \hat{V}_{m_k}^{(k)} - \hat{V}_{m_{k-1}}^{(k)} \right) \right] = E[\hat{V}_{m_1}^{(1)}] + \sum_{k=2}^{K} \alpha_k \left( E[\hat{V}_{m_k}^{(k)}] - E[\hat{V}_{m_{k-1}}^{(k)}] \right).$

Since $E[\hat{V}_n^{(k)}] = \mathrm{Var}[f^{(k)}(Z)]$ for $n \ge 2$, we have $E[\hat{V}_{m_k}^{(k)}] - E[\hat{V}_{m_{k-1}}^{(k)}] = \mathrm{Var}[f^{(k)}(Z)] - \mathrm{Var}[f^{(k)}(Z)] = 0$ for $k = 2, \ldots, K$, and $E[\hat{V}_{\mathrm{mf}}] = \mathrm{Var}[f^{(1)}(Z)]$ follows.

Note that $\hat{V}_{m_k}^{(k)}$ reuses the $m_{k-1}$ function evaluations used to compute $\hat{V}_{m_{k-1}}^{(k)}$. We now prove a lemma that will help us assess the quality of the MFMC variance estimator.

Lemma 3.2. Let $\hat{V}_n^{(k)}$, $n \le m_K$, denote the Monte Carlo variance estimator computed with model $f^{(k)}$ at the first $n$ input realizations in the set $\{z_i\}_{i \in \mathbb{N},\, 1 \le i \le m_K}$. Without loss of generality, let $m \ge n$. Then the covariance of two estimators $\hat{V}_m^{(k)}$ and $\hat{V}_n^{(l)}$ is given as follows for $1 \le k, l \le K$:

(14)  $\mathrm{Cov}[\hat{V}_m^{(k)}, \hat{V}_n^{(l)}] = \begin{cases} \dfrac{1}{m}\left( q_{k,l}\,\tau_k \tau_l + \dfrac{2}{m-1}\,\rho_{k,l}^2\, \sigma_k^2 \sigma_l^2 \right) & \text{if } k \ne l, \\[2mm] \dfrac{1}{m}\left( \delta_k - \dfrac{m-3}{m-1}\,\sigma_k^4 \right) & \text{if } k = l, \end{cases}$

where $\rho_{k,l} = \frac{\mathrm{Cov}[f^{(k)}(Z),\, f^{(l)}(Z)]}{\sigma_k \sigma_l}$ is the Pearson product-moment correlation coefficient between $f^{(k)}(Z)$ and $f^{(l)}(Z)$; $\sigma_k$ is the standard deviation of $f^{(k)}(Z)$; $\delta_k$ is the fourth central moment of $f^{(k)}(Z)$; $\tau_k$ is the standard deviation of $g^{(k)}(Z) = (f^{(k)}(Z) - E[f^{(k)}(Z)])^2$; and $q_{k,l} = \frac{\mathrm{Cov}[g^{(k)}(Z),\, g^{(l)}(Z)]}{\tau_k \tau_l}$.

Proof. Using $k_\beta = f^{(k)}(z_\beta)$ and $l_\gamma = f^{(l)}(z_\gamma)$ as shorthand, note that

$\mathrm{Cov}[\hat{V}_m^{(k)}, \hat{V}_n^{(l)}] = \frac{1}{4m(m-1)n(n-1)} \sum_{a=1}^{m} \sum_{b=1}^{m} \sum_{c=1}^{n} \sum_{d=1}^{n} \mathrm{Cov}\!\left[ (k_a - k_b)^2, (l_c - l_d)^2 \right].$

Let $\chi_{abcd} = \mathrm{Cov}[(k_a - k_b)^2, (l_c - l_d)^2]$. If $\{a,b\} \cap \{c,d\} = \emptyset$, or if $a = b$ or $c = d$, then $\chi_{abcd} = 0$. Otherwise, if $\{a,b\} = \{c,d\}$, then $\chi_{abcd} = 2 q_{k,l}\tau_k\tau_l + 4\rho_{k,l}^2\sigma_k^2\sigma_l^2$; there are $2n(n-1)$ such terms. Otherwise, when $|\{a,b\} \cap \{c,d\}| = 1$, $\chi_{abcd} = q_{k,l}\tau_k\tau_l$; there are $4n(n-1)(m-2)$ such terms, so

(15)  $\mathrm{Cov}[\hat{V}_m^{(k)}, \hat{V}_n^{(l)}] = \frac{2n(n-1)\left( 2q_{k,l}\tau_k\tau_l + 4\rho_{k,l}^2\sigma_k^2\sigma_l^2 \right) + 4n(n-1)(m-2)\,q_{k,l}\tau_k\tau_l}{4m(m-1)n(n-1)} = \frac{1}{m}\, q_{k,l}\tau_k\tau_l + \frac{2}{m(m-1)}\, \rho_{k,l}^2\sigma_k^2\sigma_l^2.$

When $k = l$, note that (15) can be rewritten as $\frac{1}{m}\left( \delta_k - \frac{m-3}{m-1}\sigma_k^4 \right)$ using $\rho_{k,k} = 1$, $q_{k,k} = 1$, and $\tau_k^2 = \delta_k - \sigma_k^4$.

We can now make a statement about the quality of the MFMC estimator.

Theorem 3.3. The variance of the MFMC variance estimator (13) is given by

(16)  $\mathrm{Var}[\hat{V}_{\mathrm{mf}}] = \frac{1}{m_1}\left( \delta_1 - \frac{m_1-3}{m_1-1}\sigma_1^4 \right) + \sum_{k=2}^{K} \alpha_k^2 \left[ \frac{1}{m_{k-1}}\left( \delta_k - \frac{m_{k-1}-3}{m_{k-1}-1}\sigma_k^4 \right) - \frac{1}{m_k}\left( \delta_k - \frac{m_k-3}{m_k-1}\sigma_k^4 \right) \right] + 2 \sum_{k=2}^{K} \alpha_k \left[ \frac{1}{m_k}\, q_{1,k}\tau_1\tau_k + \frac{2}{m_k(m_k-1)}\, \rho_{1,k}^2\sigma_1^2\sigma_k^2 - \frac{1}{m_{k-1}}\, q_{1,k}\tau_1\tau_k - \frac{2}{m_{k-1}(m_{k-1}-1)}\, \rho_{1,k}^2\sigma_1^2\sigma_k^2 \right].$

Proof. The variance of a sum of random variables is the sum of their covariances:

$\mathrm{Var}[\hat{V}_{\mathrm{mf}}] = \mathrm{Var}[\hat{V}_{m_1}^{(1)}] + \sum_{k=2}^{K} \alpha_k^2 \left( \mathrm{Var}[\hat{V}_{m_k}^{(k)}] + \mathrm{Var}[\hat{V}_{m_{k-1}}^{(k)}] - 2\,\mathrm{Cov}[\hat{V}_{m_k}^{(k)}, \hat{V}_{m_{k-1}}^{(k)}] \right) + 2 \sum_{k=2}^{K} \alpha_k \left( \mathrm{Cov}[\hat{V}_{m_1}^{(1)}, \hat{V}_{m_k}^{(k)}] - \mathrm{Cov}[\hat{V}_{m_1}^{(1)}, \hat{V}_{m_{k-1}}^{(k)}] \right) + 2 \sum_{k=2}^{K} \sum_{j=k+1}^{K} \alpha_k \alpha_j \left( \mathrm{Cov}[\hat{V}_{m_k}^{(k)}, \hat{V}_{m_j}^{(j)}] - \mathrm{Cov}[\hat{V}_{m_{k-1}}^{(k)}, \hat{V}_{m_j}^{(j)}] - \mathrm{Cov}[\hat{V}_{m_k}^{(k)}, \hat{V}_{m_{j-1}}^{(j)}] + \mathrm{Cov}[\hat{V}_{m_{k-1}}^{(k)}, \hat{V}_{m_{j-1}}^{(j)}] \right).$

Using the covariances from Lemma 3.2, since $m_1 \le \cdots \le m_K$, each covariance depends only on the larger of the two sample counts, so the terms in the double sum over $j > k$ cancel, and Theorem 3.3 follows.
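A compact sketch of the estimator (13) with the nested sample reuse noted above (the function signature is our own; `models[0]` is the high-fidelity model and `m` is nondecreasing):

```python
import numpy as np

def mfmc_variance(models, alphas, m, z):
    """MFMC variance estimator (13):
    Vhat_mf = Vhat^(1)_{m1} + sum_k alpha_k (Vhat^(k)_{mk} - Vhat^(k)_{m_{k-1}}),
    where all models share the nested sample sets z[:m[k]]."""
    v = models[0](z[:m[0]]).var(ddof=1)          # high-fidelity term
    for k in range(1, len(models)):
        fk = models[k](z[:m[k]])                 # surrogate evaluations
        # Vhat^(k)_{m_{k-1}} reuses the first m_{k-1} of these evaluations
        v += alphas[k-1] * (fk.var(ddof=1) - fk[:m[k-1]].var(ddof=1))
    return v
```

Because each correction term has zero mean, the estimate stays unbiased for the high-fidelity variance regardless of surrogate quality; the surrogate only affects the estimator's variance.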

3.2. Multifidelity sensitivity index estimation. We draw a second set of $m_K$ realizations of $Z$, denoted $z_1', \ldots, z_{m_K}' \in \mathcal{Z}$. Let $y_i^{(j)}$ be defined as in subsection 2.2; i.e., $y_i^{(j)}$ is the $i$th realization of this second set, $z_i'$, whose $j$th component has been replaced by the $j$th component of $z_i$, the $i$th input realization in the original set. For each $j$, the set $\{y_i^{(j)} : i = 1, \ldots, m_K\}$ is used to estimate the sensitivity index corresponding to the $j$th input variable.

Let $\hat{V}_{j,n}^{(k)}$ and $\hat{T}_{j,n}^{(k)}$ denote the estimators of $V_j$ and $T_j$ given by (10) and (12), respectively, evaluated using model $f^{(k)}$ at the first $n$ pairs $(z, y^{(j)})$. We now introduce our multifidelity sensitivity index estimators.

Theorem 3.4. Using the Owen estimators for $\hat{V}_{j,n}^{(k)}$ and $\hat{T}_{j,n}^{(k)}$ given by (10) and (12), the multifidelity estimators for $V_j$ and $T_j$ given by

(17)  $\hat{V}_{j,\mathrm{mf}} = \hat{V}_{j,m_1}^{(1)} + \sum_{k=2}^{K} \alpha_k \left( \hat{V}_{j,m_k}^{(k)} - \hat{V}_{j,m_{k-1}}^{(k)} \right)$

and

(18)  $\hat{T}_{j,\mathrm{mf}} = \hat{T}_{j,m_1}^{(1)} + \sum_{k=2}^{K} \alpha_k \left( \hat{T}_{j,m_k}^{(k)} - \hat{T}_{j,m_{k-1}}^{(k)} \right)$

are unbiased estimators of $V_j$ and $T_j$, where $\alpha_2, \ldots, \alpha_K$ are control variate coefficients.

Since (10) and (12) are unbiased [36], the proof is analogous to that of Proposition 3.1. We can then evaluate $\hat{s}_{j,\mathrm{mf}} = \hat{V}_{j,\mathrm{mf}} / \hat{V}_{\mathrm{mf}}$ and $\hat{s}_{j,\mathrm{mf}}^t = \hat{T}_{j,\mathrm{mf}} / \hat{V}_{\mathrm{mf}}$. These ratios of estimators are biased estimators for the sensitivity indices $s_j$ and $s_j^t$. However, since $\hat{V}_{j,\mathrm{mf}}$, $\hat{T}_{j,\mathrm{mf}}$, and $\hat{V}_{\mathrm{mf}}$ are all unbiased estimators, our sensitivity index estimator is consistent with the best practices in Monte Carlo Sobol' index estimation [36, 43].

We note that the Saltelli estimators could also be used within (17) and (18) to create a "Saltelli-based" multifidelity estimator.
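The estimators (17)-(18), with (10) and (12) applied per model and the ratio estimates formed at the end, might be sketched as follows (helper names and interfaces are our own; the denominator is the MFMC variance estimate (13)):

```python
import numpy as np

def _owen_vt(f, z, zp):
    """Owen main-effect (10) and total-effect (12) estimates for one model."""
    n, d = z.shape
    fz, fzp = f(z), f(zp)
    e = (fz.mean() + fzp.mean()) / 2                 # (Ehat + Ehat')/2
    v = (fz.var(ddof=1) + fzp.var(ddof=1)) / 2       # (Vhat + Vhat')/2
    Vj, Tj = np.empty(d), np.empty(d)
    for j in range(d):
        y = zp.copy()
        y[:, j] = z[:, j]
        fy = f(y)
        Vj[j] = (2*n/(2*n - 1)) * (np.mean(fz*fy) - e**2 + v/(2*n))
        Tj[j] = np.mean((fzp - fy)**2) / 2
    return Vj, Tj

def mfmc_sensitivities(models, alphas, m, z, zp):
    """(17)-(18): control-variate combination of per-model Owen estimators on
    nested sample sets, normalized by the MFMC variance (13)."""
    Vj, Tj = _owen_vt(models[0], z[:m[0]], zp[:m[0]])
    vhat = models[0](z[:m[0]]).var(ddof=1)
    for k in range(1, len(models)):
        hi_V, hi_T = _owen_vt(models[k], z[:m[k]], zp[:m[k]])
        lo_V, lo_T = _owen_vt(models[k], z[:m[k-1]], zp[:m[k-1]])
        Vj += alphas[k-1] * (hi_V - lo_V)
        Tj += alphas[k-1] * (hi_T - lo_T)
        fk = models[k](z[:m[k]])
        vhat += alphas[k-1] * (fk.var(ddof=1) - fk[:m[k-1]].var(ddof=1))
    return Vj / vhat, Tj / vhat                      # s_{j,mf}, s^t_{j,mf}
```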
Given the popularity of the Saltelli estimators, we present results for both the Owen-based and the Saltelli-based formulations in sections 4 and 5. However, since the expectation of the multifidelity estimator is the expectation of $\hat{V}_{j,m_1}^{(1)}$ (resp., $\hat{T}_{j,m_1}^{(1)}$), the $O(1/n)$ bias of the Saltelli estimators will be preserved in a Saltelli-based multifidelity formulation and may in fact be exacerbated by the fact that $m_1$ is ideally a small number.

3.3. Model management. Let $w_k$, $k = 1, \ldots, K$, denote the time it takes to evaluate $f^{(k)}$ once, and let $p > 0$ be our computational budget. We are interested in determining the coefficients $\alpha_k$ and the numbers of model evaluations $m$ so as to efficiently use the computational time available to us. In [38], analytical $\alpha_k$ and $m_k$ assignments that minimize the MSE of the MFMC mean estimate given a fixed computational budget are presented. This result is restated in Theorem 3.5.

Theorem 3.5. If the $K$ models $f^{(1)}, \ldots, f^{(K)}$ satisfy $|\rho_{1,1}| > \cdots > |\rho_{1,K}|$ and have costs satisfying

$\frac{w_{k-1}}{w_k} > \frac{\rho_{1,k-1}^2 - \rho_{1,k}^2}{\rho_{1,k}^2 - \rho_{1,k+1}^2}$

for $k = 2, \ldots, K$ (with the convention $\rho_{1,K+1} = 0$), and the components of $r = [r_1, \ldots, r_K]^T$ are given by

$r_k = \sqrt{\frac{w_1 \left( \rho_{1,k}^2 - \rho_{1,k+1}^2 \right)}{w_k \left( 1 - \rho_{1,2}^2 \right)}},$

then, given a maximum computational budget $p$, the assignments $\alpha_k = \frac{\rho_{1,k}\,\sigma_1}{\sigma_k}$, $m_1 = \frac{p}{w^T r}$, and $m_k = m_1 r_k$ for $k = 2, \ldots, K$ minimize the MSE of the multifidelity mean estimator given by

(19)  $\hat{E}_{\mathrm{MF}} = \hat{E}_{m_1}^{(1)} + \sum_{k=2}^{K} \alpha_k \left( \hat{E}_{m_k}^{(k)} - \hat{E}_{m_{k-1}}^{(k)} \right),$

where $\hat{E}_{m_k}^{(k)}$ is the sample mean computed using model $k$ and the first $m_k$ samples in the multifidelity approach.

For variance estimation, we formulate an optimization problem to minimize the MSE of the MFMC variance estimate:

(20a)  $\min_{\alpha_2, \ldots, \alpha_K,\, m_1, \ldots, m_K} \mathrm{Var}[\hat{V}_{\mathrm{mf}}]$

(20b)  s.t. $0 < m_1 \le m_2 \le \cdots \le m_K$ and $\sum_{k=1}^{K} w_k m_k \le p$.

Solving the nonlinear optimization problem (20) yields optimal $\alpha_k$ and $m_k$ assignments. In practice, the model statistics ($\rho_{1,k}, \sigma_k, \delta_k, q_{1,k}, \tau_k$) which enter into the objective function (16) are unknown and must be estimated, making (20) an optimization under uncertainty. One way to treat this is to estimate these model statistics in a pilot run with a small number of samples. However, the resultant model allocation may be sensitive to variation in these estimates, since the objective function (20a) has fourth-order dependencies on these statistics. When estimates for Sobol' indices are also desired, in principle a similar optimization problem could be formulated, but the difficulties resulting from unknown model statistics are compounded both by the fact that statistics would need to be estimated for individual terms of the ANOVA decomposition (1) and by the fact that the optimization becomes multiobjective. Finding a different optimal allocation for each sensitivity index is likely to be sensitive to statistic estimates and therefore impractical.

In contrast, the Theorem 3.5 allocation depends only on $\rho_{1,k}$ and $\sigma_k$, and the dependencies are at most second-order.
Additionally, the work [38] has demonstrated that this allocation is insensitive to estimates of unknown model statistics. In application, mean estimates are often desired in addition to variance and sensitivity estimates. It is thus both more practical and more robust to use the same mean-optimal Theorem 3.5 allocation to estimate the variance and all sensitivity indices. Because each Monte Carlo sample for sensitivity index estimation requires $(d+2)$ function evaluations (of which two evaluations are independent and may be used for mean and variance estimation), we use Theorem 3.5 with an effective budget $p_{\mathrm{eff}} = p/(d+2)$. This effective budget is then distributed across the available models, and the same set of samples is used to estimate the mean, variance, and sensitivity indices.
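The recommended recipe (Theorem 3.5 with the effective budget $p_{\mathrm{eff}} = p/(d+2)$) can be sketched as follows; the function name and the rounding of the $m_k$ to integers are our own choices, and we adopt the convention $\rho_{1,K+1} = 0$:

```python
import numpy as np

def mfmc_allocation(w, rho, sigma, p, d):
    """Theorem 3.5 allocation applied with p_eff = p/(d+2).
    w[k]: cost of model k+1; rho[k]: correlation with the high-fidelity
    model (rho[0] = 1); sigma[k]: standard deviation of model k+1."""
    w = np.asarray(w, float)
    rho = np.asarray(rho, float)
    sigma = np.asarray(sigma, float)
    p_eff = p / (d + 2)                   # (d+2) evaluations per MC sample
    rho2 = np.append(rho**2, 0.0)         # convention rho_{1,K+1} = 0
    # r_k = sqrt( w_1 (rho_{1,k}^2 - rho_{1,k+1}^2) / (w_k (1 - rho_{1,2}^2)) )
    r = np.sqrt(w[0] * (rho2[:-1] - rho2[1:]) / (w * (1.0 - rho2[1])))
    m1 = p_eff / (w @ r)                  # m_1 = p_eff / (w^T r)
    m = np.floor(m1 * r).astype(int)      # m_k = m_1 r_k (r_1 = 1); m_k >= 2 needed
    alpha = rho * sigma[0] / sigma        # alpha_k = rho_{1,k} sigma_1 / sigma_k
    return m, alpha[1:]
```

A very cheap, well-correlated surrogate receives far more samples than the high-fidelity model, which is the intended behavior.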

We will show in section 4 that the reduction in mean-squared error (MSE) achieved by using this heuristic allocation is comparable to that obtained by solving (20). Our recommendation for practice is therefore to use the Theorem 3.5 heuristic allocation for the estimation of variance and sensitivity indices.

Remark. We have noted that our proposed framework can accommodate surrogate models of any type. For some models, such as polynomial chaos expansions and Gaussian processes, the variances $V_j$ (5) and $T_j$ (6) can be analytically determined. If such a model were the $K$th (lowest-fidelity) model, we could take advantage of this by using the analytical values for $\hat{V}_{m_K}^{(K)}$, $\hat{V}_{j,m_K}^{(K)}$, and $\hat{T}_{j,m_K}^{(K)}$ while still sampling the model for the estimates $\hat{V}_{m_{K-1}}^{(K)}$, $\hat{V}_{j,m_{K-1}}^{(K)}$, and $\hat{T}_{j,m_{K-1}}^{(K)}$. This would free some portion of the budget to allow additional higher-fidelity evaluations.

4. Analytical example. We first demonstrate our method on an analytical example for which model statistics are known. This allows us to validate the theory developed in subsection 3.1 and to compare the two suggested model management approaches for multifidelity variance estimation and sensitivity analysis.

4.1. Ishigami function and models. The Ishigami function was first introduced in [21] and has been frequently used to test methods for sensitivity analysis and uncertainty quantification [18, 42, 48]. The function is given by

(21)  $f(Z) = \sin Z_1 + a \sin^2 Z_2 + b Z_3^4 \sin Z_1, \quad Z_i \sim \mathcal{U}(-\pi, \pi),$

and has ANOVA HDMR decomposition $f(Z) = f_0 + f_1(Z_1) + f_2(Z_2) + f_{13}(Z_1, Z_3)$ with

$f_0 = \frac{a}{2}, \quad f_1(Z_1) = \left( 1 + \frac{b\pi^4}{5} \right) \sin Z_1, \quad f_2(Z_2) = a \sin^2 Z_2 - \frac{a}{2}, \quad f_{13}(Z_1, Z_3) = b \sin Z_1 \left( Z_3^4 - \frac{\pi^4}{5} \right).$

The variance is $\mathrm{Var}[f(Z)] = \frac{1}{2} + \frac{a^2}{8} + \frac{b\pi^4}{5} + \frac{b^2\pi^8}{18}$, and the component variances are $V_1 = \frac{1}{2}\left( 1 + \frac{b\pi^4}{5} \right)^2$, $V_2 = \frac{a^2}{8}$, and $V_{13} = \pi^8 b^2 \left( \frac{1}{18} - \frac{1}{50} \right)$.
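The closed-form variances above are easy to check numerically; a minimal sketch (our own code, using the constants $a = 5$, $b = 0.1$ chosen below):

```python
import numpy as np

a, b = 5.0, 0.1

def ishigami(z):
    # (21): f(Z) = sin Z1 + a sin^2 Z2 + b Z3^4 sin Z1
    return np.sin(z[:, 0]) + a*np.sin(z[:, 1])**2 + b*z[:, 2]**4*np.sin(z[:, 0])

# Closed-form total and component variances from the ANOVA HDMR above
V   = 0.5 + a**2/8 + b*np.pi**4/5 + b**2*np.pi**8/18
V1  = 0.5*(1 + b*np.pi**4/5)**2
V2  = a**2/8
V13 = np.pi**8*b**2*(1/18 - 1/50)
check = V1 + V2 + V13                    # must equal the total variance V

# Monte Carlo check of the total variance with the sample estimator (7)
rng = np.random.default_rng(0)
z = rng.uniform(-np.pi, np.pi, size=(200_000, 3))
v_hat = ishigami(z).var(ddof=1)
```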
We set the constants $a = 5$ and $b = 0.1$ as in

