Quantification Of Uncertainty In Probabilistic Storm Surge Models


2nd Annual NRC PFHA Research Program Workshop
North Bethesda, MD – January 23-25, 2017

Quantification of Uncertainty in Probabilistic Storm Surge Models

Norberto C. Nadal-Caraballo, Ph.D.
Team: Victor Gonzalez, P.E.; Jeffrey A. Melby, Ph.D.; Amanda B. Lewis; Efrain Ramos-Santiago
Coastal and Hydraulics Laboratory, US Army Engineer R&D Center
Norberto.C.Nadal-Caraballo@usace.army.mil

Background

- The present study is part of U.S. NRC's Probabilistic Flood Hazard Assessment (PFHA) research plan.
- Support risk-informed licensing and oversight guidance and tools for assessment of flooding hazards at nuclear power plants.
- Evaluate uncertainty associated with the data, models, and methods used in probabilistic storm surge models for coastal flood hazard assessment.
- Storm surge hazard is expressed as a family of hazard curves representing epistemic uncertainty.
- Annual exceedance probabilities (AEPs) of interest for nuclear power plants, including AEPs that go beyond the state of practice for flood mapping (e.g., 10^-4 to 10^-6).

BUILDING STRONG 2

Computation of Storm Surge Hazard

- The estimation of storm surge hazard using historical observations in hurricane-prone areas is limited by a lack of adequate data.
- This has led to the development of methods that rely on the statistical characterization of the tropical cyclone (TC) forcing and subsequent modeling of the storm surge response.
- These methods have evolved into sophisticated joint probability approaches that allow for comprehensive quantification of uncertainty.
- The joint probability method with optimal sampling (JPM-OS) has become the standard of practice for quantifying storm surge hazard in coastal areas affected by TCs.
- Other methods include global climate models (GCM) with downscaling, and Monte Carlo simulation (MCS) methods.

General Overview and Logic Tree Approach

JPM integral:

  λ_{r(x) > r} = λ ∬ P[ r(x, ε) > r | x, ε ] f_X(x) f_ε(ε) dx dε

Discrete JPM approximation:

  λ_{r(x) > r} ≈ Σ_{i=1}^{n} λ_i P[ r(x_i, ε) > r ]

where:
- λ_{r(x) > r} = AEP of TC response r due to forcing vector x
- x = {x_o, θ, Δp, R_max, V_t}
- λ = SRR (storms/yr/km)
- λ_i = probability mass (storms/yr), or λ·p_i, with p_i the product of the discrete probability and the TC track spacing (km)
- P[ r(x_i, ε) > r ] = conditional probability that storm i with parameters x_i generates a response larger than r
- ε = unbiased error of r

[Logic tree figure: probabilistic storm surge model branches include response surface, stochastic track method, parametric and non-parametric statistics, Monte Carlo Life-Cycle (MCLC) simulation, and Monte Carlo integration.]
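The discrete JPM sum above can be sketched in a few lines. This is a minimal illustration with hypothetical surge responses and probability masses, and the conditional probability term is reduced to a simple indicator (the ε error term is omitted):

```python
import numpy as np

def jpm_aep(surge_responses, prob_masses, thresholds):
    """Discrete JPM sum: AEP(r) = sum_i lambda_i * P[r(x_i) > r].
    With the error term omitted, P[.] is an indicator: each synthetic
    TC either exceeds the surge threshold r or it does not."""
    surge = np.asarray(surge_responses, dtype=float)
    lam = np.asarray(prob_masses, dtype=float)   # storms/yr per synthetic TC
    return np.array([lam[surge > r].sum() for r in thresholds])

# Hypothetical 4-storm set with probability masses in storms/yr.
aep = jpm_aep([2.0, 3.5, 5.0, 6.5], [0.05, 0.02, 0.005, 0.001], [3.0, 6.0])
```

Storms with surge above 3.0 m carry masses 0.02 + 0.005 + 0.001, so the first entry is 0.026 storms/yr; in practice the error distribution f_ε spreads each storm's mass across a range of responses rather than a single value.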

Probabilistic Storm Surge Model Description (Example: JPM)

- The application of each surge model typically involves the implementation of several types of analyses.
- For each type of analysis, several approaches may be available.

[Logic tree figure, using JPM as an example: JPM integration (JPM-Reference, JPM-OS (RS, BQ, Hybrid), JPM-STM); data sources (observed HURDAT, reanalysis, synthetic (GCM, STM)); SRR models (GKF, UKF, EKF); parameterization; screening (period of record, intensity); statistical approach (parametric vs. non-parametric); distribution of error; JPM integral (standard discretization, random sampling, Gaussian redistribution). Branches are mapped to Tasks 2-5 and their dependencies.]

Project Tasks

- Task 1: Literature Review
- Task 2: Investigation of Epistemic Uncertainties in Storm Recurrence Rate Models
- Task 3: Explore Technically Defensible Data, Models, and Methods for Defining Joint Probability of Storm Parameters
- Task 4: Explore Technically Defensible Models and Methods for Generating Synthetic Storm Simulation Sets
- Task 5: Investigate Approaches for Probabilistic Modeling of Numerical Surge Simulation Errors
- Task 6: Synthesis
- Task 7: Transfer of Knowledge
- Task 8: Final Report Preparation

Classification of Uncertainty

- Two classifications of uncertainty are typically recognized:
  - Aleatory: natural randomness of a process; not reducible.
  - Epistemic: lack of knowledge about the validity of models and data for the representation of the real system; can be reduced.
- The classification scheme can be subjective.
- Traditional uncertainty classification in JPM-OS models:
  - The epistemic uncertainty is related to the specific methods and models used in each study.
  - Limited to the inclusion of uncertainty as an error term in the JPM integral; e.g., meteorological modeling, hydrodynamic modeling, idealized storm track variation, and limited variation in wind and pressure profiles.

Classification of Uncertainty (cont.)

- Treatment of uncertainty in the present study:
  - Follows probabilistic seismic hazard analysis (PSHA).
  - Differences between a given numerical model and the natural phenomenon are prevalent (error term) → aleatory.
  - It is in the selection and application of alternative data, methods, and models that the uncertainty can be reduced → epistemic.
  - Epistemic uncertainty is quantified and propagated through a logic tree approach.
- General study objectives regarding uncertainty:
  - Identification of technically defensible data sources, models, and methods.
  - Assess whether estimates derived from different data, models, and methods need to be carried forward for evaluation of epistemic uncertainty, discarding those not considered technically defensible.

Task 2: Epistemic Uncertainty in SRR Models
Task Description

- Data sources and methods used for the computation of site-specific storm recurrence rate (SRR) models.
- Topics:
  - Technically defensible data sources for use in site-specific studies (e.g., NOAA's HURDAT).
  - Appropriate models for estimation of SRR, e.g., Gaussian kernel function (GKF), uniform kernel function (UKF), and Epanechnikov kernel function (EKF).
  - Methods for screening historical data and assessing geographic variation in support of site-specific estimation of SRR (e.g., selection of historical period of record and TC binning by intensity).
  - Investigate SRR aleatory uncertainty and whether SRR estimates derived from multiple datasets or methods need to be propagated.

What is a Storm Recurrence Rate?

SRR definition:
- A measure of the frequency with which a particular location is expected to be affected by TCs.
- Typically expressed in units of storms per year per unit distance along the shoreline (e.g., storms/yr/km).
- Can also be stated as the number of storms per year passing within a radius of x km, e.g., SRR_200km (storms/yr).
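The SRR_x-km formulation above is a simple count; a minimal sketch, with hypothetical track distances and record length standing in for screened HURDAT data:

```python
import numpy as np

def srr_capture_zone(track_distances_km, record_years, radius_km=200.0):
    """SRR stated as storms/yr passing within `radius_km` of the site
    (the SRR_200km form on this slide)."""
    d = np.asarray(track_distances_km, dtype=float)
    return np.count_nonzero(d <= radius_km) / record_years

# Hypothetical record: 30 tracks within 200 km plus 2 beyond, over 150 yr.
srr_200 = srr_capture_zone(np.r_[np.linspace(5.0, 195.0, 30), [250.0, 400.0]], 150.0)
```

Thirty qualifying tracks over 150 years give 0.2 storms/yr; dividing instead by the capture-zone width converts this to the storms/yr/km density form.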

Task 2: Epistemic Uncertainty in SRR Models
Numerical Experiments

- Uncertainty comparison and quantification of three models for the estimation of SRR:
  - Uniform kernel function (capture zone approach)
  - Gaussian kernel function (Chouinard et al. 1997)
  - Epanechnikov kernel function
- SRR variability related to the selection of optimal kernel size.
- SRR variability arising from selection of the period of record.
- SRR variability through the analysis of subsets of data through bootstrap resampling.
- Observation or measurement uncertainty in TC data.
- Effect of data partition (by TC intensity) on SRR uncertainty.

Models for Estimation of SRR: Comparison of Kernel Functions

- UKF estimates tend to be unstable and highly sensitive to data clusters.
- GKF exhibits the highest smoothing, while the EKF curve is closer to the UKF curve.
- GKF considers storms past the kernel size distance. The optimal kernel size should be large enough to maximize use of data while avoiding sampling from multiple TC populations.
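The contrast between the uniform (capture zone) and Gaussian kernels can be sketched as follows. The weighting follows the general kernel-density idea of Chouinard et al. (1997), but the kernel size, distances, and normalization here are illustrative assumptions, not the study's exact formulation:

```python
import numpy as np

def srr_uniform_kernel(distances_km, record_years, h_km=100.0):
    """Uniform-kernel (capture zone) SRR density (storms/yr/km):
    tracks beyond h_km contribute nothing, which makes the estimate
    sensitive to data clusters near the cutoff."""
    d = np.asarray(distances_km, dtype=float)
    return np.count_nonzero(d <= h_km) / (2.0 * h_km * record_years)

def srr_gaussian_kernel(distances_km, record_years, h_km=100.0):
    """Gaussian-kernel SRR density (storms/yr/km): every track carries
    a weight that decays smoothly with distance, so storms past the
    kernel size still contribute a little, yielding a smoother and
    more stable estimate."""
    d = np.asarray(distances_km, dtype=float)
    w = np.exp(-0.5 * (d / h_km) ** 2) / (np.sqrt(2.0 * np.pi) * h_km)
    return w.sum() / record_years
```

A track at 150 km raises the Gaussian estimate but is invisible to a 100-km capture zone, which is exactly the cluster sensitivity noted above.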

Task 2: Epistemic Uncertainty in SRR Models
Findings

- The GKF is a better method for estimating SRR compared to the UKF (capture zone):
  - GKF can consider a larger number of storms than the UKF model.
  - For the same ranges of optimal uniform and Gaussian kernel sizes, GKF estimates exhibited a reduced coefficient of variation (CV) compared to UKF.
- The lowest SRR uncertainties were observed in North Carolina, Florida, Mississippi, and Louisiana, while the U.S. coast north of Virginia exhibited the largest uncertainties.
- Typically, larger samples result in a reduction of uncertainty and therefore in reduced sensitivity to model decisions.

Task 2: Epistemic Uncertainty in SRR Models
Findings (cont.)

- SRR_200km for low, high, and critical intensity TCs.
- Low intensity (28 hPa < Δp < 48 hPa), high intensity (48 hPa < Δp < 68 hPa), critical intensity (Δp > 68 hPa).

Task 2: Epistemic Uncertainty in SRR Models
Findings (cont.)

- Variation of SRR_200km with record length.

Task 2: Epistemic Uncertainty in SRR Models
Findings (cont.)

- Total uncertainty was calculated as the combination of the individual contributors [equation not legible in the transcription].
- In general, sampling uncertainty was the main contributor to total uncertainty, followed by period of record selection; observational uncertainty was the lowest contributor.

Percent of total uncertainty by TC intensity bin:

Type of Uncertainty   | Δp < 28 hPa | 28 hPa < Δp < 48 hPa | 48 hPa < Δp < 68 hPa | Δp > 68 hPa
Sampling uncertainty  | 65          | 62                   | 71                   | 75
Period of record      | 19          | 12                   | 12                   | 7
Gaussian kernel size  | 15          | 14                   | 3                    | 4
Observational data    | 1           | 12                   | 14                   | 14

Task 3: Data, Models, and Methods for Defining Joint Probability of Storm Parameters
Task Description

- Identification of technically defensible TC parameter data sources, screening methods, and parameterization schemes for development of probability distributions.
- Topics:
  - Technically defensible data sources for use in site-specific studies, including observational, reanalysis, and synthetic data sources.
  - Data screening methods for development of probability distributions, and evaluation criteria for selecting TCs from historical records or synthetic datasets.
  - Selection of probability distribution, associated uncertainties, parameter correlations, and adequacy of forcing parameters.
  - Identification of alternate data and methods, and evaluation of which of these need to be considered to account for epistemic uncertainty.

Task 3: Data, Models, and Methods for Defining Joint Probability of Storm Parameters
Example of Logic Tree Approach

Task 3: Data, Models, and Methods for Defining Joint Probability of Storm Parameters
Numerical Experiments

- Basic unit of analysis: fitting of univariate distributions to TC parameters: θ, Δp, R_max, V_t.
- The basic analysis (3,500 fits) was performed to evaluate different methods, models, and data represented in the logic tree branches:
  - Parameterization: standard (Δp) or alternate (W_max)
  - Data source: HURDAT2, GCM synthetics, EBTRK reanalysis
  - Landfalling or bypassing
  - Statistical models: parametric, non-parametric
  - Analysis by intensity: all TCs, high intensity, and low intensity
- Assessment of fits: goodness-of-fit tests, RMSD, magnitude of sampling uncertainty, visual inspection of plots.

Task 3: Data, Models, and Methods for Defining Joint Probability of Storm Parameters
Parametric Distributions

- TC parameters were fit using Generalized Extreme Value, Generalized Pareto, Gumbel, Normal, Lognormal, Weibull, Gamma, and Exponential distributions; Δp example shown.
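The per-parameter fitting step can be sketched with scipy.stats. The Δp sample below is synthetic, standing in for screened HURDAT2/EBTRK data; the candidate list is a subset of the distributions named on this slide, and a Kolmogorov-Smirnov p-value stands in for the fuller goodness-of-fit assessment used in the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic central pressure deficit sample (hPa), for illustration only.
dp = stats.weibull_min.rvs(1.8, loc=25.0, scale=30.0, size=120, random_state=rng)

candidates = {
    "weibull_min": stats.weibull_min,
    "gumbel_r": stats.gumbel_r,
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
}

pvalues = {}
for name, dist in candidates.items():
    params = dist.fit(dp)                          # maximum-likelihood fit
    pvalues[name] = stats.kstest(dp, dist.cdf, args=params).pvalue

best = max(pvalues, key=pvalues.get)               # highest KS p-value
```

Per the General Findings slide, how well each candidate reproduces the low-frequency tail matters more than its bulk fit, so a real comparison would also examine quantile-quantile behavior at small exceedance probabilities.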

Task 3: Data, Models, and Methods for Defining Joint Probability of Storm Parameters
Comparison of R_max Probability Distributions

[Figure panels: EBTRK reanalysis (Demuth et al. 2006); Vickery and Wadhera (2008) stochastic model; GCM downscaling (Lin et al. 2012).]

- Similar curves result from the Vickery model and the EBTRK reanalysis.
- The GCM plot suggests that extratropical transition of TCs is not being adequately represented.

Task 3: Data, Models, and Methods for Defining Joint Probability of Storm Parameters
Non-Parametric Distributions

- TC parameters were fit using the following kernel functions: Normal, Epanechnikov, Uniform, Triangular.
- R_max data from the EBTRK reanalysis (Demuth et al. 2006).
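A non-parametric fit of this kind can be sketched with a kernel density estimate; scipy's gaussian_kde implements the Normal kernel from the list above, and the R_max sample here is synthetic rather than EBTRK data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic Rmax sample (km), standing in for EBTRK reanalysis values.
rmax = rng.gamma(shape=4.0, scale=12.0, size=200)

kde = stats.gaussian_kde(rmax)       # Normal kernel, rule-of-thumb bandwidth
grid = np.linspace(0.0, rmax.max() * 1.5, 400)
density = kde(grid)

# Trapezoid check that the estimated density integrates to ~1.
area = float(np.sum((density[:-1] + density[1:]) * np.diff(grid)) / 2.0)
```

Swapping in an Epanechnikov, uniform, or triangular kernel changes the local weighting but, for adequate sample sizes, usually not the broad shape of the fitted density.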

Task 3: Data, Models, and Methods for Defining Joint Probability of Storm Parameters
General Findings

- The most relevant factor in choosing the distribution type was how well it described the low-frequency tails.
- More than one statistical distribution could be valid for a given TC parameter.
- When more than one distribution was viable, a comparison of the fits usually did not reveal significant differences.
- The sampling technique used for the generation of synthetic TCs may lessen the significance of selecting a given probability distribution for large discretization intervals.
- The judgment of whether to carry forward a given dataset, method, or model was found to be highly dependent on the location.

Task 3: Data, Models, and Methods for Defining Joint Probability of Storm Parameters
General Findings (cont.)

- The main contributor to uncertainty associated with the probability distributions was the quantity of data.
- When TCs were classified as high and low intensity, uncertainty for high-intensity TCs increased with latitude.

[Figure: high-intensity lognormal distribution fits of R_max EBTRK data for various Atlantic coast locations.]

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Task Description

- Capture the full range of technically defensible data and methods for generating the synthetic storm sets required to fully characterize and propagate uncertainties in storm surge estimates.
- Topics:
  - Evaluation of discretization methods used to generate synthetic storms for numerical storm surge modeling.
  - Effect of the refinement of the discretization of the parameter space on uncertainty.
  - Analyze the applicability of each method over a wide range of conditions and evaluate whether criteria can be established to assess situations where one method is superior to others.
  - Evaluate the merits of the studied approaches and analyze whether estimates derived from different methods need to be considered in the quantification of the epistemic uncertainty.

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Numerical Experiments

- Use results from the North Atlantic Coast Comprehensive Study (Nadal-Caraballo et al. 2015), where hybrid JPM-OS was used.
  - Compare to JPM-OS-RS and JPM-OS-BQ.
- Generate a JPM "Reference" set or "Gold Standard" as the basis for comparison with other methods and TC sets.
- Perform various Monte Carlo simulations for development of storm surge hazard curves:
  - Monte Carlo Life-Cycle (MCLC)
  - Monte Carlo Integration (MCI)
- Develop a storm surge hazard curve using the TC parameter set from the existing GCM downscaling study performed by Lin et al. (2012) for The Battery, NY.

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Reference Set

- A large set of synthetic storms was generated to represent the traditional JPM approach.
- A Gaussian process metamodel (GPM) (Jia et al. 2016) was used to develop tens of thousands of TCs.
  - The GPM is conceptually similar to the RS approach, where the initial discretization of the joint probability distribution is refined by regression or interpolation of storm surge from additional TC parameter combinations.
  - The GPM used in this study was trained using the 1,050 synthetic TCs developed as part of the NACCS (Nadal-Caraballo et al. 2015).
  - A total of 74,430 TCs were generated based on refined discretization and using unique parameter combinations.

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Reference Set: Hazard Curve

[Figure: storm surge hazard curve for the JPM Reference set.]

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Hybrid JPM-OS Methodology (NACCS)

- Uniform discretization: Δp and θ.
- Bayesian Quadrature (BQ): R_max and V_t.

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Monte Carlo Life-Cycle

- Univariate distributions of TC parameters were sampled for a 1,000,000-yr period, which resulted in 200,000 TCs.
- No probability masses are required for the TCs, since they are sampled based on their likelihood of occurrence and the joint probability of their parameters.
- Responses were evaluated through the GPM previously developed for the JPM Reference set.
- The storm surge hazard curve consists of the resulting empirical distribution (Weibull plotting position).
- A bootstrap resampling procedure using replicated storm surge values with added discretized uncertainty was used to calculate the mean hazard curve and to account for uncertainty.
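The MCLC construction of the empirical hazard curve can be sketched as below. The storm rate, surge distribution, and seed are hypothetical stand-ins for the sampled TC parameters and GPM surrogate responses; only the Weibull-plotting-position step follows the slide directly:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical life cycle: storms occur at an assumed 0.2 storms/yr over
# a simulated 1,000,000-yr period; surge draws stand in for GPM output.
years = 1_000_000
n_storms = rng.poisson(0.2 * years)
surge = rng.gumbel(loc=1.0, scale=0.8, size=n_storms)

# Empirical hazard curve via the Weibull plotting position: the m-th
# largest surge in T simulated years has AEP ~ m / (T + 1).
surge_sorted = np.sort(surge)[::-1]
aep = np.arange(1, n_storms + 1) / (years + 1.0)

def surge_at_aep(target_aep):
    """Interpolate the hazard curve at a target AEP."""
    return float(np.interp(target_aep, aep, surge_sorted))

s_1e2 = surge_at_aep(1e-2)   # ~100-yr surge
s_1e3 = surge_at_aep(1e-3)   # ~1,000-yr surge
```

Bootstrap resampling of the life cycle, with discretized error added to each replicated surge, would then yield the mean hazard curve and its confidence limits as described above.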

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
MCLC: Hazard Curve

[Figure: MCLC storm surge hazard curve.]

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Monte Carlo Integration (MCI) (Wyncoll and Gouldby 2015)

- Probabilities are calculated as the percentage of TCs with response greater than a set of surge elevation bins. No probability masses were used.
- λ_{C > c} = (L_c / L) · λ, where L_c is the number of Monte Carlo realizations that exceed c, L is the total number of Monte Carlo realizations, and λ is the sample intensity (storms/yr).
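The MCI estimator above reduces to a counting operation; a minimal sketch with hypothetical realizations and sample intensity:

```python
import numpy as np

def mci_hazard(surge_mc, sample_rate, thresholds):
    """Monte Carlo integration of the hazard curve:
    AEP(c) = (L_c / L) * lambda, with L_c the number of realizations
    exceeding c, L the total count, and lambda the sample intensity
    (storms/yr)."""
    s = np.asarray(surge_mc, dtype=float)
    return np.array([np.count_nonzero(s > c) / s.size * sample_rate
                     for c in thresholds])

# Hypothetical: 8 realizations at a sample intensity of 0.5 storms/yr.
aep = mci_hazard([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5], 0.5, [2.2, 4.2])
```

Here 5 of 8 realizations exceed 2.2 m, giving 5/8 × 0.5 = 0.3125 storms/yr for that bin.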

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
MCI: Hazard Curve

[Figure: MCI storm surge hazard curve.]

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
GCM Downscaling (Lin et al. 2012)

- Storm surges are determined using a GCM-driven statistical/deterministic hurricane model with hydrodynamic surge models.
- Synthetic TC tracks are generated according to large-scale atmospheric and ocean environments rather than historical TCs.
- The data set consists of 1,470 tracks out of an original number of 5,000 modeled tracks covering the time period 1970-2010.
- The storm surge responses were simulated from the forcing parameters of the GCM tracks and used as input to the GPM previously trained with NACCS results.
- A stochastic simulation technique (SST) consisting of combined empirical and GPD fits was applied to the storm surge values.

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Comparison of Hazard Curves

- GCM downscaling results published by Lin et al. (2012).
- TCs simulated from GCM forcing, using the NACCS-trained GPM.

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
GCM Downscaling: Hazard Curve

[Figure: GCM downscaling storm surge hazard curve.]

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Comparison of Results

Storm surge (m) at The Battery, NY, by annual exceedance probability (AEP column headers not legible in the transcription):

Method                      | Surge (m)
JPM Reference (74,430 TCs)  | 3.7  5.4  6.4  7.0  7.5
JPM-OS (1,050 TCs)          | 3.0  4.7  6.2  6.9  7.5
MCLC (211,997 TCs)          | 3.3  4.8  5.9  6.7  7.5
MCI (211,997 TCs)           | 3.3  4.8  5.9  6.7  8.3
GCM Downscaling (1,470 TCs) | 1.8  3.1  4.1  4.9  5.5

All surge only, with uncertainty max(20%, 0.61 m).

Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Comparison of Results

Percentage difference in storm surge at The Battery, NY, relative to the JPM Reference (AEP column headers not legible in the transcription):

Method                      | Difference (%)
JPM Reference (74,430 TCs)  | --   --   --   --   --
JPM-OS (1,050 TCs)          | -18  -13  -3   -2   0
MCLC (211,997 TCs)          | -9   -10  -8   -4   0
MCI (211,997 TCs)           | -9   -10  -8   -5   10
GCM Downscaling (1,470 TCs) | -50  -42  -35  -30  -26

All surge only, with uncertainty max(20%, 0.61 m).

Task 5: Approaches for Probabilistic Modeling of Numerical Surge Simulation Errors
Task Description (in progress)

- Capture the full range of technically defensible data and methods required to fully characterize and propagate uncertainties in storm surge estimates.
- Topics:
  - Evaluation of methods for distribution of uncertainty.
  - Effect of neglecting to include the error term.
  - The error associated with exclusion, or simplified inclusion, of terms from the JPM-OS integral to reduce dimensionality.
  - Errors due to the lack of skill of numerical meteorological and storm surge modeling.
  - Evaluation of alternate methods for distributing the uncertainty in the joint probability integral.

Task 5: Approaches for Probabilistic Modeling of Numerical Surge Simulation Errors
Numerical Experiments

- Compare different approaches to the characterization of uncertainty:
  - Constant uncertainty (e.g., 0.61 m)
  - Proportional uncertainty (e.g., 20%)
  - Combined constant and proportional uncertainty (e.g., min(20%, 0.61 m))
- Two basic discretization approaches for the uncertainty will be tested for JPM and for Monte Carlo simulation methods:
  - Representation of the Gaussian distribution using X discrete values and replicating storm surges X times, prior to performing the JPM integration.
  - Randomly sampling X values from the Gaussian distribution and repeating the process stated above.
- The significance of different numbers of discrete values (or random samples) from the Gaussian distribution will be evaluated by comparing results using 30, 100, 300, 1,000, and 3,000 values.
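The two discretization approaches can be sketched side by side. The function names and example rates are hypothetical; the discrete variant places the Gaussian error at equal-probability quantile midpoints, while the random variant draws from the same Normal distribution:

```python
import numpy as np
from scipy import stats

def replicate_discrete(surges, rates, sigma, n_values):
    """Replicate each surge n_values times with error values at
    equal-probability Gaussian quantile midpoints, splitting each
    storm's probability mass evenly across the replicates."""
    q = (np.arange(n_values) + 0.5) / n_values
    eps = stats.norm.ppf(q, scale=sigma)
    s = (np.asarray(surges, dtype=float)[:, None] + eps).ravel()
    lam = np.repeat(np.asarray(rates, dtype=float) / n_values, n_values)
    return s, lam

def replicate_random(surges, rates, sigma, n_values, rng):
    """Same replication, but with random Normal draws for the error."""
    eps = rng.normal(0.0, sigma, size=(len(surges), n_values))
    s = (np.asarray(surges, dtype=float)[:, None] + eps).ravel()
    lam = np.repeat(np.asarray(rates, dtype=float) / n_values, n_values)
    return s, lam

# Two storms, constant uncertainty of 0.61 m, 30 discrete values each.
s, lam = replicate_discrete([3.0, 5.0], [0.02, 0.005], 0.61, 30)
```

Either way, the replicated set feeds directly into the JPM sum or a Monte Carlo method, and the total storm rate is preserved by construction.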

Task 5: Approaches for Probabilistic Modeling of Numerical Surge Simulation Errors
Results: Characterization of Uncertainty

Storm surge (m) at The Battery, NY, by annual exceedance probability (AEP column headers not legible in the transcription):

Uncertainty model  | Surge (m)
max(20%, 0.61 m)   | 3.0  4.7  6.2  6.9  7.5
Constant (0.61 m)  | 3.1  4.7  6.2  6.9  7.5
Proportional (20%) | 3.0  4.7  6.7  8.0  9.0

Percentage difference in storm surge at The Battery, NY, relative to the max(20%, 0.61 m) case:

Uncertainty model  | Difference (%)
max(20%, 0.61 m)   | --  --  --  --  --
Constant (0.61 m)  | 4   0   0   0   0
Proportional (20%) | 0   1   8   15  20

Task 5: Approaches for Probabilistic Modeling of Numerical Surge Simulation Errors
Results: Characterization of Uncertainty

[Figure panels: constant uncertainty 0.61 m; uncertainty min(20%, 0.61 m); proportional uncertainty 20%.]

Task 5: Approaches for Probabilistic Modeling of Numerical Surge Simulation Errors
Results: Integration of Uncertainty

Storm surge (m) at The Battery, NY, by annual exceedance probability (AEP column headers not legible in the transcription):

Integration approach              | Surge (m)
Discrete (444 values)             | 3.0  4.7  6.2  6.9  7.5
Discrete (30 values)              | 3.0  4.7  6.2  7.0  NaN
Discrete (3,000 values)           | 3.0  4.7  6.2  6.9  7.4
Random (444 values)               | 3.1  4.8  6.3  7.1  7.7
Random (30 values)                | 2.9  4.7  6.1  7.5  NaN
Random (3,000 values)             | 3.0  4.7  6.1  6.9  7.4
SurgeStat "Redistribution" (FEMA) | 3.0  4.7  6.2  6.9  7.5

Task 5: Approaches for Probabilistic Modeling of Numerical Surge Simulation Errors
Results: Integration of Uncertainty

Percentage difference in storm surge at The Battery, NY, relative to the Discrete (444 values) case:

Integration approach              | Difference (%)
Discrete (444 values)             | --  --  --  --  --
Discrete (30 values)              | 0   0   0   1   NaN
Discrete (3,000 values)           | 0   0   0   0   -1
Random (444 values)               | 2   2   1   2   2
Random (30 values)                | -3  0   -1  8   NaN
Random (3,000 values)             | -1  -1  0   -1  -1
SurgeStat "Redistribution" (FEMA) | 0   0   0   0   -1

References

- Chouinard, L., C. Liu, and C. Cooper. 1997. Model for Severity of Hurricanes in Gulf of Mexico. Journal of Waterway, Port, Coastal, and Ocean Engineering 123(3): 120-129.
- Demuth, J., M. DeMaria, and J.A. Knaff. 2006. Improvement of Advanced Microwave Sounder Unit Tropical Cyclone Intensity and Size Estimation Algorithms. Journal of Applied Meteorology and Climatology 45: 1573-1581.
- Jia, G., A.A. Taflanidis, N.C. Nadal-Caraballo, J.A. Melby, A.B. Kennedy, and J.M. Smith. 2016. Surrogate Modeling for Peak or Time-Dependent Storm Surge Prediction over an Extended Coastal Region Using an Existing Database of Synthetic Storms. Natural Hazards 81(2): 909-938.
- Lin, N., K. Emanuel, M. Oppenheimer, and E. Vanmarcke. 2012. Physically Based Assessment of Hurricane Surge Threat under Climate Change. Nature Climate Change 2(6): 462-467.
- Nadal-Caraballo, N.C., J.A. Melby, V.M. Gonzalez, and A.T. Cox. 2015. North Atlantic Coast Comprehensive Study – Coastal Storm Hazards from Virginia to Maine. ERDC/CHL TR-15-5. Vicksburg, MS: U.S. Army Engineer Research and Development Center.
- Vickery, P.J., and D. Wadhera. 2008. Statistical Models of Holland Pressure Profile Parameter and Radius to Maximum Winds of Hurricanes from Flight-Level Pressure and H*Wind Data. Journal of Applied Meteorology and Climatology 47(10): 2497-2517.
- Wyncoll, D., and B. Gouldby. 2015. Integrating a Multivariate Extreme Value Method within a System Flood Risk Analysis Model. Journal of Flood Risk Management 8(2): 145-160.

Contact Information

U.S. Army Engineer R&D Center, Coastal and Hydraulics Laboratory
Norberto C. Nadal-Caraballo, Ph.D.
Phone: (601) 634-2008
Email: Norberto.C.Nadal-Caraballo@usace.army.mil

U.S. Nuclear Regulatory Commission
Joseph F. Kanney, Ph.D.
Phone: (301) 980-8039
Email: Joseph.Kanney@nrc.gov

