8 Stochastic Versus Deterministic Approaches

Philippe Renard¹, Andres Alcolea², and David Ginsbourger³

¹ Centre d'Hydrogéologie, Université de Neuchâtel, Switzerland
² Geo-Energie Suisse, Basel, Switzerland
³ Department of Mathematics and Statistics, University of Bern, Switzerland

8.1 Introduction

In a broad sense, modelling refers to the process of generating a simplified representation of a real system. A suitable model must be able to explain past observations, integrate present data and predict with reasonable accuracy the response of the system to planned stresses (Carrera et al., 1987). Models have evolved together with science, and nowadays modelling is an essential and inseparable part of scientific activity. In environmental sciences, models are used to guarantee suitable conditions for sustainable development and are a pillar for the design of social and industrial policies.

Model types include analogue models, scale models and mathematical models. Analogue models represent the target system by another, more understandable or analysable system. These models rely on Feynman's principle (Feynman et al., 1989, sec. 12-1): 'The same equations have the same solutions.' For example, the electric/hydraulic analogy (Figure 8.1a) establishes the parallelism between voltage and water-pressure difference, or between electric current and flow rate of water. Scale models are representations of a system that is larger or (most often) smaller than the actual size of the system being modelled. Scale models (Figure 8.1b) are often built to analyse physical processes in the laboratory or to test the likely performance of a particular design at an early stage of development, without incurring the full expense of a full-sized prototype. Notwithstanding the use of these types of models in other branches of science and engineering, the most popular models in environmental sciences are mathematical.
A mathematical model describes a system by a set of state variables and a set of equations that establish relationships between those variables and the governing parameters. Mathematical models can be analytical or numerical. Analytical models often require many simplifications to render the equations amenable to solution. Numerical models, instead, are more versatile and make use of computers to solve the equations.

Mathematical models (either analytical or numerical) can be deterministic or stochastic (from the Greek στόχος for 'aim' or 'guess'). A deterministic model is one in which state variables are uniquely determined by parameters in the model and by sets of previous states of these variables. Therefore, deterministic models perform the same way for a given set of parameters and initial conditions, and their solution is unique. Nevertheless, deterministic models are sometimes unstable: small perturbations (often below the detection limits) of the initial conditions or of the parameters governing the problem lead to large variations in the final solution (Lorenz, 1963). Thus, despite the fact that the solution is unique, one can obtain solutions that are dramatically different by slightly perturbing a single governing parameter or the initial condition at a single point of the domain.

Environmental Modelling: Finding Simplicity in Complexity, Second Edition. Edited by John Wainwright and Mark Mulligan. 2013 John Wiley & Sons, Ltd. Published 2013 by John Wiley & Sons, Ltd.

Figure 8.1 Types of models: (a) electrical analogue model of the groundwater flow in the Areuse catchment in Switzerland (device built by J. Tripet); (b) scale model of an aquifer (courtesy of F. Cornation).

Conversely, in a stochastic model, parameters are described by random variables or distributions rather than by a single value. Correspondingly, state variables are also described by probability distributions. Thus, a stochastic model yields a manifold of equally likely solutions, which allows the modeller to evaluate the inherent uncertainty of the natural system being modelled.

Mathematical models (either analytical or numerical, deterministic or stochastic) can also be classified as direct or inverse. Direct or forward modelling consists of obtaining the value of the state variables given a model structure and values or distributions of the parameters governing the state equations. Inverse modelling, instead, refers to the process of gathering information about the model and its parameters from measurements of what is being modelled (Carrera et al., 2005). In practice, the governing parameters and the model structure are highly uncertain. Thus, direct modelling is restricted mainly to academic purposes.
On the other hand, inverse modelling corresponds to the everyday situation in which measurements (of parameters, state variables, or both) are collected at a few selected locations in space and time, and a model structure and parameter distributions are inferred from those measurements.

Whether deterministic or stochastic, direct or inverse, modelling is a crucial step in environmental sciences. To mention just one example, the disposal of nuclear waste in deep geological formations requires the estimation of the potential environmental impact in the biosphere caused by a possible release of hazardous radionuclides. This problem requires detailed studies of their migration through the subsurface, including the use of numerical models to predict travel times and trajectories. A deterministic model assumes a certain geometry of the geological bodies, fractures, and so forth, and a deterministic (unique) spatial distribution of the parameters governing the model equations, for example, hydraulic conductivity and storativity. Thus, a deterministic model yields a unique prediction of the migration. As such, a radionuclide migrates (with probability one) to the biosphere following a 'single deterministic' trajectory and after a 'single deterministic' travel time. Unfortunately, it is impossible to obtain 'the perfect' characterization of geology, hydraulic conductivity, and so forth, because they are scarcely measured and, therefore, our knowledge is inherently uncertain. Even if one could gather the required information at every point in space and time, the model would still be uncertain due to the presence of measurement errors. Stochastic models acknowledge model uncertainties, including: (1) conceptual uncertainties, such as lack of knowledge about the dominant processes driving the modelled phenomenon; (2) measurement uncertainties due to the limited accuracy of instruments; and (3) uncertainties due to the scarcity or lack of measurements in space and time.
For instance, one can simulate the migration of the radionuclide using many different geological scenarios, accounting, for example, for the presence or absence of fractures. These simulations form a set of different predictions of the migration under different conditions, from which the modeller or the policy-maker can evaluate probabilities of occurrence of a given event (such as the probability that the radionuclide reaches the biosphere in less than 10 000 years). These events are characterized by

probability distributions from which statistical moments can be evaluated, such as the minimum travel time (i.e. the maximum time for human beings to react to the migration).

Despite the aforementioned advantages, the use of stochastic models has not been free of debate. Stochastic models are often surrounded by an aura of esoterism and, in the end, they are often ignored by most decision-makers, who prefer a single (deterministic) solution (Carrera and Medina, 1999; Renard, 2007). One might be tempted to give up and accept that stochastic processes are not amenable to the quantitative and qualitative assessment of modelling. However, it is precisely the large uncertainty associated with natural sciences that makes stochastic models necessary. The goal of this chapter is to propose a discussion of the strengths and weaknesses of deterministic and stochastic models and to describe their applicability in environmental sciences. The chapter starts by outlining some background concepts and philosophical issues behind deterministic and stochastic views of nature. We then present a summary of the most widespread methods. The differences between deterministic and stochastic modelling are illustrated by means of a real-world application in Oman. The chapter ends with a discussion and some recommendations about the use of models in environmental sciences.

8.2 A philosophical perspective

The laws of motion expounded by Newton (1687) state that the future of a system of bodies can be determined uniquely, given the initial position and velocity of each body and all acting forces. This radically deterministic approach has been applied extensively to environmental problems. For example, the flux of fluids (often groundwater) through a porous medium is usually described by Darcy's law (1856), which is analogous to Ohm's law in electricity or Fourier's law in thermal energy. As with most physical laws, it was first deduced from observations and later authenticated by a very large number of experiments. In groundwater hydrology, Darcy's law states that the flux of water q [L T⁻¹] through a unit surface [L²] is proportional to the gradient of hydraulic heads h (a potential, if water density is constant, that depends on water height and water pressure) and to a physical parameter k [L T⁻¹], termed hydraulic conductivity, that depends on the type of fluid and porous medium:

q = −k ∇h    (8.1)

The motion of groundwater is then described by the conservation principle, whose application leads to the very well known groundwater-flow equation. It states that the mass (or the volume, if the fluid is assumed incompressible) of water that enters an elementary volume of porous medium per unit time must be equal to the mass (or volume) of water that leaves that volume plus the mass (or volume) stored in the elementary volume. In terms of water volume and assuming constant density, the groundwater-flow equation can be expressed as:

−∇ · q + r(x) = Ss ∂h/∂t    (8.2)

where t [T] represents time, Ss [L⁻¹] is storativity, ∇ · q [T⁻¹] represents the divergence of the fluid flux (i.e., the difference between incoming and outgoing volumes of water), and r [T⁻¹] is a sink/source term that may be used to model, for example, the recharge of the aquifer after rainfall. Note that all these parameters are, indeed, heterogeneous in reality. Thus, they vary from one location in space to another. k and Ss can also vary in time if the aquifer changes due to changes in porosity caused by, for example, clogging or precipitation processes. Yet, these are often considered constant in time. Recharge, instead, is a parameter that clearly depends on time. Finally, the groundwater velocity is:

v = q/φ    (8.3)

where φ [−] is the effective porosity of the aquifer (the ratio of the volume of interconnected pores to the total volume of the aquifer).

As one can see, this velocity can be obtained unequivocally from precise values (or spatial distributions, if heterogeneity is accounted for) of the physical parameters k, Ss and φ, initial and boundary conditions, and sink/source terms (see also Chapter 5). Solving Equations (8.1) to (8.3) twice with equal ingredients leads to two identical solutions, without any room for randomness. This approach is in line with the arguments of the German mathematician and philosopher Leibniz, who quoted the Greek philosopher Parmenides of Elea (fifth century BCE) and stated the Principle of Sufficient Reason (Kabitz and Schepers, 2006): 'everything that is, has a sufficient reason for being and being as it is, and not otherwise.' In plain words, the same conditions lead to the same consequences. This strong defence of determinism was later softened by Leibniz himself (Rescher, 1991). As pointed out by Look (2008): 'most of the time these reasons cannot be known to us.' This sentence plays a crucial role in the remainder of this section.
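The deterministic character of Equations (8.1) to (8.3) can be illustrated with a small numerical sketch. Everything below is an assumption for illustration (the 1D discretization, the function name solve_head_1d and all parameter values are not from the chapter): it solves a homogeneous, steady-state, source-free version of the flow equation and shows that repeating the solve with identical inputs returns an identical head field, from which the Darcy flux follows.

```python
import numpy as np

def solve_head_1d(k, h_left, h_right, n=50):
    """Solve steady-state 1D flow, d/dx(k dh/dx) = 0, on a unit domain
    with fixed-head (Dirichlet) boundaries and homogeneous k."""
    # For homogeneous k and no sources the head profile is linear,
    # but we assemble and solve the discrete system to mimic a model.
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0          # boundary conditions
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):          # interior finite-difference rows
        A[i, i - 1] = A[i, i + 1] = k
        A[i, i] = -2.0 * k
    return np.linalg.solve(A, b)

# Same parameters and boundary conditions -> same (unique) solution.
h1 = solve_head_1d(k=1e-4, h_left=10.0, h_right=5.0)
h2 = solve_head_1d(k=1e-4, h_left=10.0, h_right=5.0)
assert np.allclose(h1, h2)

# Darcy flux q = -k dh/dx, evaluated numerically along the column.
x = np.linspace(0.0, 1.0, 50)
q = -1e-4 * np.gradient(h1, x)
```

Running the solve twice with equal ingredients leaves no room for randomness, exactly as the text states; the flux q is constant along the column because the head profile is linear.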

More than a century later, the French mathematician and physicist Laplace deeply influenced the philosophy of science with his thoughts about determinism, as detailed (somewhat ironically) in his treatise on probability theory (Laplace, 1820):

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes.

Reinterpreting Laplace's idea, stochastic methods can hence be seen as a complement to deterministic modelling in the case where 'some' parameters are unknown: epistemic uncertainty, as opposed to aleatory or natural uncertainty (Agarwal, 2008). Following the development of statistical mechanics by Boltzmann at the end of the nineteenth century, the rise of Planck and Bohr's quantum physics (Bohr, 1961) gave a new legitimacy to randomness in the natural sciences during the twentieth century, illustrated in the first place by Heisenberg's famous uncertainty principle (Reichenbach, 1944). Beyond epistemic uncertainty, it becomes sensible to assume that there exists an irreducible randomness in the behaviour of matter. To that, Einstein replied that 'God does not play dice with the universe' (Broglie, 1953): in his view, there is no room for such irreducible uncertainty. We prefer not to enter into this debate here and do not distinguish what is unpredictable from what is unknown but could be predicted with more information.
Coming back to the groundwater-flow example, it is now clear that even with the finest mathematical and physical description of the aquifer and the best computing facilities, modellers cannot predict the groundwater flow exactly unless perfect knowledge of the aquifer and its physical parameters is available (which is, indeed, never the case in practice). Some field (or laboratory) measurements of the governing parameters are usually available, and some expert knowledge is always inherited from prior studies. Thus, modellers can still use equation solvers, even though some parameters are unfortunately not known with accuracy. These parameters have to be guessed or estimated. Plugging these estimated parameters in yields a unique solution. Yet, this solution may display a dramatic departure from reality if the parameter estimates are not accurate.

Probability theory helps to alleviate epistemic uncertainty. Instead of a single (approximated) solution, the probabilistic approach provides a manifold of equally probable solutions reflecting the many possible values (or distributions) of the unknown parameters. Of course, all but at most one of the drawn values or distributions are wrong, as is also, almost surely, the aforementioned deterministic estimate. Yet, the manifold of plausible solutions is not aimed at perfectly reflecting reality. Instead, it is the diversity of solutions that constitutes a richer, composite piece of information: a probability law over the set of plausible solutions. Statistical moments can then be extracted from that probability law, such as the mean value (the expectation), the most frequent value (the mode), the quantification of the uncertainty associated with this expectation (the variance) or, in a general sense, a full probability density distribution. Hence, a probabilistic model aims at capturing both the average response of the system and the variability due to uncertainties of any kind.
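The probabilistic approach just described can be sketched with a minimal Monte Carlo experiment. All ingredients here are assumptions for illustration (the lognormal model for the hydraulic conductivity k, the toy forward model and all numerical values are not from the chapter): the unknown parameter is drawn many times, each draw is propagated through a simple Darcy-type model, and the resulting manifold of solutions is summarized by its statistical moments.

```python
import numpy as np

rng = np.random.default_rng(42)

# Epistemic uncertainty: k is not known exactly, so it is modelled as a
# lognormal random variable (a common, here purely illustrative, choice).
log_k_mean, log_k_sd = np.log(1e-4), 0.5
n_sim = 10_000
k_samples = rng.lognormal(log_k_mean, log_k_sd, n_sim)

# Toy forward model: Darcy flux through a unit column with a fixed head
# drop of 5 m over a length of 1 m, q = k * dh / L.
q_samples = k_samples * 5.0 / 1.0

# The manifold of equally probable solutions is summarized by moments
# and quantiles rather than by a single deterministic value.
q_mean = q_samples.mean()
q_std = q_samples.std()
q_05, q_95 = np.quantile(q_samples, [0.05, 0.95])
print(q_mean, q_std, q_05, q_95)
```

Each of the 10 000 fluxes is one equally likely answer; the mean, variance and quantiles extracted at the end are exactly the kind of statistical moments the text refers to.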
Producing a reliable picture of this law requires some information and a suitable probabilistic representation of the underlying unknown parameters. These are, respectively, problems of stochastic modelling and of statistical inference:

- Stochastic modelling assumes that the unknown parameters have been generated by some random mechanism, and strives to mathematically characterize or partially describe this mechanism (see de Marsily, 1994). The latter can be achieved, for instance, by assuming some parametric multivariate statistical distribution for the set of unknown parameters (in a broad sense, of all input variables including, for example, the boundary and initial conditions defining the mathematical model).
- Statistical inference aims at estimating the parameters of a stochastic model on the basis of observed data. This phase, which is deeply interwoven with the chosen stochastic model and the available measurements, has inspired numerous research works of reference (Fisher, 1990) and is still a controversial issue nowadays (Berger, 1985) (see also Chapter 7).

In Earth sciences, Matheron (1989) pointed out the difficulty of building a suitable model and making appropriate statistical inferences based only on some observations taken from a unique realization of the phenomenon (assumed to be generated at random). Indeed, how could one come back to Bernoulli's law of 'heads or tails'

by observing the result of a single coin flip? The same question arises when estimating the statistical law describing the spatial variability of a parameter such as, for example, the ore concentration in a gold mine. There is a strong dependence between neighbouring observations, so the inference of a reasonable stochastic model of the ore concentration requires a sufficiently diverse sample covering the different scales at stake. The guarantee (if any) of a successful statistical inference from a unique realization of a spatial process leads to the difficult but fundamental assumption of ergodicity (Matheron, 1969). In the same line of argument, Matheron introduced the notion of operatory reconstruction for spatial methods: in order to reach objective conclusions, the randomness attributed to a stochastic model should potentially be reconstructed from the unique observable reality.

8.3 Tools and methods

In the following, we will distinguish between (1) statistical models, based on statistical concepts only; (2) deterministic models, yielding a 'single best solution'; and (3) stochastic models, yielding a manifold of equally likely solutions. However, the reader should bear in mind that this classification is not unique, but is just aimed at clarifying concepts. For instance, both deterministic and stochastic models make use of statistical concepts. Stochastic models are another counterexample breaking the classification: they are often formulated by a stochastic partial differential equation, yet they can also make use of a deterministic equation and solve it a number of times using different parameters or initial conditions drawn from a prior statistical model (i.e. a probability-density function designed from available observations).
This section is aimed at describing the strengths and weaknesses of these model types.

8.3.1 Statistical models

When a large set of field observations or measurements is available, the first step is to figure out their statistical distribution, which allows us to quantify the degree of variability of the variable under study and to investigate whether it can be summarized by a simple statistical distribution. In this perspective, the variable of interest, X, is modelled as a random variable. For example, X can be the lifetime of a radionuclide. It is well known that not all radionuclides of the same family will decay in the same manner and exactly at the same time. Indeed, this phenomenon presents some variability. Furthermore, the fact that a radionuclide gets older does not make it more amenable to undergo decay. This phenomenon is the so-called absence of memory. Yet, despite the unpredictable nature of decay, it is still possible to define a mean lifetime.
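The 'absence of memory' of decay can be checked numerically. In a minimal sketch, assuming an exponential lifetime model with an arbitrary mean of 10 time units (both the distributional choice and all numbers are illustrative, not from the chapter), the probability of surviving a further u time units is the same whether or not the radionuclide has already survived s units:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential lifetimes: the standard memoryless model for decay times.
mean_life = 10.0                      # illustrative mean lifetime
t = rng.exponential(mean_life, 1_000_000)

# Memorylessness: P(T > s + u | T > s) == P(T > u).
s, u = 5.0, 3.0
p_cond = (t > s + u).sum() / (t > s).sum()   # conditional survival
p_plain = (t > u).mean()                     # unconditional survival
print(p_cond, p_plain)   # both close to exp(-u / mean_life)
```

The two empirical probabilities agree (up to sampling noise) with the theoretical value exp(−u/mean_life), so an old radionuclide is no more likely to decay in the next instant than a fresh one, while the mean lifetime itself remains perfectly well defined.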

