Environmental Problems, Uncertainty, and Mathematical Modeling


John W. Boland, Jerzy A. Filar, and Phil G. Howlett

Notices of the AMS, Volume 57, Number 10 (November 2010)

In this paper we discuss three rather special characteristics shared by many environmental problems. Namely, that (i) the environmental variables in which we are most interested constitute a stochastic process; (ii) the long-term or limiting behavior and the short-term or transient behavior are often both important; and (iii) the underlying probability distributions are likely to be influenced by the environmental policies or remedies that we choose to impose. This third characteristic implies the need to understand the impact of technologies and controls that influence the dynamics of the system. The control theoretic perspective of environmental engineering problems has, we believe, received less attention than it deserves in the literature. Consequently, after a brief discussion of the exogenous, control-independent case we focus on illustrating some special challenges and opportunities embedded in the control-dependent situations.

John W. Boland is associate professor in environmental mathematics at the School of Mathematics and Statistics, University of South Australia. His email address is John.Boland@unisa.edu.au. Jerzy A. Filar is Foundation Professor of Mathematics and Statistics at the School of Mathematics and Statistics, University of South Australia. His email address is Jerzy.Filar@unisa.edu.au. Phil G. Howlett is emeritus professor of mathematics, School of Mathematics and Statistics, University of South Australia. His email address is Phil.Howlett@unisa.edu.au. All three authors are affiliated with the Institute for Sustainable Systems and Technologies at the University of South Australia.

Throughout history humans have relied heavily on adaptation to and exploitation of the natural environment. An unintended consequence of civilization and the more recent industrial and technological revolutions has been an ever increasing reliance on industry and technology and a consequent neglect of the natural world as a legitimate planning tool of social and economic development. There are many exemplars. The establishment and growth of water-thirsty, air-conditioned cities such as Dubai and Las Vegas, in hostile desert surroundings, is one such stark reminder of our unbridled desire to dominate the environment and our inability to read the warning signs.

However, the acceleration of a multitude of adverse impacts of human development processes on the environment, including global climate change, the loss of biocapacity and biodiversity, the spread of pollution, and the depletion of natural resources, has, in recent years, served to mobilize public opinion in many countries to tackle environmental problems much more actively. As a result, industries and regulatory agencies in these countries are beginning to show real interest in minimizing undesirable environmental impacts of human activities.

A prerequisite for the design of effective adaptation and mitigation strategies will be to understand the underlying processes and the possible effects of policies and regulatory regimens. Consequently, the forthcoming decades will offer the scientific community unprecedented opportunity to contribute to the development and subsequent refinement of wide-ranging environmental remedies. The majority of these remedies will require evidence to support "proof of concept" before they can be adopted. The latter will often be obtained with

the help of quantitative (mathematical) models and supporting analysis. Thus there will be ample opportunity for us mathematicians to contribute to these solutions. Indeed, quantitative modeling of environmental problems will be done with or without us, thereby raising the spectre, if we are not engaged, of improper application of the input controls, incorrect interpretation of the output data, and subsequent misguided decision making.

A detailed discussion of minimal requirements that environmental problems should possess in order to make mathematical modeling meaningful is beyond the scope of this short contribution. However, in Filar 2002 [17] certain principles are identified that capture what an applied mathematician might call common sense. Violation of these principles should sound a warning that mathematical modeling may not be appropriate in these situations. The main point made in [17] is that whenever we develop a mathematical model of a phenomenon or a situation that is not itself a mathematical entity a certain, minimal, amount of "domain knowledge" is required. Although the latter may appear obvious, it is clear that in modeling certain environmental phenomena, for instance, the response of the oceans to the doubling or tripling of atmospheric CO2 concentrations, the issue of what constitutes domain knowledge is a challenging question in its own right.

Nonetheless, in this paper we shall assume that adequate domain knowledge is available and focus instead on three rather special characteristics shared by many environmental problems.
We will consider variables (e.g., levels of persistent contamination in a lake)

(1) which constitute a (possibly multidimensional) stochastic process $\{X_t\}$, $t \ge 0$;
(2) for which both the short-term or transient behavior and the long-term or asymptotic behavior are equally important; and
(3) where the underlying distributions of the random variables $X_t$ are likely to be continually, but only partially, influenced by the policies or remedies we are designing (e.g., regulations or emission filters).

The first of the above characteristics implies that the need to understand and manage risk is usually an essential part of the problem. The second implies that environmental remediation policies and technological remedies that are costly and unpopular in the short term require persuasive advocacy before they will be accepted and adopted. The third characteristic implies the need to understand the impact of the control functions $u(t)$ that influence the dynamics of the stochastic processes. History shows us that, often inadvertently, economic development policies may act as controls that influence the trajectory of key state variables in some important ecosystem. The need for advocacy and the importance of control have, we believe, received less attention than they deserve in the literature.
Consequently, after a very brief discussion of the exogenous, control-independent case, we focus on illustrating some special challenges and opportunities embedded in the control-dependent situations.

Extreme and Rare Events in the Exogenous, Uncontrolled Case

In this section we shall make a simplifying, and increasingly less acceptable, assumption that the majority of our most feared natural disasters such as hurricanes, floods, droughts, crop failures, and bush fires are independent of human activities. In this case, these disasters certainly constitute "extreme events" in the common statistical sense of the phrase.

Consequently, it is prudent to examine what the now classical "extremal value theory" has to offer in our context of modeling the probability of such events occurring. The origins of this theory, which has evolved out of the twin subjects of statistics and stochastic processes, date back to the seminal work of Fisher and Tippett [21] in the first half of the last century. By now, this challenging subject has grown enormously, with researchers following a number of fruitful lines of investigation. For a comprehensive modern text we refer the reader to Embrechts et al. [14].

However, before proceeding, we observe that a substantial portion of the theory of extremal events was motivated by financial considerations such as the "risk of ruin". Thus, to the extent that these techniques and concepts depend on accumulation of losses, they may not correspond very well to the types of problems that are most relevant in our context. For instance, and without in any way advocating the underlying connotations, the phrase "a miss is as good as a mile" captures some of the above distinction.
Thus, a severe flood at a level that does not breach existing levees presumably has little or no effect on the probability that future floods will breach these defenses. However, an investor who only just avoided ruin when the market had its last downturn has probably suffered such losses that his or her likelihood of avoiding ruin in the next downturn is severely reduced.

Due to the above considerations we will not discuss those aspects of the theory of extremal events that deal with sums of random variables exceeding certain thresholds¹ and will focus instead

¹It should be noted that such random sums could still be of interest in our context if, for instance, we were trying to analyze the accumulated degradation of certain natural protective barriers, such as Louisiana's "barrier islands" (e.g., see [13]). Thus the issue of partial sums of

on the aspects that deal with the properties of the so-called "extremal statistics". We shall now introduce some of the notation needed to make the discussion a little more precise.

Consider a sequence $\{X_n\}$, $n \in \mathbb{N}$, of independent identically distributed random variables (iid rv's, for short), all of which are distributed as a given random variable $X$ that has a cumulative distribution function $F$. The random variables of interest in the classical extremal value theory are the induced sequence of sample maxima

$$M_n := \max(X_1, X_2, \ldots, X_n) \quad \text{for } n \ge 2.$$

Note that the analysis of the sample minima, defined analogously, is not any different because of the identity $\min(X_1, X_2, \ldots, X_n) = -\max(-X_1, -X_2, \ldots, -X_n)$, and hence we shall restrict our discussion only to the sequence $\{M_n\}$, $n \in \mathbb{N}$. It is now clear that the distribution function of $M_n$ is simply

(1) $P(M_n \le x) = P(X_1 \le x, X_2 \le x, \ldots, X_n \le x) = F(x)^n.$

Since in this theory we are primarily interested in "failures" corresponding to the sample maxima exceeding certain thresholds, we shall be particularly interested in the "tail" of the distribution of $M_n$, namely in $P(M_n > x) = 1 - F(x)^n$. For our underlying distribution $F(x)$ the tail is defined simply by $\bar F(x) := 1 - F(x)$.

Of course, for small $n$, equation (1) provides a means of calculating the tail probabilities for the distribution of $M_n$, but for large $n$ a direct computation could be very cumbersome. Thus, a major thrust was made to derive asymptotic results that are in the spirit of the celebrated "Central Limit Theorem" of statistics, which states that

$$\frac{\sum_{i=1}^{n} X_i - n\mu}{\sigma\sqrt{n}} \Rightarrow Z,$$

as $n \to \infty$, where $X_1, \ldots, X_n$ are independent and identically distributed random variables with mean $\mu$ and standard deviation $\sigma$, and $Z$ is a standard normal random variable.

This naturally led to the question of whether it is possible to find constants $c_n > 0$ and $d_n$ such that for some nondegenerate probability distribution $H$

(2) $\frac{M_n - d_n}{c_n} \Rightarrow H,$

in distribution, as $n$ tends to infinity.
Clearly, if (2) holds, then the equation

(3) $P\left(\frac{M_n - d_n}{c_n} \le x\right) = P(M_n \le u_n) \rightarrow H(x),$

where $u_n = c_n x + d_n$, provides a basis for calculating an approximation of the tail probabilities of interest. The problem contained in equations (2) and (3) is actually more challenging than the analogous problem for random sums. The validity of these equations seems to require delicate conditions on the tail $\bar F$ of the distribution $F$ to ensure the existence of a nontrivial limit of $P(M_n \le u_n)$ as $n$ tends to infinity. In particular, the tail $\bar F(u_n)$ needs to decay at an appropriate rate as $u_n$ tends to infinity with $n$.

One of the fundamental results is the so-called Poisson approximation, which states that, for any given nonnegative number $\tau$ and a sequence $\{u_n\}$ of real numbers, the following equivalence holds:

(4) $n\bar F(u_n) \rightarrow \tau \iff P(M_n \le u_n) \rightarrow e^{-\tau}$

as $n$ tends to infinity. An elegant special case where the above limit can be directly computed is the case when $F(x) = 1 - e^{-x}$, that is, the exponential random variable with parameter 1. In this case a direct calculation shows that

(5) $P(M_n - \ln n \le x) = \left[P(X \le x + \ln n)\right]^n = \left[1 - n^{-1}e^{-x}\right]^n \rightarrow \exp\{-e^{-x}\}$

as $n$ tends to infinity. Note that $\Lambda(x) := \exp\{-e^{-x}\}$ is the well-known Gumbel distribution. Indeed, the remarkable conclusion of the famous Fisher-Tippett theorem is that if $H$ is a nondegenerate distribution such that (2) holds, then $H$ must belong to one of only three families of distributions: Fréchet, Gumbel, or Weibull.

¹(continued) sequences of random variables exceeding certain thresholds, and the amounts by which they exceed these thresholds, is very relevant to the assessment of environmental risk. The latter has received considerable attention in the financial risk literature, in which these concepts are known as "Value-at-Risk" and "Conditional-Value-at-Risk". We do not discuss these concepts here but refer the interested reader to Rockafellar and Uryasev [33].
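The convergence in (5) is easy to check numerically. The short Python sketch below is our own illustration (not part of the original article): it evaluates the exact distribution $[1 - n^{-1}e^{-x}]^n$ of $M_n - \ln n$ for maxima of $n$ iid Exp(1) variables and compares it with the Gumbel limit $\Lambda(x)$.

```python
import math

def exact_cdf(x: float, n: int) -> float:
    """Exact CDF of M_n - ln(n) for the max M_n of n iid Exp(1) rv's:
    P(M_n - ln n <= x) = F(x + ln n)^n = (1 - e^{-x}/n)^n, cf. (5)."""
    return (1.0 - math.exp(-x) / n) ** n

def gumbel_cdf(x: float) -> float:
    """Gumbel limit law Lambda(x) = exp(-e^{-x})."""
    return math.exp(-math.exp(-x))

if __name__ == "__main__":
    grid = [i / 10 for i in range(-20, 41)]   # x in [-2, 4]
    for n in (10, 1000, 100000):
        worst = max(abs(exact_cdf(x, n) - gumbel_cdf(x)) for x in grid)
        print(f"n = {n:6d}: max |exact - Gumbel| = {worst:.2e}")
```

The discrepancy shrinks roughly like $1/n$, so even moderate sample sizes make the Gumbel approximation serviceable.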
Thus these three well-known distributions provide a basis for many of the approximations of the probabilities of interest.

Of course, while mathematically very convenient, the independent, identically distributed assumption on the random variables of the process $\{X_t\}$, $t \ge 0$, is not realistic in many environmental applications. After all, for the majority of natural phenomena of interest, such as levels of pollution in the air or water or salinity in the soil, previous values of relevant indicator variables definitely influence current values of these variables. Consequently, perhaps the mildest and yet still powerful way of relaxing the above assumption is to replace it by the stationary Markov transition assumption, which, in discrete time, states that, for every possible pair of values $x$ and $x'$, the probability that $X_{t+1} = x'$ given that $X_t = x$ is independent of time and any previous states and actions. That is, there exist stationary transition probabilities:

(6) $p(x' \mid x) := P\{X_{t+1} = x' \mid X_t = x\} \quad \text{for all } t = 0, 1, 2, \ldots$

The above, seemingly still quite restrictive, assumption facilitates a lot of very useful modeling, especially when it is reasonable to discretize the range of the random variables $X_t$ by finitely

many values $\{x_1, x_2, \ldots, x_N\}$. In such a case, an $N \times N$ probability transition matrix $P$ of a Markov chain whose $(i,j)$th entry is $p(x_j \mid x_i)$ contains all the required information about the probability distributions of all the random variables $X_t$ for all $t = 0, 1, 2, \ldots$ This simple approximation makes it possible to apply a wide range of computationally effective, matrix analytic methods to understand many important characteristics of the underlying Markov process. The reader is referred to [27] and [30] for both the classical and more modern perspectives on this interesting subject.

We conclude this section with a simplistic but still illustrative example. Consider a process $\{X_t\}$, $t \ge 0$, where $X_t$ denotes the stock of a certain species of fish in year $t$. Assume that, without any harvesting, the natural marine ecosystem cycles ensure that the stock is in one of only three states: abundant ($x_1$), average ($x_2$), and low ($x_3$). Suppose also, for instance, that the corresponding Markov chain is adequately described by the $3 \times 3$ probability transition matrix:

$$P = \begin{pmatrix} 0.70 & 0.25 & 0.05 \\ 0.10 & 0.80 & 0.10 \\ 0.00 & 0.30 & 0.70 \end{pmatrix}.$$

The entries in the above matrix are completely fictitious, but they may reflect the anticipated cyclic pattern of the population of that particular species of fish. Furthermore, it is well known that the successive powers $P^n$ yield probabilities of $n$-step transitions from state to state.
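The $n$-step claim can be made concrete with a few lines of NumPy. The following sketch (our own, using the fictitious matrix above) raises $P$ to successive powers and shows its rows converging to a common limiting distribution:

```python
import numpy as np

# Fictitious transition matrix for the fish-stock example:
# states are abundant (x1), average (x2), and low (x3).
P = np.array([[0.70, 0.25, 0.05],
              [0.10, 0.80, 0.10],
              [0.00, 0.30, 0.70]])

def n_step(P: np.ndarray, n: int) -> np.ndarray:
    """The (i, j) entry of P^n is the probability of moving from
    state i to state j in exactly n steps."""
    return np.linalg.matrix_power(P, n)

if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        print(f"P^{n}:\n{np.round(n_step(P, n), 4)}")
```

By $n = 64$ the rows already agree to many decimal places with the vector $(6/31, 18/31, 7/31) \approx (0.19, 0.58, 0.23)$, which is the stationary distribution of this chain discussed in the text.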
It is easy to verify that, for this particular transition matrix, the limit $P^* := \lim_{n\to\infty} P^n$ exists and has identical rows, each coinciding with the row vector $\pi$ of so-called stationary distribution probabilities satisfying the fixed-point equation $\pi = \pi P$. Indeed, in this particular instance, the latter is approximately given by $\pi = [0.19, 0.58, 0.23]$. Thus it is possible to conclude that, if the probability transition matrix $P$ continues to describe accurately the stochastic process of interest, then, in the long-run average sense, the population of this particular species of fish will be abundant approximately 19% of the time, average 58% of the time, and low 23% of the time.

Of course, in the above example, all probability distributions were exogenous in that they were assumed to remain unchanged over time and independent of human activities. It is clear, however, that the essence of most environmental problems lies in the violations of such assumptions. For instance, in the fish population example, fishing regulations and market prices for fish are likely to impact the intensity of harvesting and will, therefore, alter these distributions. This naturally leads to the class of models and some of the issues discussed in the next section.

Stochastic Sequential Decision Models

To address some of the issues alluded to in the preceding section, it is natural to move from consideration of Markov chains to the more general framework of Markov decision processes (MDP's, for short). The latter are stochastic, sequential processes in which a "decision maker" has some control over the distributions of a future stream of random benefits or costs frequently referred to as positive or negative "rewards".

More precisely, we shall now consider a process $\Gamma$ that is observed at discrete time points $t = 0, 1, 2, 3, \ldots$ that will sometimes be called stages. At each time point $t$, the state of the process will be denoted by $X_t$.
We shall assume that $X_t$ is a random variable that can take on values from the finite set $\mathcal{X} = \{1, 2, \ldots, N\}$, which from now on will be called the state space. The phrase "the process is in state $x$ at time $t$" will be synonymous with the event $\{X_t = x\}$.

We shall assume that the process is controlled by a controller or a decision maker who chooses an action $a \in A(x) = \{1, 2, \ldots, m(x)\}$ at time $t$ if the process is in state $x$ at that time. We may regard the action chosen as a realization of a random variable $A_t$ denoting the controller's choice at time $t$. Furthermore, we shall assume that the choice of $a \in A(x)$ in state $x$ results in an immediate reward or output $r(x, a)$ and in a probabilistic transition to a new state $x' \in \mathcal{X}$.

Now the previous stationary transition probabilities assumption of (6) is extended by the assumption that, for every $x, x' \in \mathcal{X}$ and $a \in A(x)$,

(7) $p(x' \mid x, a) := P\{X_{t+1} = x' \mid X_t = x, A_t = a\} \quad \text{for all } t = 0, 1, 2, \ldots$

Suppose that the decision maker wishes to influence a stream of expected values of these rewards, denoted by $\{E_x^f(R_t)\}_{t=0}^{\infty}$, where $x$ is the initial "state", $f$ is the control, $R_t$ is the random reward or benefit at stage $t$, and $E$ denotes the mathematical expectation operator. We assume that specifying $x$ and $f$ uniquely determines the probability distribution of $R_t$ for every time period $t$ in the future.

The decision maker might then wish to choose $f$ so as to maximize either the discounted performance criterion

$$v_d(x, f) := \sum_{t=0}^{\infty} \beta^t E_x^f(R_t),$$

where the parameter $\beta \in [0, 1)$ is called the discount factor, or the long-run average performance criterion, defined by

$$v_a(x, f) := \liminf_{T \to \infty} \frac{1}{T+1} \sum_{t=0}^{T} E_x^f(R_t).$$
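For finite state and action sets the discounted criterion can be optimized by standard value iteration, i.e., repeated application of the Bellman operator. The sketch below is our own illustration on a hypothetical three-state, two-action harvesting MDP; all the numbers are invented for the example and are not taken from the article.

```python
import numpy as np

# Hypothetical MDP: states 0 = abundant, 1 = average, 2 = low;
# actions 0 = rest, 1 = harvest.  P[a][x] is the distribution
# p(. | x, a); r[x][a] is the immediate reward r(x, a).
P = np.array([
    [[0.70, 0.25, 0.05],          # rest: natural cycling
     [0.10, 0.80, 0.10],
     [0.00, 0.30, 0.70]],
    [[0.40, 0.40, 0.20],          # harvest: depresses the stock
     [0.05, 0.60, 0.35],
     [0.00, 0.20, 0.80]],
])
r = np.array([[0.0, 3.0],         # harvesting pays most when stock is high
              [0.0, 1.5],
              [0.0, 0.2]])
beta = 0.9                        # discount factor

def value_iteration(P, r, beta, tol=1e-10):
    """Iterate v <- max_a [ r(x, a) + beta * sum_x' p(x'|x, a) v(x') ]
    until the Bellman residual drops below tol."""
    v = np.zeros(r.shape[0])
    while True:
        q = r.T + beta * P @ v    # q[a, x] = r(x, a) + beta * E[v(next)]
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=0)
        v = v_new
```

The greedy policy read off the converged action values is a deterministic stationary control, consistent with the classical theory of discounted MDPs cited in the text.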

Note that the discounted criterion has a natural accounting interpretation as the so-called "present value" of the stream of rewards $\{E_x^f(R_t)\}_{t=0}^{\infty}$. This type of criterion is so widely accepted by economists that, often, the question is not really whether it ought to be a criterion of choice but only of what value the discount factor $\beta$ should take. However, it can be argued that most dedicated environmentalists would favor the long-run average criterion as the more likely to guarantee sustainability.

Even in this relatively simple setting some conceptual complications quickly arise. What constitutes a "control" in this dynamic, stochastic setting? The standard approach is to consider a history of the process at time $t$, namely, $h_t = (x_0, a_0, x_1, a_1, \ldots, a_{t-1}, x_t)$, and to define a decision rule at time $t$ as a map assigning to each history $h_t$ a probability $f_t(h_t, a) \in [0, 1]$ of choosing action $a$. Next, a control is defined as a sequence of decision rules, one at each time, denoted by $f = (f_0, f_1, f_2, \ldots, f_t, \ldots)$. Let $F_B$ be the space of all controls. If for every $t$ the decision rule $f_t$ depends only on the current state at that stage, then the control $f$ is called Markov or memory-less. Let $F_M$ be the space of all Markov controls. If $f \in F_M$ and for all $t, x, a$ the probability of choosing any action $a$, namely $f_t(x, a)$, is independent of $t$, then $f$ is called a stationary control. Let $F_S$ be the set of stationary controls. Finally, if for all $x, a$ the probability $f(x, a) \in \{0, 1\}$, then $f$ is called a deterministic control. Let $F_D$ be the set of deterministic controls; then clearly

(8) $F_D \subset F_S \subset F_M \subset F_B.$

Next, we consider two "optimal control" problems: (i) find a (simple) control $f^0$ such that

(9) $v_d(x) := \max_f v_d(x, f) = v_d(x, f^0),$

where $v_d(x)$ will be called the discounted value of the corresponding discounted MDP, and (ii) find a (simple) control $f^\star$ such that

$$v_a(x) := \max_f v_a(x, f) = v_a(x, f^\star),$$

where $v_a(x)$ will be called the long-run average value of the corresponding long-run MDP.

It is well known (e.g., see [32]) that both of these problems have simple solutions in deterministic controls. Namely, there exists $f^0 \in F_D$ optimal for all $x$ in the discounted problem, as well as $f^\star \in F_D$ optimal for all $x$ in the long-term average problem. Furthermore, there are "good" algorithms for computing $f^0$, $v_d(x)$, $f^\star$, $v_a(x)$.

In some sense the above means that, separately, with regard to the preferred performance criterion of either the economist or the environmentalist, the corresponding optimal control problem is well posed and well solved. However, it is worth considering what happens if we wish to somehow combine these two performance criteria. Surely the most benign way of attempting to do so would be to choose a weight parameter $\lambda \in [0, 1]$ and to try to find a control $f$ so as to maximise

(10) $v_\lambda(x, f) := \lambda(1-\beta)v_d(x, f) + (1-\lambda)v_a(x, f),$

thereby creating the so-called weighted reward criterion and the corresponding weighted reward MDP. Clearly it follows that

(11) $\sup_f v_\lambda(x, f) \le \lambda(1-\beta)v_d(x) + (1-\lambda)v_a(x),$

where the right-hand side constitutes the utopian bound for this new criterion.

Unfortunately, but, perhaps, not surprisingly, in [28] it has been shown that:

(1) The following inequalities hold:

(12) $\sup_{f \in F_D} v_\lambda(x, f) \le \sup_{f \in F_S} v_\lambda(x, f) \le \sup_{f \in F_M} v_\lambda(x, f),$

with $<$ possible in both places.

(2) In general, an optimal control for $v_\lambda(x, f)$ need not exist. However, for each $\lambda$, when the discount factor $\beta$ is sufficiently near 1, there exists an optimal deterministic control.

(3) Nonetheless, it is reassuring that $\sup_{f \in F_M} v_\lambda(x, f) = \sup_{f \in F_B} v_\lambda(x, f)$.

(4) Given any $\varepsilon > 0$ there exists an $\varepsilon$-optimal control $f_\varepsilon$ such that

$$v_\lambda(x, f_\varepsilon) \ge \sup_f v_\lambda(x, f) - \varepsilon.$$

(5) In particular, there exists a "switching time" $\tau(\varepsilon)$:

$$f_\varepsilon = (\underbrace{f_1, f_2, \ldots, f_{\tau(\varepsilon)}}_{\text{be "greedy" for a while}}, \underbrace{f^\star, f^\star, f^\star, \ldots}_{\text{switch to optimal long-run average control}}).$$

Of course, $\tau(\varepsilon)$ depends critically on the parameter $\lambda$ and the underlying data.

In a sense, properties 1–5 above capture the essence of the conflict between the "industrialist" and the "environmentalist", a conflict that is captured in, but not reconciled by, the dilemma of the "right choice" of the switching time $\tau(\varepsilon)$ in the structure of nearly optimal controls given in item 5 above. In the context of the previous motivating example of a fishery, this dilemma lies in the dual desires of wishing to profit from harvesting the species and ensuring that its population remains at sustainable levels in perpetuity.

It should be mentioned that the preceding discussion and results are conceptually similar to and consistent with results published in 1996 by Chichilnisky [12]. The latter are presented in a more general axiomatic framework but do not supply the switching structure of nearly optimal controls mentioned in item 5 above.
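The strictness of the utopian bound (11) shows up even in tiny examples. The following Python sketch is our own two-state construction (not taken from [28]): one stationary policy is best for the discounted criterion, a different one is best for the average criterion, so no single stationary control can attain the bound.

```python
import numpy as np

beta, lam = 0.3, 0.5   # illustrative discount factor and weight

# Two deterministic stationary policies on states {0, 1}.
# "exploit": reward 2 now, but the system degrades to dead state 1;
# "sustain": reward 1 per period, state 0 is preserved forever.
policies = {
    "exploit": (np.array([[0.0, 1.0], [0.0, 1.0]]), np.array([2.0, 0.0])),
    "sustain": (np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([1.0, 0.0])),
}

def v_discounted(Pf, rf):
    """v_d = (I - beta * Pf)^{-1} rf for a fixed stationary policy."""
    return np.linalg.solve(np.eye(2) - beta * Pf, rf)

def v_average(Pf, rf, T=4000):
    """Cesaro average (1/T) sum_{t<T} (Pf^t rf); adequate for this chain."""
    v, Pt = np.zeros(2), np.eye(2)
    for _ in range(T):
        v += Pt @ rf
        Pt = Pt @ Pf
    return v / T

def v_weighted(Pf, rf):
    """Weighted criterion (10) for a fixed stationary policy."""
    return lam * (1 - beta) * v_discounted(Pf, rf) + (1 - lam) * v_average(Pf, rf)

# Starting in state 0: the bound (11) versus the best stationary policy.
vd = {k: v_discounted(*pr)[0] for k, pr in policies.items()}
va = {k: v_average(*pr)[0] for k, pr in policies.items()}
utopian = lam * (1 - beta) * max(vd.values()) + (1 - lam) * max(va.values())
best_stationary = max(v_weighted(*pr)[0] for pr in policies.values())
```

Here "exploit" maximizes the discounted value (2.0 versus 10/7) while "sustain" maximizes the average value (1 versus 0), so the utopian bound is 1.2 while the best stationary control achieves only 1.0: the inequality in (11) is strict.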

We conclude this section by pointing out that there is now a substantial literature dealing with weighted reward criteria (including multiple discount factors) in Markov decision processes and stochastic games (e.g., see [19] and [15]). For a survey of that interesting topic the reader is referred to [16].

Environmental Engineering: The Interplay Between Mathematical Modeling, Technology, and Stochastic Control

The main point of this section is to emphasize that environmental engineering is invariably aimed at controlling the evolution of systems that contain inherent uncertainty. There is much that could be said about the mathematical background to stochastic control and many different specialist areas: state space models, Markov decision processes, dynamic programming, control of linear systems, Kalman filtering, system identification, and adaptive control. An excellent introductory reference is the book by Kumar and Varaiya [29]. See also a more modern look at a variety of applications in the edited volume by Abed [1]. Our purpose here is not to survey the existing theory but rather to illustrate the way in which environmental engineering immediately challenges us to come to terms with managing uncertainty. Of all the twentieth-century advances in mathematical control, perhaps the most insightful and elegant is the optimality principle of dynamic programming. However, despite its theoretical elegance, a direct search implementation is still likely to be plagued by the curse of dimensionality, especially in a stochastic situation. We choose an elementary model of a solar-powered desalination plant with which to illustrate the principle.

Illustration: A Model for a Solar-Powered Desalination Unit

We now move to a more specific but in some sense still generic application. Suppose the energy collected by a solar panel can be used immediately to power a desalination unit or stored in a battery for later use.
As the level of power supplied to the unit increases, the volume rate of fresh water produced also increases, but the process becomes less efficient. This is a classic case of the "law of diminishing returns". Thus, when energy $r$ is supplied to the desalination unit at constant power for a single day, we assume the volume of fresh water produced is given by a performance function $x : [0, \infty) \to [0, \infty)$, which is increasing and strictly concave with $x(0) = 0$. For convenience suppose $x' : (0, \infty) \to (0, \infty)$ is continuous with $x'(r) \to 0$ as $r \to \infty$, and further that $x'' : (0, \infty) \to (-\infty, 0)$ is continuous. The solar energy collected on day $t$ will be modeled as a Markovian random variable $S_t \in S = [0, M]$ with well-defined transition probabilities $P\{S_t \in [0, v] \mid S_{t-1} = u\}$ for each $(u, v) \in S \times S$. Define $F_u : S \to [0, 1]$ for each $u \in S$ by setting

$$F_u(v) = P\{S_t \in [0, v] \mid S_{t-1} = u\}$$

for each $v \in S$. For each allowable configuration of the state variables we wish to find an energy usage policy that maximizes the expected volume of fresh water produced by the desalination unit from day $t = n+1$ to day $t = N$. The state variables are the index $t$ of the day, the amount of energy $b = b_t$ in the battery at the beginning of day $t$, and the amount of solar energy $u_{t-1} = u$ collected on day $t-1$. The control variable is the amount of energy $r = r_t$ we decide to use on day $t$. We use the Bellman principle of dynamic programming [6, 7, 8] to find a stochastic control policy that maximizes the expected total volume of fresh water produced. We show in one special case that a long-term (infinite horizon) optimal strategy uses the same amount of energy each day. Since a long-term strategy must be sustainable, it is intuitively obvious in this case that the energy used each day must be equal to the average solar energy collected.

This model was first formulated to find strategies that maximized the distance traveled by solar-powered racing cars in a given time period.
The initial studies [22, 24] treated the problem as a deterministic control problem where the solar radiation was known in advance. These studies evolved from closely related work on optimal train control. For a recent reference see [26]. The solar car problem was later reformulated as a stochastic control problem [25] in which the daily solar radiation evolved according to a known Markov process. The Markovian nature of the process underlying the evolution of solar radiation is well documented. There have been models for discrete space [2, 4, 23] and for continuous space [3, 9]. More recently, Boland [10] has described the similarity of the Markov structure in solar radiation persistence on two time scales, daily and hourly. The cited treatises overcome the seasonality of solar radiation time series in various different ways. In [2], separate Markov transition matrices for each month are constructed to forecast levels of solar radiation on a daily time scale. On the other hand, [4, 10, 31] make extensive use of spectral analysis to identify significant embedded cyclical behavior in the time series and to model that part as a deterministic component using Fourier series. In the present work any seasonal component has been ignored. This is a reasonable assumption if the time period is relatively short. In general we would need to assume that the seasonal component has been identified and removed, in which case the analysis would focus on the remaining stochastic component.

The Mathematical Model

Define value functions $W_N[u] : [0, \infty) \to [0, \infty)$ for each $u \in S$ on day $N$, the final day of the given period, as the expected volume of fresh water produced on day $N$ given that the energy collected on day $N-1$ was $S_{N-1} = u$ and given that all energy will be used by the end of day $N$. If $b$ is the energy in the battery at the beginning of the final day, then

(13) $W_N[u](b) = \int_S x(b + v_N)\, dF_u(v_N).$

In general we wish to define value functions $W_t[u] : [0, \infty) \to [0, \infty)$ for each $u \in S$ and each $t = n, n+1, \ldots, N$, given that the solar energy collected on day $t-1$ was $u_{t-1} = u$. We begin by defining auxiliary value functions $w_t[u] : [0, \infty) \times [0, \infty) \to [0, \infty)$ for each $u \in S$ on day $t$, given that the solar energy collected on day $t-1$ was $u_{t-1} = u$ and the energy in the battery at the beginning of day $t$ is $b$. Thus we define

(14) $w_t[u](b, r) = x(r) + \int_S W_{t+1}[v_t](b + v_t - r)\, dF_u(v_t),$

and the optimal energy consumption $\varphi_{N-1}[u](b)$ satisfies the equation $x'(\varphi_{N-1}[u](b)) = \int \int x'([b$ …
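Once energy is discretized, the recursion behind (13)–(14) is straightforward to implement by backward induction. The Python sketch below is our own and treats only the deterministic special case in which the same solar energy $s$ arrives every day, with the concave performance function $x(r) = \sqrt{r}$; as the sustainability argument above suggests, the optimal policy then uses exactly the average daily collection.

```python
import math
from functools import lru_cache

S = 4           # solar energy collected each day (deterministic special case)
N = 3           # number of days in the planning horizon
x = math.sqrt   # increasing, strictly concave performance function, x(0) = 0

@lru_cache(maxsize=None)
def W(t: int, b: int) -> float:
    """Maximum total fresh water over days t..N when the battery holds b
    at the start of day t; today's collection S is available immediately."""
    if t == N:
        return x(b + S)   # final day: use everything, cf. (13)
    # choose today's (integer) usage r, cf. the recursion (14)
    return max(x(r) + W(t + 1, b + S - r) for r in range(b + S + 1))

def policy(t: int, b: int) -> int:
    """Optimal energy usage on day t with battery level b."""
    if t == N:
        return b + S
    return max(range(b + S + 1), key=lambda r: x(r) + W(t + 1, b + S - r))
```

With $S = 4$ over three days the optimal usage is $4, 4, 4$ and the value is $3\sqrt{4} = 6$: equal daily usage at the average collection rate. The stochastic version replaces the single successor in the recursion by an expectation over $F_u$, at the cost of a larger state space.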

