STOCHASTIC CALCULUS AND STOCHASTIC DIFFERENTIAL EQUATIONS


SHUHONG LIU

Date: August 2019.

Abstract. This paper introduces stochastic calculus and stochastic differential equations. We start with basic stochastic processes such as martingales and Brownian motion. We then formally define the Itô integral and establish Itô's formula, the fundamental theorem of stochastic calculus. Finally, we prove the Existence and Uniqueness Theorem for stochastic differential equations and present techniques for solving linear stochastic differential equations.

Contents

1. Introduction
2. Stochastic Processes
   2.1. Simple Random Walk on Z
   2.2. Martingale
   2.3. Brownian Motion
3. Stochastic Calculus and Itô's Formula
   3.1. Construction of the Stochastic Integral
   3.2. Itô's Formula
4. Stochastic Differential Equations
   4.1. Definition and Examples
   4.2. Existence and Uniqueness
   4.3. Linear Stochastic Differential Equations
5. Appendix
   5.1. Conditional Expectation
   5.2. Borel-Cantelli

1. Introduction

Oftentimes, we understand the change of a system better than the system itself. For example, consider the ordinary differential equation (ODE)

(1.1)    ẋ(t) = f(t, x(t)),    x(0) = x_0

where x_0 ∈ R is a fixed point and f : [0, ∞) × R → R is a smooth function. The goal is to find the trajectory x(t) satisfying the initial value problem. However,

in many applications, the experimentally measured trajectory does not behave as deterministically as predicted. In some cases, the supposedly smooth trajectory x(t) is not even differentiable in t.

[Figure: a sample rough, nowhere differentiable trajectory x(t).]

Therefore, we would like to include some random noise in the system to explain the disturbance. The stochastic representation of the system (1.1) is

(1.2)    ẋ(t) = f(t, x(t)) + g(t, x(t))ξ(t),    x(0) = x_0

where ξ(t) is white noise. The system (1.2) is called a stochastic differential equation (SDE).

This approach, however, leaves us with some problems to solve:
- define the white noise ξ(t) rigorously;
- define the solution concept of an SDE;
- establish conditions on f, g, and x_0 under which an SDE has solutions;
- discuss the uniqueness of the solution.

As it turns out, the white noise ξ(t) is related to Brownian motion. More precisely, ξ(t) is the infinitesimal increment along the Brownian path. Therefore, we often write the SDE (1.2) as

(1.3)    dx(t) = f(t, x(t))dt + g(t, x(t))dW_t,    x(0) = x_0

where W_t denotes standard Brownian motion. We introduce Brownian motion in Section 2, along with two other important stochastic processes: simple random walk and martingales. In particular, martingales and Brownian motion play a huge role in studying stochastic calculus and stochastic differential equations.

In Section 3, we construct the Itô integral. In other words, we will define what it means to integrate against a Brownian motion. We will then work out a simple example to show the difference between the Itô integral and the Riemann integral. Of course, it is impractical to do every calculation by definition. For Riemann calculus, we have the Fundamental Theorem of Calculus, which gives us the relationship between a differentiable function and its derivative. For stochastic calculus, we have a similar relationship given by Itô's formula.
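As a brief computational aside, the meaning of (1.3) can be previewed numerically with the Euler-Maruyama scheme: replace dt by a small time step and dW_t by an N(0, dt) draw. The following Python sketch is illustrative only; the coefficients f(t, x) = −x and g(t, x) = 0.3 are arbitrary mean-reverting choices, not taken from the paper.

```python
import math
import random

def euler_maruyama(f, g, x0, T, n, rng):
    """Approximate dx = f(t, x) dt + g(t, x) dW_t on [0, T] with n time steps."""
    dt = T / n
    t, x = 0.0, x0
    path = [x0]
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over one step
        x = x + f(t, x) * dt + g(t, x) * dW
        t += dt
        path.append(x)
    return path

rng = random.Random(0)
# Illustrative coefficients (mean-reverting drift, constant noise); not from the paper.
path = euler_maruyama(lambda t, x: -x, lambda t, x: 0.3, 1.0, 5.0, 1000, rng)
```

The simulated path is rough at every scale, matching the non-differentiable trajectories described above.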
In the latter half of Section 3, we prove Itô's formula and show, through a few examples, how Itô's formula simplifies the calculation of stochastic integrals.

In Section 4, we finally discuss stochastic differential equations. Similar to the ODE case, we will prove the Existence and Uniqueness Theorem for SDEs. Knowing there exists a unique solution, we then present solution methods for linear SDEs and give the general formulas.

Once we solve these problems, we have a powerful mathematical tool to study systems that evolve stochastically. SDEs are heavily used in many different areas such as quantitative finance, partial differential equations, and statistical physics. Since each one of these applications is complicated enough for at least another paper, we will not dive into the details. Instead, we give a brief description of each topic at the end of Section 4 and point to introductory materials for interested readers.

2. Stochastic Processes

In this section, we introduce three basic stochastic processes. We start with simple random walk, the most fundamental stochastic process, to give a sense of the interesting consequences of randomness. We then move on to martingales, the model for "fair games". Martingales have properties, namely the Optional Sampling Theorem and the Martingale Inequality, that are crucial for the study of other stochastic processes as well as stochastic calculus. Finally, we introduce Brownian motion, which is at once a continuous-time martingale and a scaling limit (in a certain sense) of simple random walk.

Many results about these stochastic processes are important in studying the Itô integral and stochastic differential equations. However, since these are well-known mathematical objects, most of the proofs will be omitted.

2.1. Simple Random Walk on Z.

Definition 2.1. A random walk on the integers Z with step function F and initial state x ∈ Z is a sequence of random variables

S_n = x + Σ_{i=1}^{n} ξ_i

where ξ_1, . . . , ξ_n are i.i.d.
random variables with common distribution F.

In particular, a simple random walk has Rademacher-1/2 increments, i.e.,

P(ξ_i = 1) = P(ξ_i = −1) = 1/2.

Therefore, a simple random walk is determined by a sequence of fair coin tosses: for each Head, jump to the right; for each Tail, jump to the left.

One (of many) discrete random processes modeled by simple random walk is the evolution of the wealth of a gambler whose investment jumps by ±1 with equal probability in each period. Naturally, we want to study the following problems:

Problem 2.2. (Gambler's Ruin) Suppose the gambler starts with x dollars. What is the probability that his wealth grows to A dollars before he goes broke? More precisely, define

(2.3)    T := min{n : S_n = 0 or A}.

What is P_x(S_T = A)?

Remark 2.4. Before we start, we notice that P(T < ∞) = 1: the game will end whenever there are A consecutive Heads, and if the gambler tosses a fair coin forever, the probability that he never sees A consecutive Heads is 0.

To solve the problem, we define u(x) := P_x(S_T = A). Clearly u(A) = 1 and u(0) = 0. For 0 < x < A, since after the first jump the process is a new simple random walk starting at either x ± 1, we have the difference equation

u(x) = (1/2)u(x + 1) + (1/2)u(x − 1).

Let d(x) := u(x) − u(x − 1). Then from the above, d(x) = d(x − 1) =: d for all x. Since

u(x) − u(0) = Σ_{i=1}^{x} d(i) = xd,

the boundary condition u(A) = 1 gives d = 1/A, so u(x) = x/A.

Proposition 2.5. P_x(S_T = A) = x/A.

Difference equations are heavily used in the study of combinatorics and discrete stochastic processes such as Markov chains. Of course, most difference equations are much more complicated, and interested readers can refer to [1], Section 0.3 for solution methods for more general difference equations. In particular, for linear difference equations with a constant term, we need to formulate them as matrix equations, which can then be solved using matrix multiplication. The second problem of the gambler's game is a simple example of this technique.

Problem 2.6. (Expected Duration) How long, on average, will the game last? More precisely, what is E_x T?

Similar to the first problem, we define v(x) := E_x T. Clearly v(0) = v(A) = 0. For 0 < x < A, since it takes one step to jump to x ± 1, we have

v(x) = 1 + (1/2)v(x + 1) + (1/2)v(x − 1).

Let d(x) := v(x) − v(x − 1). Then the recursion gives d(x) = d(x − 1) − 2, which in matrix form reads

(d(x), 2)^T = M (d(x − 1), 2)^T = M^{x−1} (d(1), 2)^T,   where M = [1 −1; 0 1],

and M^x = [1 −x; 0 1] can be shown by induction. Hence d(x) = d(1) − 2(x − 1) = v(1) − 2(x − 1), as d(1) = v(1) − v(0) = v(1). Since

v(x) = Σ_{i=1}^{x} d(i) = x v(1) − 2 Σ_{i=1}^{x−1} i = x v(1) − x(x − 1),

the boundary condition v(A) = 0 gives v(1) = A − 1, so v(x) = x(A − x).

Proposition 2.7. E_x T = x(A − x).

Remark 2.8. We make a small generalization. Let T := min{n : S_n = A or −B} where A, B ∈ N. Then

(2.9)    P_0(S_T = A) = B/(A + B)   and   E_0 T = AB.

Intuitively, this generalization corresponds to the game where two gamblers with initial wealth A and B dollars, respectively, repeatedly bet 1 dollar on the outcomes of fair coin tosses until one of them goes broke. We will derive the same results later using martingale theory.
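Propositions 2.5 and 2.7 are easy to sanity-check by Monte Carlo simulation. The following Python sketch is illustrative only; the values x = 3 and A = 10 are arbitrary choices.

```python
import random

def gamblers_ruin(x, A, rng):
    """One play: simple random walk from x until it hits 0 or A.
    Returns (reached_A, number_of_steps)."""
    s, steps = x, 0
    while 0 < s < A:
        s += 1 if rng.random() < 0.5 else -1  # fair coin: Head +1, Tail -1
        steps += 1
    return s == A, steps

rng = random.Random(1)
x, A, trials = 3, 10, 20000
wins, total_steps = 0, 0
for _ in range(trials):
    reached, steps = gamblers_ruin(x, A, rng)
    wins += reached
    total_steps += steps
p_hat = wins / trials           # Proposition 2.5 predicts x/A = 0.3
et_hat = total_steps / trials   # Proposition 2.7 predicts x*(A - x) = 21
```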

In discrete stochastic processes, there are many random times similar to (2.3). They are non-anticipating, i.e., at any time n, we can determine whether the criterion for such a random time is met solely by the "history" up to time n. Strictly speaking, we give the following definitions.

Definition 2.10. A discrete filtration of a set Ω is a collection {F_n} of σ-algebras of subsets of Ω such that F_n ⊆ F_{n+1} for all n ∈ N.

In particular, for a discrete stochastic process {X_n}, the natural filtration {F_n} is such that each F_n is the σ-algebra generated by X_1, . . . , X_n. We can interpret F_n as all the information contained in X_1, . . . , X_n.

Definition 2.11. An integer-valued random variable τ is a stopping time relative to a filtration {F_n} if for each n ∈ N, the event {τ = n} is F_n-measurable.

Examples 2.12. First-visit times are usually stopping times. For example, for every i ∈ N, τ_i := min{n : S_n = i} is a stopping time. In contrast, last-visit times are usually not stopping times.

Stopping times are important for the study of simple random walk because of the following property.

Proposition 2.13. (Strong Markov Property) If τ is a stopping time for a random walk {S_n}, then the post-τ sequence {S_{τ+n}} is also a random walk, with the same step function, starting at S_τ, and independent of F_τ.

Proposition 2.5, combined with the Strong Markov Property, yields interesting consequences. For a simple random walk {S_n} starting at x, the probability that it reaches A before 0 is x/A, so the probability that it reaches 0 is at least 1 − x/A. Since this is true for every A, letting A → ∞ gives

(2.14)    P_x{S_n = 0 eventually} = 1.

Notice that (2.14) is translation invariant, since for any i ∈ N, {S_n + i} is a simple random walk starting at x + i. (2.14) is also reflection invariant, since changing x to −x just reverses the roles of Head and Tail. Therefore, for any i ∈ Z,

(2.15)    P_x{S_n = i eventually} = 1.

Now, consider the stopping times τ_i in Examples 2.12.
The Strong Markov Property says the post-τ_i sequence is a new simple random walk independent of F_{τ_i}. Reversing the roles of x and i in (2.15), we know that with probability 1, {S_n} will return to x after it visits i. Applying the Strong Markov Property again, we have a new simple random walk starting at x, which will eventually visit i and return to x, and so on. Inductively, we conclude:

Theorem 2.16. With probability one, simple random walk visits every state i ∈ Z infinitely often.

Given this property, we say simple random walk is recurrent.

2.2. Martingale.

A martingale is the model for "fair games" in which the expected future payoff, conditioned on the current payoff, is the same as the current payoff. Although conditional expectation is easy to understand intuitively, the formal definition needs measure theory. Therefore, the definition and properties of conditional expectation are deferred to the appendix.
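Recurrence can also be glimpsed numerically: the probability that the walk revisits its starting point grows toward 1 as the time horizon grows. The small Python experiment below is illustrative only; the horizons 10, 100, 1000 and the trial count are arbitrary choices.

```python
import random

def returns_to_start(n_steps, rng):
    """True if a simple random walk from 0 revisits 0 within n_steps steps."""
    s = 0
    for _ in range(n_steps):
        s += 1 if rng.random() < 0.5 else -1
        if s == 0:
            return True
    return False

rng = random.Random(7)
trials = 4000
p_return = {
    horizon: sum(returns_to_start(horizon, rng) for _ in range(trials)) / trials
    for horizon in (10, 100, 1000)
}
# The estimated return probability increases toward 1 as the horizon grows.
```

Recurrence (Theorem 2.16) says this probability tends to 1, although no finite horizon reaches it exactly.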

From now on, we will be working in a probability space (Ω, F, P) with a discrete filtration {F_n} or a continuous filtration {F_t}. Continuous filtrations are defined similarly.

Definition 2.17. A continuous filtration of a set Ω is a collection {F_t} of σ-algebras of subsets of Ω such that F_s ⊆ F_t for all s ≤ t.

When we construct the model for fair games, a minimum requirement is the non-anticipating property.

Definition 2.18. A sequence {X_t} of random variables is an adapted process relative to {F_t} if the random variable X_t is F_t-measurable for each t.

On top of being adapted, we also need fair games to have expected future payoff equal to the current payoff. To illustrate the power of martingale theory, we start with the discrete case, so that we can apply the theory to simple random walk.

Definition 2.19. A discrete-time adapted process {X_n} of integrable random variables, that is, E|X_n| < ∞ for all n ∈ N, is a martingale relative to {F_n} if

E(X_{n+1} | F_n) = X_n a.s. for all n ∈ N.

Examples 2.20. In all examples, let {F_n} be the natural filtration.

(1) Let {X_n} be a sequence of i.i.d. random variables with EX_n = 0. Then the sequence of partial sums

(2.21)    S_n = Σ_{i=1}^{n} X_i

is a martingale.

(2) Let {X_n} be a sequence of i.i.d. random variables with EX_n = 0 and Var(X_n) = σ². Let S_n be as in (2.21). Then the sequence

(2.22)    S_n² − nσ²

is a martingale.

(3) Let {X_n} be a sequence of i.i.d. random variables with finite moment generating function φ(θ) = E e^{θX_n}. Let S_n be as in (2.21). Then the sequence

Z_n = e^{θS_n} / φ(θ)^n

is a positive martingale.

One of the most important theorems in martingale theory is Doob's Optional Sampling Theorem. It states that, under certain conditions, the expected payoff at any stopping time is the same as the initial payoff.

Theorem 2.23. (Optional Sampling Theorem) Let {X_n} be a martingale relative to {F_n}. Let τ be a stopping time and let τ ∧ n denote min{τ, n}. Then X_{τ∧n} is a martingale. In particular,

EX_τ = EX_0

if any of the following three conditions holds:
- τ is bounded a.s.;
- X_{τ∧n} is bounded;
- E(X_{τ∧n}²) is bounded uniformly in n.

Proof. See [2], Chapter 1, Section 3. □

Using the Optional Sampling Theorem, we can easily solve many problems related to stopping times. For example, we revisit the Gambler's Ruin Problem 2.2 and the Expected Duration Problem 2.6.

Example 2.24. As shown in (2.21), simple random walk {S_n} is a martingale. It is easy to check that T as in Remark 2.8 is a stopping time and |S_{T∧n}| ≤ max{A, B}. Hence by the Optional Sampling Theorem,

0 = ES_0 = ES_T = A·P(S_T = A) − B·P(S_T = −B)

where P(S_T = A) + P(S_T = −B) = 1. Hence

P(S_T = A) = B/(A + B).

For the second problem, since a Rademacher-1/2 random variable has variance 1, S_n² − n is a martingale, as shown by (2.22). Since |S²_{T∧n} − (T ∧ n)| ≤ max{A, B}² + T, by the Optional Sampling Theorem,

0 = E(S_0² − 0) = E(S_T² − T) = E(S_T²) − ET.

Hence

ET = E(S_T²) = A²·P(S_T = A) + B²·P(S_T = −B) = AB.

We see the results correspond with (2.9), but the derivation is much simpler than solving difference equations.

Now we can discuss continuous-time martingales.

Definition 2.25. A continuous-time adapted process {X_t} of integrable random variables is a martingale relative to {F_t} if E(X_t | F_s) = X_s a.s. for all s ≤ t.

Another important property of martingales is the Martingale Inequality, which gives a good estimate of the maximum value attained over a time period. In later sections, the Martingale Inequality will be applied to continuous-time martingales such as Brownian motion and some Itô processes.

Theorem 2.26. (Martingale Inequality) If {X_t} is a martingale and 1 < p < ∞, then for all t,

E( max_{0≤s≤t} |X_s|^p ) ≤ (p/(p − 1))^p E|X_t|^p.

Proof. See [4], Chapter 2, Section I and Appendix B. □

2.3. Brownian Motion.

Definition 2.27. A standard Brownian motion (or Wiener process) is a continuous-time stochastic process {W_t} satisfying
- W_0 = 0;
- for all t, s ≥ 0, W_{t+s} − W_s has normal N(0, t) distribution;
- for all s ≤ t, the random variable W_t − W_s is independent of W_r for all r ≤ s;
- with probability one, the path t → W_t is continuous.

This definition has many subtle details.
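Definition 2.27 translates directly into a sampling scheme: on a finite time grid, a Brownian path is a cumulative sum of independent N(0, dt) increments. The Python sketch below is illustrative only (grid size and sample count are arbitrary choices); it also checks the defining property Var(W_1) = 1 across many sampled paths.

```python
import math
import random

def brownian_path(T, n, rng):
    """Sample W at times k*T/n, k = 0..n, via independent N(0, dt) increments."""
    dt = T / n
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += rng.gauss(0.0, math.sqrt(dt))  # W_{t+dt} - W_t ~ N(0, dt)
        path.append(w)
    return path

rng = random.Random(3)
endpoints = [brownian_path(1.0, 16, rng)[-1] for _ in range(20000)]
var_hat = sum(w * w for w in endpoints) / len(endpoints)  # estimate of Var(W_1)
```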

Remark 2.28. It is not a priori clear that such a stochastic process exists, since it is possible that the second and third conditions make the path discontinuous. It was first proved by Norbert Wiener that Brownian motion does exist. Interested readers can refer to [2], Chapter 2, Section 5 for a proof of existence outlining Lévy's construction of Brownian motion.

Remark 2.29. Although we mentioned in Section 1 that the white noise ξ(t) is dW_t, Brownian motion is actually nowhere differentiable. The term dW_t should be interpreted as the infinitesimal increment along the Brownian path.

Remark 2.30. Brownian motion can be viewed as the scaling limit of simple random walk. Let {ξ_n} be a sequence of i.i.d. random variables with mean 0 and variance 1. For each n ≥ 1, we define

W_t^n = (1/√n) Σ_{i=1}^{⌊nt⌋} ξ_i.

This is a random step function with jumps of size ξ_i/√n at times i/n for i between 1 and ⌊nt⌋. Since all the ξ_i are independent, W_t^n has independent increments. By the central limit theorem, the distribution of W^n_{t+s} − W^n_s is approximately N(0, t).

So far, it seems W_t^n will converge nicely to a Brownian motion, but in fact, one has to be very careful when taking this limit. Donsker's Theorem proves that as n → ∞, W_t^n converges (in a certain sense) to a standard Brownian motion W_t. The proof uses binary splitting martingales and the Skorokhod representation. Interested readers can refer to [3].

This relation between Brownian motion and simple random walk is important. First, it helps explain why Brownian motion is so ubiquitous in nature. Many stochastic processes behave like random walks with small but frequent jumps, especially over long time periods. Second, it suggests that many statistics and properties of simple random walk have correspondents for Brownian motion. For example, we know that simple random walk is translation and reflection invariant, so naturally, we have the following property of Brownian motion.

Proposition 2.31.
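Remark 2.30 can be illustrated numerically: with Rademacher steps, W^n_1 should be approximately N(0, 1) for large n. The small Python check below is illustrative only; n = 256 and the sample count are arbitrary choices.

```python
import random

def wn_at_1(n, rng):
    """W^n_1 = n^(-1/2) * (sum of n Rademacher-1/2 steps), as in Remark 2.30."""
    s = sum(1 if rng.random() < 0.5 else -1 for _ in range(n))
    return s / n ** 0.5

rng = random.Random(4)
samples = [wn_at_1(256, rng) for _ in range(10000)]
mean_hat = sum(x for x in samples) / len(samples)
var_hat = sum(x * x for x in samples) / len(samples)
# mean_hat is near 0 and var_hat near 1, consistent with W^n_1 -> N(0, 1)
```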
(Symmetry and Scaling Laws) Let {W_t} be a standard Brownian motion. Then each of the following is also a standard Brownian motion:
- {−W_t};
- {W_{t+s} − W_s} for fixed s ≥ 0;
- {aW(t/a²)};
- {tW(1/t)}.

Another important property of simple random walk is the Strong Markov Property, which also holds for Brownian motion. We first define stopping times for continuous-time stochastic processes.

Definition 2.32. A nonnegative random variable τ is a stopping time relative to a filtration {F_t} if for each t ≥ 0, the event {τ ≤ t} is F_t-measurable.

Proposition 2.33. (Strong Markov Property) Let {W_t} be a standard Brownian motion with filtration {F_t}. Let τ be a stopping time. For all t ≥ 0, define

W̃_t = W_{t+τ} − W_τ

and let {F̃_t} be its filtration. Then
- {W̃_t} is also a standard Brownian motion;
- for all t ≥ 0, F̃_t is independent of F_τ.

These two propositions yield useful consequences. We are often interested in the maximal and minimal values attained by Brownian motion over a time period. Formally, define

M(t) := max{W_s : 0 ≤ s ≤ t}   and   m(t) := min{W_s : 0 ≤ s ≤ t}.

We are interested in the events {M(t) ≥ a} and {m(t) ≤ b} for some a and b. For simplicity, we only deal with the maximum here. Define the first-passage time

τ_a := min{t : W_t = a},

which is a stopping time. Then the events {τ_a ≤ t} and {M(t) ≥ a} are identical. Furthermore, since the Brownian path is continuous, the events {M(t) > a} and {τ_a < t} are also identical. We claim that if τ_a ≤ t, then W_t is as likely to be above the level a as below it. The proof of the claim uses the Strong Markov Property and the symmetry of the normal distribution. With details omitted, we arrive at one of the most important formulas for Brownian motion.

Proposition 2.34. (Reflection Principle)

P(M(t) ≥ a) = P(τ_a ≤ t) = 2P(W_t > a) = 2 − 2Φ(a/√t)

where Φ is the cumulative distribution function of the standard normal distribution.

Using the Reflection Principle, we can derive the distribution of τ_a.

Proposition 2.35. For all a, the first passage time τ_a is finite a.s. and has probability density function

f(t) = (a / √(2πt³)) e^{−a²/2t}.

Besides the maximal and minimal values, we are also interested in how far the Brownian path drifts away. One measu
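The Reflection Principle lends itself to a Monte Carlo check: simulate discretized Brownian paths, record their running maxima, and compare the hit frequency of {M(1) ≥ 1} with 2 − 2Φ(1) ≈ 0.317. The Python sketch below is illustrative only; the grid and sample sizes are arbitrary, and the discrete grid slightly underestimates the true maximum because it misses excursions between grid points.

```python
import math
import random

def max_of_discretized_bm(T, n, rng):
    """Running maximum of a Brownian path sampled at n grid points on [0, T]."""
    dt = T / n
    w = m = 0.0
    for _ in range(n):
        w += rng.gauss(0.0, math.sqrt(dt))
        m = max(m, w)
    return m

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = random.Random(5)
a, T, trials = 1.0, 1.0, 4000
hits = sum(max_of_discretized_bm(T, 900, rng) >= a for _ in range(trials))
p_hat = hits / trials                          # Monte Carlo estimate of P(M(1) >= 1)
p_theory = 2.0 - 2.0 * Phi(a / math.sqrt(T))   # Reflection Principle value
```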

