SC505 STOCHASTIC PROCESSES Class Notes


SC505 Stochastic Processes
Class Notes

© Prof. D. Castañon & Prof. W. Clem Karl
Dept. of Electrical and Computer Engineering
Boston University
College of Engineering
8 St. Mary's Street
Boston, MA 02215

Fall 2004


Contents

1 Introduction to Probability
  1.1 Axioms of Probability
  1.2 Conditional Probability and Independence of Events
  1.3 Random Variables
  1.4 Characterization of Random Variables
  1.5 Important Random Variables
    1.5.1 Discrete-valued random variables
    1.5.2 Continuous-valued random variables
  1.6 Pairs of Random Variables
  1.7 Conditional Probabilities, Densities, and Expectations
  1.8 Random Vectors
  1.9 Properties of the Covariance Matrix
  1.10 Gaussian Random Vectors
  1.11 Inequalities for Random Variables
    1.11.1 Markov inequality
    1.11.2 Chebyshev inequality
    1.11.3 Chernoff Inequality
    1.11.4 Jensen's Inequality
    1.11.5 Moment Inequalities

2 Sequences of Random Variables
  2.1 Convergence Concepts for Random Sequences
  2.2 The Central Limit Theorem and the Law of Large Numbers
  2.3 Advanced Topics in Convergence
  2.4 Martingale Sequences
  2.5 Extensions of the Law of Large Numbers and the Central Limit Theorem
  2.6 Spaces of Random Variables

3 Stochastic Processes and their Characterization
  3.1 Introduction
  3.2 Complete Characterization of Stochastic Processes
  3.3 First and Second-Order Moments of Stochastic Processes
  3.4 Special Classes of Stochastic Processes
  3.5 Properties of Stochastic Processes
  3.6 Examples of Random Processes
    3.6.1 The Random Walk
    3.6.2 The Poisson Process
    3.6.3 Digital Modulation: Phase-Shift Keying
    3.6.4 The Random Telegraph Process
    3.6.5 The Wiener Process and Brownian Motion
  3.7 Moment Functions of Vector Processes
  3.8 Moments of Wide-sense Stationary Processes
  3.9 Power Spectral Density of Wide-Sense Stationary Processes

4 Mean-Square Calculus for Stochastic Processes
  4.1 Continuity of Stochastic Processes
  4.2 Mean-Square Differentiation
  4.3 Mean-Square Integration
  4.4 Integration and Differentiation of Gaussian Stochastic Processes
  4.5 Generalized Mean-Square Calculus
  4.6 Ergodicity of Stationary Random Processes

5 Linear Systems and Stochastic Processes
  5.1 Introduction
  5.2 Review of Continuous-time Linear Systems
  5.3 Review of Discrete-time Linear Systems
  5.4 Extensions to Multivariable Systems
  5.5 Second-order Statistics for Vector-Valued Wide-Sense Stationary Processes
  5.6 Continuous-time Linear Systems with Random Inputs

6 Sampling of Stochastic Processes
  6.1 The Sampling Theorem

7 Model Identification for Discrete-Time Processes
  7.1 Autoregressive Models
  7.2 Moving Average Models
  7.3 Autoregressive Moving Average (ARMA) Models
  7.4 Dealing with non-zero mean processes

8 Detection Theory
  8.1 Bayesian Binary Hypothesis Testing
    8.1.1 Bayes Risk Approach and the Likelihood Ratio Test
    8.1.2 Special Cases
    8.1.3 Examples
  8.2 Performance and the Receiver Operating Characteristic
    8.2.1 Properties of the ROC
    8.2.2 Detection Based on Discrete-Valued Random Variables
  8.3 Other Threshold Strategies
    8.3.1 Minimax Hypothesis Testing
    8.3.2 Neyman-Pearson Hypothesis Testing
  8.4 M-ary Hypothesis Testing
    8.4.1 Special Cases
    8.4.2 Examples
    8.4.3 M-Ary Performance Calculations
  8.5 Gaussian Examples

9 Series Expansions and Detection of Stochastic Processes
  9.1 Deterministic Functions
  9.2 Series Expansion of Stochastic Processes
  9.3 Detection of Known Signals in Additive White Noise
  9.4 Detection of Unknown Signals in White Noise
  9.5 Detection of Known Signals in Colored Noise

10 Estimation of Parameters
  10.1 Introduction
  10.2 General Bayesian Estimation
    10.2.1 General Bayes Decision Rule
    10.2.2 General Bayes Decision Rule Performance
  10.3 Bayes Least Square Estimation
  10.4 Bayes Maximum A Posteriori (MAP) Estimation
  10.5 Bayes Linear Least Square (LLSE) Estimation
  10.6 Nonrandom Parameter Estimation
    10.6.1 Cramer-Rao Bound
    10.6.2 Maximum-Likelihood Estimation
    10.6.3 Comparison to MAP estimation

11 LLSE Estimation of Stochastic Processes and Wiener Filtering
  11.1 Introduction
  11.2 Historical Context
  11.3 LLSE Problem Solution: The Wiener-Hopf Equation
  11.4 Wiener Filtering
    11.4.1 Noncausal Wiener Filtering (Wiener Smoothing)
    11.4.2 Causal Wiener Filtering
    11.4.3 Summary

12 Recursive LLSE: The Kalman Filter
  12.1 Introduction
  12.2 Historical Context
  12.3 Recursive Estimation of a Random Vector
  12.4 The Discrete-Time Kalman Filter
    12.4.1 Initialization
    12.4.2 Measurement Update Step
    12.4.3 Prediction Step
    12.4.4 Summary
    12.4.5 Additional Points
    12.4.6 Example
    12.4.7 Comparison of the Wiener and Kalman Filter

13 Discrete State Markov Processes
  13.1 Discrete-time, Discrete Valued Markov Processes
  13.2 Continuous-Time, Discrete Valued Markov Processes
  13.3 Birth-Death Processes
  13.4 Queuing Systems
  13.5 Inhomogeneous Poisson Processes
  13.6 Applications of Poisson Processes

A Useful Transforms

B Partial-Fraction Expansions
  B.1 Continuous-Time Signals
  B.2 Discrete-Time Signals

C Summary of Linear Algebra
  C.1 Vectors and Matrices
  C.2 Matrix Inverses and Determinants
  C.3 Eigenvalues and Eigenvectors
  C.4 Similarity Transformation
  C.5 Positive-Definite Matrices
  C.6 Subspaces
  C.7 Vector Calculus

D The non-zero mean case

List of Figures

Interarrival Times τk.
Arrival times T(n) and interarrival times τk.
The Poisson Counting Process (PCP) N(t) and the relationship between arrival times T(n) and interarrival times τk.
Detection problem components.
Illustration of a deterministic decision rule as a division of the observation space into disjoint regions, illustrated here for the case of two possibilities.
General scalar Gaussian case.
Scalar Gaussian case with equal variances.
Scalar Gaussian case with equal means.
Illustration of ROC.
Illustration of PD and PF calculation.
Illustration of ROC properties.
Illustration of ROC convexity using randomized decision rules.
Illustration of ROC behavior as we obtain more independent observations.
Illustration of ROC for a discrete valued problem of Example 8.11.
Illustration of ROC for a discrete valued problem of Example 8.12.
Illustration of the performance of a randomized decision rule.
Illustration of the overall ROC obtained for a discrete valued observation problem using randomized rules.
Left: Illustration of the expected cost of a decision rule using an arbitrary fixed threshold as a function of the true prior probability P1; the maximum cost of this decision rule is at the left endpoint, and the lower curve is the corresponding expected cost of the optimal LRT. Right: The expected cost of the minimax decision rule as a function of the true prior probability P1.
Finding the minimax operating point by intersecting (8.85) with the ROC for the optimal LRT.
Likelihoods for a Neyman-Pearson problem.
Scaled densities, decision regions, and PF for the problem of Example 8.13.
Decision boundaries in the space of the likelihoods for an M-ary problem.
Illustration of the decision rule in the original data space.
Illustration of the decision rule in the likelihood space.
Illustration of the ML decision rule in the observation space.
Illustration of decision rule in the observation space.
Illustration of the calculation of Pr(Decide H0 | H1) in the observation space.
Illustration of the calculation of Pr(Decide H1 | H1) in the observation space.
10.1 Parameter Estimation Problem Components.
10.2 Square error cost function.
10.3 BLSE Example.
10.4 Uniform or MAP cost function.
10.5 Illustration of geometry behind MAP estimate derivation.
10.6 Illustration of the projection theorem for LLSE.
10.7 Interpretation of the ML Estimator: (a) pY|X(y|x) viewed as a function of y for fixed values of x, (b) pY|X(y|x) viewed as a function of x for fixed y, (c) pY|X(y|x) viewed as a function of both x and y. For a given observation y0, x̂ML(y) is the maximum with respect to x for the given y = y0.
11.1 Linear Estimator for a Stochastic Process.
11.2 Estimation Types Based on Relative Times of Observation and Estimate.
11.3 Impulse response of noncausal Wiener filter of example.
11.4 Power spectra of signal and noise for example.
11.5 Bode-Shannon Whitening Approach to Causal Wiener Filtering.
11.6 Wiener Filter for White Noise Observations.
11.7 Relationship between time domain and Laplace domain quantities for the Causal Wiener Filter for White Noise Observations.
11.8 Pole-zero plot and associated regions of convergence.
11.9 Function f(t), the pole-zero plot, and the corresponding ROC.
11.10 Function f(t), the pole-zero plot, and the corresponding ROC.
11.11 Function f(t), the pole-zero plot, and the corresponding ROC.
11.12 Function f(t), the pole-zero plot, and the corresponding ROC.
11.13 Plot of f(t) for T > 0 and T < 0.
11.14 Whitening Filter W(s).
11.15 Illustration of pole-zero symmetry properties of SYY(s).
11.16 Overall causal Wiener Filter.
11.17 Summary of Causal Wiener Filter.
11.18 Summary of Wiener Filter Solutions.
12.1 Kalman Filtering Example: Estimate.
12.2 Kalman Filtering Example: Covariance and Gain.
13.1 Diagram for example.

List of Tables

1.1 Summary of probability distribution function and probability density relationships.
1.2 Important random variables. (N/A under the PDF column indicates that there is no simplified form.)
5.1 Common Functions and their z-transforms.
10.1 Comparison of MAP and ML Estimation for a particular example.
12.1 Comparison of the causal Wiener filter and the Kalman filter.
A.1 Fourier transform and inverse Fourier transform definitions.
A.2 Discrete-time Fourier series and transform relationships.
A.3 Fourier Transform Properties.
A.4 Useful Continuous-Time Fourier Transform Pairs.
A.5 Useful Discrete-Time Fourier Transform Pairs.
A.6 Useful Laplace Transform Pairs.
A.7 Useful Z-Transform Pairs.


Chapter 1
Introduction to Probability

What is probability theory? It is an axiomatic theory which describes and predicts the outcomes of inexact, repeated experiments. Note the emphases in the above definition. The basis of probabilistic analysis is to determine or estimate the probabilities that certain known events occur, and then to use the axioms of probability theory to combine this information to derive probabilities of other events of interest, and to predict the outcomes of certain experiments.

For example, consider any card game. The inexact experiment is the shuffling of a deck of cards, with the outcome being the order in which the cards appear. An estimate of the underlying probabilities would be that all orderings are equally likely; the underlying events would then be assigned a given probability. Based on the underlying probability of the events, you m
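The card-game example can also be explored empirically. The sketch below (not from the notes; the function name and event are illustrative choices) simulates the inexact experiment of shuffling a deck many times, under the "all orderings equally likely" model, and estimates the probability of one event of interest: an ace ending up on top of the deck. The exact answer under that model is 4/52 ≈ 0.0769, so the simulated frequency should land close to it.

```python
import random

def estimate_top_card_is_ace(num_trials=100_000, seed=0):
    """Monte Carlo estimate of P(top card of a shuffled deck is an ace),
    assuming all 52! orderings are equally likely. Exact value: 4/52."""
    rng = random.Random(seed)
    # A deck of 52 cards: 13 ranks (rank 0 plays the role of the ace), 4 suits.
    deck = [(rank, suit) for rank in range(13) for suit in range(4)]
    hits = 0
    for _ in range(num_trials):
        rng.shuffle(deck)        # the inexact experiment: one shuffle
        if deck[0][0] == 0:      # event of interest: an ace on top
            hits += 1
    return hits / num_trials

print(estimate_top_card_is_ace())  # close to 4/52 ≈ 0.0769
```

This is exactly the pattern described in the text: assign probabilities to elementary outcomes (uniform over orderings), then derive the probability of a composite event of interest; the simulation simply replaces the axiomatic calculation with repeated trials.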

