Probability Theory and Mathematical Statistics, Lecture 10


Probability Theory and Mathematical Statistics
Lecture 10: Special Probability Distributions
Chih-Yuan Hung
School of Economics and Management, Dongguan University of Technology
May 8, 2019

Introduction

This and the next chapter focus on the most prominent probability distributions and densities. We study their parameters, e.g. the mean µ and the variance σ². These parameters are obtained either by direct summation/integration or via moment-generating functions. Some distributions are important in themselves; others are important for studying the techniques used to derive their parameters.

Definition (Discrete Uniform Distribution)

A random variable X has a discrete uniform distribution, and is referred to as a discrete uniform random variable, if and only if its probability distribution is given by

    f(x) = 1/k  for x = x_1, x_2, ..., x_k

where x_i ≠ x_j when i ≠ j.

A discrete uniform random variable can be seen as taking k different values, each equally likely. A special case: x_i = i. E.g. k = 6, x_i = 1, 2, ..., 6, and f(i) = 1/6 is the case of rolling a balanced die.

Example (Exercises 1 and 2)

If X has the discrete uniform distribution f(x) = 1/k for x = 1, 2, ..., k, show that

1. its mean is µ = (k + 1)/2
2. its variance is σ² = (k² − 1)/12
3. its moment-generating function is M_X(t) = e^t (1 − e^{kt}) / (k (1 − e^t))

Solution (µ)

    µ = E(X) = Σ_{x=1}^{k} x · (1/k)
             = (1/k)(1 + 2 + ... + k)
             = (1/k) · k(1 + k)/2
             = (k + 1)/2

Solution (σ²)

    E(X²) = Σ_{x=1}^{k} x² · (1/k)
          = (1/k)(1² + 2² + ... + k²)
          = (1/k) · k(k + 1)(2k + 1)/6
          = (k + 1)(2k + 1)/6

    σ² = E(X²) − (E(X))²
       = (k + 1)(2k + 1)/6 − ((k + 1)/2)²
       = (4k² + 6k + 2 − 3k² − 6k − 3)/12
       = (k² − 1)/12

Solution (MGF)

    M_X(t) = E(e^{Xt}) = Σ_{x=1}^{k} e^{xt} · (1/k)
           = (1/k)(e^t + e^{2t} + ... + e^{kt})
           = (1/k) e^t (1 + e^t + ... + e^{(k−1)t})
           = e^t (e^{kt} − 1) / (k (e^t − 1))
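The mean and variance formulas can be checked numerically. A minimal sketch in Python (not part of the original slides), using exact rational arithmetic for the die case k = 6:

```python
from fractions import Fraction

def discrete_uniform_stats(k):
    """Mean and variance of X uniform on {1, ..., k}, by direct summation."""
    mean = sum(Fraction(x, k) for x in range(1, k + 1))
    ex2 = sum(Fraction(x * x, k) for x in range(1, k + 1))
    return mean, ex2 - mean ** 2

k = 6
mean, var = discrete_uniform_stats(k)
assert mean == Fraction(k + 1, 2)      # mu = (k+1)/2 = 7/2
assert var == Fraction(k * k - 1, 12)  # sigma^2 = (k^2 - 1)/12 = 35/12
```

Using `Fraction` avoids floating-point rounding, so the sums match the closed forms exactly.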

The Bernoulli Distribution

An experiment with only two possible outcomes, success and failure, with probabilities θ and 1 − θ, is called a Bernoulli trial. The corresponding random variable is defined as follows.

Definition (The Bernoulli Distribution)

A random variable X has a Bernoulli distribution, and is referred to as a Bernoulli random variable, if and only if its probability distribution is given by

    f(x; θ) = θ^x (1 − θ)^{1−x}  for x = 0, 1

Note that the only parameter of the Bernoulli distribution is θ, the probability of success. Repetitions of Bernoulli experiments, the trials, lead to some important extensions.

Repeating Trials I

Repeat the same Bernoulli trial n times, with the same probability of success θ. What is the probability of a given number of successes?

Definition (The Binomial Distribution)

A random variable X has a binomial distribution, and is referred to as a binomial random variable, if and only if its probability distribution is given by

    b(x; n, θ) = C(n, x) θ^x (1 − θ)^{n−x}  for x = 0, 1, 2, ..., n

where C(n, x) denotes the binomial coefficient "n choose x".
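As a quick sanity check of the definition (a sketch added here, not from the slides), the probabilities over the full support sum to 1, and n = 1 reduces to the Bernoulli distribution:

```python
from math import comb

def binom_pmf(x, n, theta):
    """b(x; n, theta) = C(n, x) * theta**x * (1 - theta)**(n - x)."""
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# Probabilities over x = 0, ..., n sum to 1.
assert abs(sum(binom_pmf(x, 12, 0.3) for x in range(13)) - 1.0) < 1e-12
# n = 1 gives back the Bernoulli pmf.
assert binom_pmf(1, 1, 0.3) == 0.3
```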

Example (1)

Find the probability of getting five heads and seven tails in 12 flips of a balanced coin.

Solution (1)

With x = 5, n = 12, and θ = 1/2, the probability is

    b(5; 12, 1/2) = C(12, 5) (1/2)^5 (1/2)^7

Looking up the value in Table VI, b(5; 12, 1/2) ≈ 0.19.

Example (2)

Find the probability that 7 of 10 persons will recover from a tropical disease if we can assume independence and the probability is 0.80 that any one of them will recover from the disease.

Solution (2)

With x = 7, n = 10, and θ = 0.80, the probability is

    b(7; 10, 0.8) = C(10, 7) (0.8)^7 (0.2)^3 ≈ 0.2013

Looking up the value in Table VI gives b(7; 10, 0.8) ≈ 0.20.

For Example 1 we can also refer to Table I, but for Example 2 Table I must be used "in reverse".

Theorem (1)

    b(x; n, θ) = b(n − x; n, 1 − θ)

Proof:

    b(x; n, θ) = C(n, x) θ^x (1 − θ)^{n−x}
               = [n! / (x! (n − x)!)] (1 − θ)^{n−x} θ^x
               = [n! / ((n − x)! (n − (n − x))!)] (1 − θ)^{n−x} θ^{n−(n−x)}
               = C(n, n − x) (1 − θ)^{n−x} θ^{n−(n−x)}
               = b(n − x; n, 1 − θ)

Example 2 revisited: b(7; 10, 0.8) = b(3; 10, 0.2) ≈ 0.20.
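The symmetry identity of Theorem 1 is easy to verify over a whole support; a short self-contained check (an illustration, not from the slides):

```python
from math import comb

def binom_pmf(x, n, theta):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# b(x; n, theta) = b(n - x; n, 1 - theta): counting x successes with
# probability theta is the same as counting n - x failures with 1 - theta.
for x in range(11):
    assert abs(binom_pmf(x, 10, 0.8) - binom_pmf(10 - x, 10, 0.2)) < 1e-12
```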

Theorem (2)

The mean and variance of the binomial distribution are

    µ = nθ  and  σ² = nθ(1 − θ)

Proof.
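The theorem can be verified numerically by direct summation over the pmf (a sketch for one illustrative parameter pair, not part of the slides):

```python
from math import comb

def binom_pmf(x, n, theta):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

n, theta = 12, 0.3
mean = sum(x * binom_pmf(x, n, theta) for x in range(n + 1))
var = sum(x**2 * binom_pmf(x, n, theta) for x in range(n + 1)) - mean**2
assert abs(mean - n * theta) < 1e-9               # mu = n*theta
assert abs(var - n * theta * (1 - theta)) < 1e-9  # sigma^2 = n*theta*(1 - theta)
```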

Proportion of Successes

Suppose X is a binomial random variable with parameters (n, θ). Consider Y = X/n, the proportion of successes in n trials.

Theorem (3)

If X has a binomial distribution with parameters n and θ, and Y = X/n, then

    E(Y) = θ  and  σ²_Y = θ(1 − θ)/n

Law of Large Numbers

Recall Chebyshev's theorem:

    P(|X − µ| < kσ) ≥ 1 − 1/k²

Let kσ = c; we are asking for a lower bound on the probability that the random variable X lies within distance c of its mean. Since k² = c²/σ²,

    P(|X − µ| < c) ≥ 1 − σ²/c²

If X ~ b(n, θ), then for Y = X/n and any positive constant c we have

    P(|Y − θ| < c) ≥ 1 − θ(1 − θ)/(n · c²)

As n → ∞, the bound tends to 1, so Y is close to θ with probability approaching 1.
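A small simulation (an illustrative sketch with arbitrary θ, c, and sample sizes) shows that the observed frequency of {|Y − θ| < c} respects the Chebyshev bound and improves with n:

```python
import random

random.seed(0)  # reproducible sketch
theta, c = 0.3, 0.05
trials = 500

# Empirical check of P(|Y - theta| < c) >= 1 - theta(1 - theta)/(n c^2).
for n in (100, 2000):
    hits = 0
    for _ in range(trials):
        successes = sum(random.random() < theta for _ in range(n))
        if abs(successes / n - theta) < c:
            hits += 1
    bound = 1 - theta * (1 - theta) / (n * c**2)
    assert hits / trials >= bound  # observed frequency respects the bound
```

The Chebyshev bound is loose: for n = 100 it only guarantees 0.16, while the observed frequency is far higher.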

MGF

Theorem

The moment-generating function of the binomial distribution is given by

    M_X(t) = [1 + θ(e^t − 1)]^n

Proof: By the definition of the MGF and the binomial distribution,

    M_X(t) = Σ_{x=0}^{n} e^{xt} C(n, x) θ^x (1 − θ)^{n−x}
           = Σ_{x=0}^{n} C(n, x) (θe^t)^x (1 − θ)^{n−x}
           = (θe^t + 1 − θ)^n
           = [1 + θ(e^t − 1)]^n

Repeating Trials II

Repeat a Bernoulli experiment: what is the probability that the kth success occurs on the xth trial?

Definition (Negative Binomial Distribution)

A random variable X has a negative binomial distribution, and is referred to as a negative binomial random variable, if and only if

    b*(x; k, θ) = C(x − 1, k − 1) θ^k (1 − θ)^{x−k}  for x = k, k + 1, k + 2, ...

The premise of this scenario is that, in the first x − 1 trials, the number of successes is exactly k − 1. If that is not the case, say only k − 2 successes, then the xth trial cannot produce the kth success.

As a result, the first part is a binomial event: k − 1 successes in x − 1 trials,

    b(k − 1; x − 1, θ) = C(x − 1, k − 1) θ^{k−1} (1 − θ)^{x−k}  for k − 1 = 0, 1, 2, ..., x − 1

The probability that the xth trial is a success is θ, therefore

    b*(x; k, θ) = θ · C(x − 1, k − 1) θ^{k−1} (1 − θ)^{x−k}
                = C(x − 1, k − 1) θ^k (1 − θ)^{x−k}  for x = k, k + 1, k + 2, ...

Theorem

    b*(x; k, θ) = (k/x) · b(k; x, θ)
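The identity in the theorem can be checked directly against the two pmfs (a self-contained sketch with arbitrary k and θ, not from the slides):

```python
from math import comb

def neg_binom_pmf(x, k, theta):
    """b*(x; k, theta): probability that the k-th success occurs on trial x."""
    return comb(x - 1, k - 1) * theta**k * (1 - theta)**(x - k)

def binom_pmf(x, n, theta):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# Check b*(x; k, theta) = (k/x) * b(k; x, theta) over a range of x.
for x in range(3, 30):
    assert abs(neg_binom_pmf(x, 3, 0.4) - (3 / x) * binom_pmf(3, x, 0.4)) < 1e-12
```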

Example

If the probability is 0.40 that a child exposed to a certain contagious disease will catch it, what is the probability that the tenth child exposed to the disease will be the third to catch it?

Solution

1. Directly: b*(10; 3, 0.4) = C(9, 2) (0.4)^3 (0.6)^7 ≈ 0.0645
2. Via the theorem: b*(10; 3, 0.4) = (3/10) · b(3; 10, 0.4); referring to Table I, (3/10) · 0.2150 = 0.0645

Theorem

The mean and the variance of the negative binomial distribution are

    µ = k/θ  and  σ² = (k/θ)(1/θ − 1)

Repeating Trials III

A special case of the negative binomial distribution: k = 1.

Definition (Geometric Distribution)

A random variable X has a geometric distribution, and is referred to as a geometric random variable, if and only if its probability distribution is given by

    g(x; θ) = θ (1 − θ)^{x−1}  for x = 1, 2, 3, ...

Mean and variance of the geometric distribution:

    µ = 1/θ  and  σ² = (1 − θ)/θ²

All of the random variables above have the same probability of success, θ, on every trial.
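The stated mean can be checked by truncating the infinite sum far into the tail (a numerical sketch with an arbitrary θ, not from the slides):

```python
theta = 0.25

def geom_pmf(x, theta):
    """g(x; theta) = theta * (1 - theta)**(x - 1)."""
    return theta * (1 - theta) ** (x - 1)

# Truncating at x = 500 leaves a negligible tail for theta = 0.25:
# the pmf sums to 1 and the mean matches mu = 1/theta.
total = sum(geom_pmf(x, theta) for x in range(1, 500))
mean = sum(x * geom_pmf(x, theta) for x in range(1, 500))
assert abs(total - 1.0) < 1e-10
assert abs(mean - 1 / theta) < 1e-6
```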

Sampling Without Replacement

The assumption of a constant probability of success is not suitable for experiments that sample without replacement. Consider a set of N elements, of which M are regarded as successes and the other N − M as failures. In contrast to the binomial setting, let X be the number of successes in n trials without replacement. Note that x ≤ M and n − x ≤ N − M.

Definition (The Hypergeometric Distribution)

A random variable X has a hypergeometric distribution, and is referred to as a hypergeometric random variable, if and only if its probability distribution is given by

    h(x; n, N, M) = C(M, x) C(N − M, n − x) / C(N, n)  for x = 0, 1, 2, ..., min{n, M}

Example (6)

As part of an air-pollution survey, an inspector decides to examine the exhaust of 6 of a company's 24 trucks. If 4 of the company's trucks emit excessive amounts of pollutants, what is the probability that none of them will be included in the inspector's sample?

Solution

With x = 0, n = 6, N = 24, and M = 4,

    h(0; 6, 24, 4) = C(4, 0) C(20, 6) / C(24, 6) ≈ 0.288
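Example 6 can be reproduced with a few lines of Python (a sketch, not part of the slides):

```python
from math import comb

def hypergeom_pmf(x, n, N, M):
    """h(x; n, N, M) = C(M, x) C(N - M, n - x) / C(N, n)."""
    return comb(M, x) * comb(N - M, n - x) / comb(N, n)

# Example 6: none of the 4 polluting trucks among the 6 sampled from 24.
p = hypergeom_pmf(0, 6, 24, 4)
assert abs(p - 0.288) < 5e-4
```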

Theorem

The mean and the variance of the hypergeometric distribution are

    µ = nM/N  and  σ² = nM(N − M)(N − n) / (N²(N − 1))

When N is large and n is relatively small, the binomial distribution with θ = M/N is a good approximation to the hypergeometric distribution.

Example (7)

With x = 2, n = 5, N = 120, and M = 80:

    h(2; 5, 120, 80) = C(80, 2) C(40, 3) / C(120, 5) ≈ 0.164

    b(2; 5, 2/3) = C(5, 2) (2/3)² (1/3)³ ≈ 0.165
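Example 7's approximation can be reproduced numerically (a self-contained sketch, not from the slides):

```python
from math import comb

def hypergeom_pmf(x, n, N, M):
    return comb(M, x) * comb(N - M, n - x) / comb(N, n)

def binom_pmf(x, n, theta):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# Example 7: N = 120 is large relative to n = 5, so b with theta = M/N = 2/3
# approximates h closely.
h = hypergeom_pmf(2, 5, 120, 80)
b = binom_pmf(2, 5, 2 / 3)
assert abs(h - 0.164) < 5e-4
assert abs(b - 0.165) < 5e-4
assert abs(h - b) < 0.002  # the two values agree to about three decimals
```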

The Poisson Distribution

When n is very large, the calculation of binomial probabilities becomes very tedious. A good approximation in this case is the Poisson random variable. Consider the limiting case n → ∞, θ → 0, with nθ fixed; denote nθ by λ.

    b(x; n, θ) = C(n, x) θ^x (1 − θ)^{n−x}
               = C(n, x) (λ/n)^x (1 − λ/n)^{n−x}
               = [n(n − 1)(n − 2)···(n − x + 1) / x!] (λ/n)^x [(1 − λ/n)^{n/λ}]^λ (1 − λ/n)^{−x}

The Poisson Distribution (continued)

    b(x; n, θ) = [n(n − 1)(n − 2)···(n − x + 1) / n^x] (λ^x / x!) [(1 − λ/n)^{n/λ}]^λ (1 − λ/n)^{−x}
               = [1 · (1 − 1/n)(1 − 2/n)···(1 − (x − 1)/n)] (λ^x / x!) [(1 − λ/n)^{n/λ}]^λ (1 − λ/n)^{−x}

As n → ∞, (1 − λ/n)^{n/λ} → e^{−1} and the remaining factors tend to 1, so

    b(x; n, θ) → λ^x e^{−λ} / x!

Definition (Poisson Distribution)

A random variable X has a Poisson distribution, and is referred to as a Poisson random variable, if and only if its probability distribution is given by

    p(x; λ) = λ^x e^{−λ} / x!  for x = 0, 1, 2, ...

Example (5.8)

Use Figure 4 to determine the value of x (from 5 to 15) for which the error is greatest when we use the Poisson distribution with λ = 7.5 to approximate the binomial distribution with n = 150 and θ = 0.05.
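Instead of reading the error off Figure 4, the approximation error in Example 5.8 can be computed directly (a sketch, not part of the slides):

```python
from math import comb, exp, factorial

def poisson_pmf(x, lam):
    """p(x; lambda) = lambda**x * exp(-lambda) / x!."""
    return lam**x * exp(-lam) / factorial(x)

def binom_pmf(x, n, theta):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# Approximation error |p(x; 7.5) - b(x; 150, 0.05)| for x = 5, ..., 15,
# the range asked about in Example 5.8.
errors = {x: abs(poisson_pmf(x, 7.5) - binom_pmf(x, 150, 0.05)) for x in range(5, 16)}
worst_x = max(errors, key=errors.get)  # the x where the error is greatest
assert max(errors.values()) < 0.01     # the approximation is already quite close
```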

Example (9)

If 2 percent of the books bound at a certain bindery have defective bindings, use the Poisson approximation to the binomial distribution to determine the probability that 5 of 400 books bound by this bindery will have defective bindings.

Solution

Substituting x = 5, λ = 400(0.02) = 8, and e^{−8} ≈ 0.00034 into p(·):

    p(5; 8) = 8^5 · 0.00034 / 5! ≈ 0.093

Alternatively, refer to Table II.
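Example 9 at full precision, compared with the exact binomial probability (a sketch, not from the slides; the slide's 0.093 uses the rounded table value e^{−8} ≈ 0.00034):

```python
from math import comb, exp, factorial

def poisson_pmf(x, lam):
    return lam**x * exp(-lam) / factorial(x)

# Example 9: lambda = 400 * 0.02 = 8.
p = poisson_pmf(5, 8)
assert abs(p - 0.093) < 2e-3  # matches the slide's rounded answer

# The exact binomial probability b(5; 400, 0.02) is close to the approximation.
b = comb(400, 5) * 0.02**5 * 0.98**395
assert abs(p - b) < 0.002
```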

Example (10)

Records show that the probability is 0.00005 that a car will have a flat tire while crossing a certain bridge. Use the Poisson distribution to approximate the binomial probabilities that, among 10,000 cars crossing this bridge,

1. exactly two will have a flat tire;
2. at most two will have a flat tire.

Example (11)

Use Figure 5 to rework the preceding example.
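Example 10 leaves the computation to the reader; a quick numerical answer under the Poisson approximation with λ = nθ = 10,000 · 0.00005 = 0.5 (a sketch, not from the slides):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    return lam**x * exp(-lam) / factorial(x)

lam = 10000 * 0.00005  # = 0.5

# (1) exactly two flat tires
p_two = poisson_pmf(2, lam)
assert abs(p_two - 0.0758) < 5e-4

# (2) at most two flat tires
p_at_most_two = sum(poisson_pmf(x, lam) for x in range(3))
assert abs(p_at_most_two - 0.9856) < 5e-4
```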

Mean and Variance of the Poisson Distribution

Since the Poisson distribution is derived from the binomial distribution by taking limits, its mean and variance follow from the binomial's.

Theorem

The mean and the variance of the Poisson distribution are given by

    µ = λ  and  σ² = λ

Since λ is defined as nθ, µ = λ follows directly. As θ → 0, σ² = nθ(1 − θ) → λ(1 − 0) = λ. We can also verify the theorem by computing the sum directly or via the MGF.

Proof.

If X is a Poisson random variable,

    µ = E(X) = Σ_{x=0}^{∞} x · λ^x e^{−λ} / x!
             = λ Σ_{x=1}^{∞} λ^{x−1} e^{−λ} / (x − 1)!
             = λ Σ_{y=0}^{∞} λ^y e^{−λ} / y!
             = λ

    M_X(t) = E(e^{tX}) = Σ_{x=0}^{∞} e^{tx} · λ^x e^{−λ} / x!
           = e^{−λ} Σ_{x=0}^{∞} (λe^t)^x / x!
           = e^{λ(e^t − 1)}

More Applications of the Poisson Distribution

The Poisson distribution can serve as a model for the number of successes that occur during a given time interval or in a specified region.

Example (12)

The average number of trucks arriving on any one day at a truck depot in a certain city is known to be 12. What is the probability that on a given day fewer than 9 trucks will arrive at this depot?

Example (13)

A certain kind of sheet metal has, on the average, five defects per 10 square feet. If we assume a Poisson distribution, what is the probability that a 15-square-foot sheet of the metal will have at least six defects?
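Both examples leave the arithmetic to the reader; numerical answers computed directly from the pmf (a sketch, not from the slides):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    return lam**x * exp(-lam) / factorial(x)

# Example 12: lambda = 12 arrivals/day; "fewer than 9 trucks" means X <= 8.
p12 = sum(poisson_pmf(x, 12) for x in range(9))
assert abs(p12 - 0.1550) < 5e-4

# Example 13: five defects per 10 sq ft scales to lambda = 5 * 1.5 = 7.5
# for a 15 sq ft sheet; "at least six defects" means X >= 6.
p13 = 1 - sum(poisson_pmf(x, 7.5) for x in range(6))
assert abs(p13 - 0.7586) < 5e-4
```

Note how the rate parameter scales with the size of the region: λ is proportional to the area (or time interval) observed.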

Homework

Work with your partner (in groups). Hand in the homework to the editor group on duty before 17:00 on Sunday. The group editor on duty shall organize the final answers and send the file of final answers to 1307455914@qq.com before next Tuesday.

HW: Chapter 5-

Questions?

