MATH 341: TAKEAWAYS

STEVEN J. MILLER

ABSTRACT. Below we summarize some items to take away from the class (as well as previous classes!). In particular, what are one-time tricks and methods, and what are general techniques to solve a variety of problems, as well as what we have used from various classes. Comments and additions welcome!

1. CALCULUS I AND II (MATH 103 AND 104)

We used a variety of results and techniques from 103 and 104:

(1) Standard integration theory: For us, the most important technique is integration by parts; one of many places we used this was in computing the moments of the Gaussian. Integration by parts is a very powerful technique, and is frequently used. While most of the time it is clear how to choose the functions $u$ and $dv$, sometimes we need to be a bit clever. For example, consider the second moment of the standard normal: $(2\pi)^{-1/2} \int_{-\infty}^{\infty} x^2 \exp(-x^2/2)\,dx$. The natural choices are to take $u = x^2$ or $u = \exp(-x^2/2)$, but neither of these works, as they lead to choices for $dv$ that do not have a closed-form integral. What we need to do is split the two "natural" functions up, and let $u = x$ and $dv = \exp(-x^2/2)\,x\,dx$. The reason is that while there is no closed-form expression for the anti-derivative of the standard normal, once we have $x\,dx$ instead of $dx$ we can obtain nice integrals. One final remark on integrating by parts: it is a key ingredient in the "Bring it over" method (which will be discussed below).

(2) Definition of the derivative: Recall

    $f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.$

In upper-level classes, the definition of the derivative is particularly useful when there is a split in the definition of a function. For example, consider

    $f(x) = \exp(-1/x^2)$ if $x \neq 0$, and $f(x) = 0$ if $x = 0$.

This function has all derivatives zero at $x = 0$, but is non-zero for $x \neq 0$. Thus the Taylor series does not converge in a neighborhood of positive length containing the origin. This function shows how different real analysis is from complex analysis.
Explicitly, here we have an infinitely differentiable function which is not equal to its Taylor series in a neighborhood of $x = 0$; if a complex function is differentiable once, it is infinitely differentiable and it equals its Taylor series in a neighborhood of that point.

Date: December 20, 2009.
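As a numerical illustration (a minimal sketch, not from the notes): every Taylor polynomial of this $f$ centered at the origin is identically zero, yet $f$ itself is visibly non-zero away from $0$.

```python
import math

def f(x):
    # f(x) = exp(-1/x^2) for x != 0, and f(0) = 0
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# All derivatives of f vanish at 0, so every Taylor polynomial centered
# at 0 is identically zero -- yet f(x) > 0 for every x != 0.
for x in [0.25, 0.5, 1.0]:
    print(x, f(x))  # the Taylor series predicts 0 at each of these points
```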
(3) Ratio, root and comparison tests: These are used to determine if a series or integral converges. We frequently used the geometric series formula $\sum_{n=0}^{\infty} x^n = 1/(1-x)$ if $|x| < 1$.

(4) Taylor series: Taylor expansions are very useful, allowing us to replace complicated functions (locally) by simpler ones. The moment generating function of a random variable is a Taylor series whose coefficients are the moments of the distribution. Another instance where we used this is in proving the Central Limit Theorem. The moment generating function of a sum of independent random variables is the product of the moment generating functions. To study a product, we summify it (we'll discuss this technique in much greater detail below). Thus we need to expand $\log(M_X(t)^N) = N \log M_X(t)$. As $M_X(0) = 1$, for small $t$ we just need to understand the expansion of $\log(1+u)$.

Taylor's Theorem: If $f$ is differentiable at least $n+1$ times on $[a,b]$, then for all $x \in [a,b]$, $f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k$ plus an error that is at most $\max_{a \le c \le x} |f^{(n+1)}(c)| \cdot \frac{|x-a|^{n+1}}{(n+1)!}$.

(5) L'Hopital's Rule: This is one of the most useful ways to compare growth rates of different functions. It works for ratios of differentiable functions such that either both tend to zero or both tend to $\infty$. We used this in class to see that, as $x \to \infty$, $(\log x)^A \ll x^B \ll e^x$ for any $A, B > 0$. (Recall $f(x) \ll g(x)$ means there is some $C$ such that for all $x$ sufficiently large, $f(x) \le C g(x)$.) We also used L'Hopital to take the derivatives of the troublesome function $h(x) = \exp(-1/x^2)$ for $x \neq 0$ and $0$ otherwise (this function is the key to why real analysis is so much harder than complex analysis).

2. MULTIVARIABLE CALCULUS (MATH 105/106)

(1) Fubini Theorem (or Fubini-Tonelli): Frequently we want to / need to justify interchanging two integrals (or an integral and a sum).
Doing such interchanges is one of the most frequent tricks in mathematics; whenever you see a double sum, a double integral, or a sum and an integral you should consider this. While we cannot always interchange orders, we can if the double sum (or double integral) of the absolute value of the summand (or the integrand) is finite. For example,

    $\int_{y=0}^{1} \left[ \int_{x=0}^{1} x e^{xy}\,dx \right] dy = \int_{x=0}^{1} \left[ \int_{y=0}^{1} x e^{xy}\,dy \right] dx = \int_{x=0}^{1} (e^x - 1)\,dx = e - 2.$    (2.1)

Note how much easier it is when we integrate with respect to $y$ first; we bypass having to use Integration by Parts. For completeness, we state:
Fubini's Theorem: Assume $f$ is continuous and $\int_a^b \int_c^d |f(x,y)|\,dx\,dy < \infty$. Then

    $\int_a^b \left[ \int_c^d f(x,y)\,dy \right] dx = \int_c^d \left[ \int_a^b f(x,y)\,dx \right] dy.$    (2.2)

Similar statements hold if we instead have

    $\sum_{n=n_0}^{\infty} \int_c^d f(x_n, y)\,dy,$    (2.3)

    $\sum_{n=n_0}^{\infty} \sum_{m=m_0}^{\infty} f(x_n, y_m).$    (2.4)

(2) Whenever you have a theorem, you should always explore what happens if you remove a condition. Frequently (though not always) the claim no longer holds; sometimes the claim is still true but the proof is harder. Rarely, but it can happen, removing a condition causes you to look at a problem in a new light, and find a simpler proof. We apply this principle to Fubini's theorem; specifically, we remove the finiteness condition and construct a counter-example. For simplicity, we give a sequence $a_{m,n}$ such that $\sum_m (\sum_n a_{m,n}) \neq \sum_n (\sum_m a_{m,n})$. For $m, n \ge 0$ let

    $a_{m,n} = 1$ if $m = n$, $\ -1$ if $m = n+1$, and $0$ otherwise.    (2.5)

We can show that the two different orders of summation yield different answers; if we sum over the columns first we get $0$ for each column, and then doing the sum of the column sums gives $0$; however, if we do the row sums first, then all the row sums vanish but the first (which is $1$), and hence the sum of the row sums is $1$, not $0$. The reason for this difference is that the sum of the absolute value of the terms diverges.

(3) Interchanging derivatives and sums: It is frequently useful to interchange a derivative and an infinite sum. The first place this is met is in proving the derivative of $e^x$ is $e^x$; using the series expansion for $e^x$, it is trivial to find the derivative if we can differentiate term by term and then add.

Interchanging differentiation and integration: Let $f(x,t)$ and $\partial f/\partial x$ be continuous on a rectangle $[x_0, x_1] \times [t_0, t_1]$ with $[a,b] \subset [t_0, t_1]$. Then

    $\frac{d}{dx} \int_a^b f(x,t)\,dt = \int_a^b \frac{\partial f}{\partial x}(x,t)\,dt.$    (2.6)

Frequently one wants to interchange differentiation and summation; this leads to the method of differentiating identities, which is extremely useful in computing moments of
probability distributions. For example, consider the identity

    $(p+q)^n = \sum_{k=0}^{n} \binom{n}{k} p^k q^{n-k}.$    (2.7)

Applying the operator $p \frac{d}{dp}$ to both sides we find

    $np(p+q)^{n-1} = \sum_{k=0}^{n} k \binom{n}{k} p^k q^{n-k}.$    (2.8)

Setting $q = 1 - p$ yields the mean of a binomial random variable:

    $np = \sum_{k=0}^{n} k \binom{n}{k} p^k (1-p)^{n-k}.$    (2.9)

It is very important that initially $p$ and $q$ are distinct, free variables, and only at the end do we set $q = 1 - p$.

(4) Dangers when interchanging: One has to be very careful in interchanging operations. Consider, for example, the family of probability densities $f_n(x)$, where $f_n$ is a triangular density on $[1/n, 3/n]$ with midpoint (i.e., maximum value) $n$. Each $f_n$ is continuous (as is the limit $f(x)$, which is identically $0$), and each $f_n$ is a probability density (as each integrates to $1$); however, the limit is identically $0$, and thus not a density! We can easily modify our example so that the limit is not continuous:

    $f_n(x) = nx$ if $0 \le x \le 1/n$; $\ 1$ if $1/n \le x \le 1/2$; $\ n\left(\frac{1}{2} + \frac{1}{n} - x\right)$ if $1/2 \le x \le 1/2 + 1/n$; $\ 0$ otherwise.    (2.10)

Note that $f_n(0) = 0$ for all $n$, but for any fixed $x$ with $0 < x \le 1/2$ we have $f_n(x) \to 1$; thus as we approach $0$ from above, in the limit we get $1$.

(5) Change of Variables Theorem: Let $V$ and $W$ be bounded open sets in $\mathbb{R}^n$. Let $h : V \to W$ be a 1-1 and onto map, given by

    $h(u_1, \ldots, u_n) = (h_1(u_1, \ldots, u_n), \ldots, h_n(u_1, \ldots, u_n)).$    (2.11)

Let $f : W \to \mathbb{R}$ be a continuous, bounded function. Then

    $\int_W f(x_1, \ldots, x_n)\,dx_1 \cdots dx_n = \int_V f(h(u_1, \ldots, u_n))\, |J(u_1, \ldots, u_n)|\, du_1 \cdots du_n,$    (2.12)

where $J$ is the Jacobian

    $J = \det \begin{pmatrix} \frac{\partial h_1}{\partial u_1} & \cdots & \frac{\partial h_n}{\partial u_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial h_1}{\partial u_n} & \cdots & \frac{\partial h_n}{\partial u_n} \end{pmatrix}.$    (2.13)
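The binomial-mean identity (2.9) obtained above by differentiating is easy to sanity-check numerically; a minimal sketch (the sample values $n = 10$, $p = 0.3$ are arbitrary choices, not from the notes):

```python
import math

# Check np = sum_{k=0}^{n} k * C(n,k) * p^k * (1-p)^(n-k) for sample values.
n, p = 10, 0.3
total = sum(k * math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
print(total, n * p)  # both equal 3.0 (up to floating point)
```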
We used this result to simplify the algebra in many problems by passing to an easier set of variables.

3. DIFFERENTIAL EQUATIONS (MATH 209)

(1) The method of Divine Inspiration and Difference Equations: Difference equations, such as the Fibonacci equation $a_{n+1} = a_n + a_{n-1}$, arise throughout nature. There is a rich theory when we have linear recurrence relations. To find a solution, we "guess" that $a_n = r^n$ and take linear combinations.

Specifically, let $k$ be a fixed integer and $c_1, \ldots, c_k$ given real numbers. Then the general solution of the difference equation

    $a_{n+1} = c_1 a_n + c_2 a_{n-1} + c_3 a_{n-2} + \cdots + c_k a_{n-k+1}$

is

    $a_n = \gamma_1 r_1^n + \cdots + \gamma_k r_k^n$

if the characteristic polynomial

    $r^k - c_1 r^{k-1} - c_2 r^{k-2} - \cdots - c_k = 0$

has $k$ distinct roots $r_1, \ldots, r_k$. Here the $\gamma_1, \ldots, \gamma_k$ are any $k$ real numbers; if initial conditions are given, these conditions determine these $\gamma_i$'s. If there are repeated roots, we add terms such as $n r^n, \ldots, n^{m-1} r^n$, where $m$ is the multiplicity of the root $r$.

For example, consider the equation $a_{n+1} = 5a_n - 6a_{n-1}$. In this case $k = 2$ and we find the characteristic polynomial is $r^2 - 5r + 6 = (r-2)(r-3)$, which clearly has roots $r_1 = 2$ and $r_2 = 3$. Thus the general solution is $a_n = \gamma_1 2^n + \gamma_2 3^n$. If we are given $a_0 = 1$ and $a_1 = 2$, this leads to the system of equations $1 = \gamma_1 + \gamma_2$ and $2 = \gamma_1 \cdot 2 + \gamma_2 \cdot 3$, which has the solution $\gamma_1 = 1$ and $\gamma_2 = 0$.

Applications include population growth (such as the Fibonacci equation) and why double-plus-one is a bad strategy in roulette.

4. ANALYSIS (MATH 301)

(1) Continuity: General continuity properties, in particular some of the $\epsilon$-$\delta$ arguments to bound quantities, are frequently used to prove results. Often we use these to study moments or other properties of densities. Most important, however, was probably when we can interchange operations, typically interchanging integrals, sums, or an infinite sum and a derivative.
For the derivative of the geometric series, this can be done by noting the tail is another geometric series; in general this is proved by estimating the contribution from the tail of the sum. See the multivariable calculus section for more comments on these subjects.

(2) Proofs by Induction: Induction is a terrific way to prove formulas for general $n$ if we have a conjecture as to what the answer should be. Assume for each positive integer $n$ we have a statement $P(n)$ which we desire to show is true for all $n$. $P(n)$ is true for all positive integers $n$ if the following two statements hold: (i) Basis Step: $P(1)$ is true; (ii) Inductive Step: whenever $P(n)$ is true, $P(n+1)$ is true. Such proofs are called proofs by induction or induction (or inductive) proofs.
The standard examples are to show results such as $\sum_{k=0}^{n} k = \frac{n(n+1)}{2}$. It turns out that $\sum_{k=0}^{n} k^d$ is a polynomial in $n$ of degree $d+1$ with leading coefficient $1/(d+1)$ (one can see that this is reasonable by using the integral test to replace the sum with an integral); however, the remaining coefficients of the polynomial are harder to find, and without them it is quite hard to run the induction argument for, say, $d = 2009$.

(3) Dirichlet's Pigeonhole principle: Let $A_1, A_2, \ldots, A_n$ be a collection of sets with the property that $A_1 \cup \cdots \cup A_n$ has at least $n+1$ elements. Then at least one of the sets $A_i$ has at least two elements. We frequently use the Pigeonhole principle to ensure that some event happens.

5. PROBABILITY THEORY (MATH 341)

5.1. Combinatorics.

(1) Combinatorics: There are several items to remember for combinatorial problems. The first is to be careful and avoid double counting. The second is that frequently a difficult sum can be interpreted two different ways; one of the interpretations is what we want, while the other is something we can do. We have seen many examples of this. One is that

    $\sum_{k=0}^{n} \binom{n}{k}^2 = \sum_{k=0}^{n} \binom{n}{k} \binom{n}{n-k}$

is the middle coefficient of $(x+y)^{2n}$, and thus equals $\binom{2n}{n}$.

(2) "Auxiliary lines": In geometry, one frequently encounters proofs where the authors add an auxiliary line not originally in the picture; once the line is added things are clear, but it is often a bit of a mystery as to how someone would think of adding a line in that place. In combinatorics we have an analogue of this. Consider the classic cookie problem: we wish to divide 10 identical cookies among 5 distinct people. One simple way to do this is to imagine we have 14 ($14 = 10 + 5 - 1$) cookies, and eat 4 of them.
This partitions the remaining cookies into 5 sets, with the first set going to the first person and so on. For example, if we have 10 cookies and 5 people, say we choose cookies 3, 4, 7 and 13 of the $10 + 5 - 1$ cookies; this corresponds to person 1 receiving two cookies, person 2 receiving zero, person 3 receiving two, person 4 receiving five and person 5 receiving one cookie. This implies that the answer to our problem is $\binom{10+5-1}{5-1}$, or in general $\binom{C+P-1}{P-1}$ for $C$ cookies and $P$ people.

(3) Find an interpretation: Consider the following sum: $\sum_{c=0}^{C} \binom{c+P-1}{P-1}$. By the arguments above, we are summing the number of ways of dividing $c$ cookies among $P$ people for $c \in \{0, \ldots, C\}$ (or we divide $C$ cookies among $P$ people, but we do not assume each cookie is given). A nice way to solve this is to imagine that there is a $(P+1)$st person who receives $C - c$ cookies, in which case this sum is now the same as counting the number of ways of dividing $C$ cookies among $P+1$ people where each cookie must be assigned to a person, or $\binom{C+P}{P}$. (See also the "tell a story" entry in §5.2 and the "convolution" entry in
§5.3.)

(4) Inclusion-Exclusion Principle: Suppose $A_1, A_2, \ldots, A_n$ is a collection of sets. Then the Inclusion-Exclusion Principle asserts that

    $\left| \bigcup_{i=1}^{n} A_i \right| = \sum_{i} |A_i| - \sum_{i<j} |A_i \cap A_j| + \sum_{i<j<k} |A_i \cap A_j \cap A_k| - \cdots.$

This has many uses for counting probabilities. We used it to determine the probability that a generic integer is square-free, as well as the probability a random permutation of $\{1, \ldots, n\}$ returns at least one element to its initial location.

(5) Binomial Theorem: We have

    $(x+y)^n = \sum_{k=0}^{n} \binom{n}{k} x^k y^{n-k} = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k;$

in probability we usually take $x = p$ and $y = 1-p$. The coefficients $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ have the interpretation as counting the number of ways of choosing $k$ objects from $n$ when order does not matter. A better definition of this coefficient is

    $\binom{n}{k} = \frac{n(n-1) \cdots (n-(k-1))}{k(k-1) \cdots 1}.$

The reason this definition is superior is that $\binom{3}{5}$ makes sense with this definition, and is just zero. One can easily show $\binom{n}{k} = 0$ whenever $k > n$, which makes sense with our combinatorial interpretation: there is no way to choose $k$ objects from $n$ when $k > n$, regardless of whether or not order matters.

5.2. General Techniques of Probability.

(1) Differentiating Identities: Equalities are the bread and butter of mathematics; differentiating identities allows us to generate infinitely many more from one, which is a very good deal! For example, consider the identity

    $(p+q)^n = \sum_{k=0}^{n} \binom{n}{k} p^k q^{n-k}.$    (5.1)

Applying the operator $p \frac{d}{dp}$ to both sides we find

    $np(p+q)^{n-1} = \sum_{k=0}^{n} k \binom{n}{k} p^k q^{n-k}.$    (5.2)

Setting $q = 1 - p$ yields the mean of a binomial random variable:

    $np = \sum_{k=0}^{n} k \binom{n}{k} p^k (1-p)^{n-k}.$    (5.3)

It is very important that initially $p$ and $q$ are distinct, free variables, and only at the end do we set $q = 1 - p$. Another example is differentiating $\sum_{n=0}^{\infty} x^n = 1/(1-x)$; applying the operator $x \frac{d}{dx}$ gives $\sum_{n=0}^{\infty} n x^n = x/(1-x)^2$. While we can prove the $2m$th moment of the
standard normal is $(2m-1)!!$ by induction, we can also do this with differentiating identities.

(2) Law of Total Probability: This is perhaps one of the most useful observations: $\text{Prob}(A^c) = 1 - \text{Prob}(A)$, where $A^c$ is the complementary event. It is frequently easier to compute the probability that something does not happen than the probability it does. Standard examples include hands of bridge or other card games.

(3) Fundamental Theorem of Calculus (cumulative distribution functions and densities): One of the most important uses of the Fundamental Theorem of Calculus is the relationship between the cumulative distribution function $F_X$ of a random variable $X$ and its density $f_X$. We have

    $F_X(x) = \text{Prob}(X \le x) = \int_{-\infty}^{x} f_X(t)\,dt.$

In particular, the Fundamental Theorem of Calculus implies that $F_X'(x) = f_X(x)$. This means that if we know the cumulative distribution function, we can essentially deduce the density. For example, let $X$ have the standard exponential density (so $f_X(x) = e^{-x}$ for $x \ge 0$ and $0$ otherwise) and set $Y = X^2$. Then for $y \ge 0$ we have

    $F_Y(y) = \text{Prob}(Y \le y) = \text{Prob}(X^2 \le y) = \text{Prob}(X \le \sqrt{y}) = F_X(\sqrt{y}).$

We now differentiate, using the Fundamental Theorem of Calculus and the Chain Rule, and find that for $y \ge 0$

    $f_Y(y) = F_X'(\sqrt{y}) \cdot \frac{1}{2\sqrt{y}} = \frac{e^{-\sqrt{y}}}{2\sqrt{y}}.$

(4) Binary (or indicator) random variables: For many problems, it is convenient to define a random variable to be 1 if the event of interest happens and 0 otherwise. This frequently allows us to reduce a complicated problem to many simpler problems. For example, consider a binomial process with parameters $n$ and $p$. We may view this as flipping a coin with probability $p$ of heads a total of $n$ times, and recording the number of heads. We may let $X_i = 1$ if the $i$th toss is heads and $0$ otherwise; then the total number of heads is $X = X_1 + \cdots + X_n$. In other words, we have represented a binomial random variable with parameters $n$ and $p$ as a sum of $n$ independent Bernoulli random variables.
This facilitates calculating quantities such as the mean or variance, as we now have $E[X] = nE[X_i] = np$ and $\text{Var}(X) = n\,\text{Var}(X_i) = np(1-p)$. Explicitly, to compute the mean we need to evaluate $E[X_i] = 1 \cdot p + 0 \cdot (1-p)$ and then multiply by $n$; this is significantly easier than directly evaluating the mean of the binomial random variable, which requires us to determine $\sum_{k=0}^{n} k \binom{n}{k} p^k (1-p)^{n-k}$.

(5) Linearity of Expectation: One of the worst complications in probability is that random variables might not be independent. This greatly complicates the analysis in a variety of cases; however, if all we care about is the expected value, these difficulties can vanish! The reason is that the expected value of a sum is the sum of the expected values; explicitly, if $X = X_1 + \cdots + X_n$ then $E[X] = E[X_1] + \cdots + E[X_n]$. One great example of this was in the coupon or prize problem. Imagine we have $n$ different prizes, and each day we are randomly given one and only one of the $n$ prizes. We assume the choice of prize is independent
of what we have, with each prize being chosen with probability $1/n$. How long will it take to have one of each prize? If we let $X_k$ denote the random variable which is how long we must wait, given that we have $k-1$ prizes, until we obtain the next new prize, then $X_k$ is a geometric random variable with parameter $p_k = \frac{n-(k-1)}{n}$ and expected value $\frac{1}{p_k} = \frac{n}{n-(k-1)}$. Thus the expected number of days we must wait until we have one of each prize is simply

    $E[X] = \sum_{k=1}^{n} E[X_k] = \sum_{k=1}^{n} \frac{n}{n-(k-1)} = n H_n,$

where $H_n = 1/1 + 1/2 + \cdots + 1/n$ is the $n$th harmonic number (and $H_n \approx \log n$ for $n$ large). Note we do not need to consider elaborate combinations or how the prizes are awarded. Of course, if we want to compute the variance or the median, it's a different story and we can't just use linearity of expectation.

(6) Bring it Over: We have seen two different applications of this method. One is in evaluating integrals. Let $I$ be a complicated integral. What often happens is that, after some number of integrations by parts, we obtain an expression of the form $I = c + aI$; so long as $a \neq 1$ we can rewrite this as $(1-a)I = c$ and then solve for $I$ ($I = \frac{c}{1-a}$). This frequently occurs for integrals involving sines and cosines, as two derivatives (or integrals) basically return us to our starting point. We also saw applications of this in memoryless games, to be described below.

(7) Memoryless games / processes: There are many situations where to analyze future behavior, we do not need to know how we got to a given state or configuration, but rather just what the current game state is. A terrific example is playing basketball, with the first person to make a basket winning. Say $A$ shoots first and always gets a basket with probability $p$, and $B$ shoots second and always makes a basket with probability $q$. $A$ and $B$ keep shooting, $A$ then $B$ then $A$ then $B$ and so on, until someone makes a basket. What is the probability $A$ wins?
The long way is to note that the probability $A$ wins on her $n$th shot is $((1-p)(1-q))^{n-1} p$, and thus

    $\text{Prob}(A \text{ wins}) = \sum_{n=1}^{\infty} ((1-p)(1-q))^{n-1} p;$

while we can evaluate this with the geometric series, there is an easier way. How can $A$ win? She can win by making her first basket, which happens with probability $p$. If she misses, then to win she needs $B$ to miss as well. At this point, it is $A$'s turn to shoot again, and it is as if we've just started the game. It does not matter that both have missed! Thus

    $\text{Prob}(A \text{ wins}) = p + (1-p)(1-q)\,\text{Prob}(A \text{ wins}).$

Note this is exactly the set-up for using "Bring it over", and we find

    $\text{Prob}(A \text{ wins}) = \frac{p}{1 - (1-p)(1-q)};$

in fact, we can use this to provide a proof of the geometric series formula! The key idea here is that once both miss, it is as if we've just started the game. This is a very fruitful way of looking at many problems.
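A quick Monte Carlo check of the closed form above (a hedged sketch; the values $p = 0.3$, $q = 0.5$ and the trial count are arbitrary choices, not from the notes):

```python
import random

def a_wins_once(p, q, rng):
    # Play one game: A shoots (succeeds with prob p), then B (prob q),
    # alternating until someone scores; return True exactly when A wins.
    while True:
        if rng.random() < p:
            return True
        if rng.random() < q:
            return False

p, q = 0.3, 0.5
rng = random.Random(341)
trials = 200_000
freq = sum(a_wins_once(p, q, rng) for _ in range(trials)) / trials
exact = p / (1 - (1 - p) * (1 - q))
print(freq, exact)  # empirical frequency vs p / (1 - (1-p)(1-q))
```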
(8) Standardization: Given a random variable $X$ with finite mean and variance, it is almost always a good idea to consider the standardized random variable $Z = (X - E[X])/\text{StDev}(X)$, especially if $X$ is a sum of independent random variables. The reason is that $Z$ now has mean 0 and variance 1, and this sets us up to compare quantities on the same scale. Equivalently, when we discuss the Central Limit Theorem everything will converge to the same distribution, a standard normal. We thus will only need to tabulate the probabilities for one normal, and not a plethora or even an infinitude. The situation is similar to logarithm tables. We only need to know logarithms in one base to know them in all, as the Change of Base formula gives $\log_b x = \log_c x / \log_c b$ (and thus if we know logarithms in base $c$, we know them in base $b$).

(9) Tell a story: One of our exam questions was whether or not $f(n) = \binom{n+r-1}{r-1} (1-p)^n p^r$ for $n \in \{0, 1, 2, \ldots\}$, $p \in (0,1)$ is a probability mass function. One way to approach a problem like this is to try and tell a story. How should we interpret the factors? Well, let's make $p$ the probability of getting a head when we toss a coin, or we could let it denote the probability of a success. Then $(1-p)^n p^r$ is the probability of a string with exactly $n$ failures and $r$ successes. There are $\binom{n+r}{n}$ ways to choose which $n$ of the $n+r$ places are the failures; however, we have $\binom{n+r-1}{r-1}$. What's going on? The difference is that we are not considering all possible strings, but only strings where the last event is a success. Thus we must have exactly $n$ failures (or exactly $r-1$ successes) in the first $n+r-1$ tosses followed by a success on trial $n+r$. By finding a story like this, we know it is a probability mass function; it is possible to directly sum this, but that is significantly harder.
(See also the "find an interpretation" entry in §5.1 and the "convolution" entry in §5.3.)

(10) Probabilistic Models: We can often gain intuition about complex but deterministic phenomena by employing a random model. For example, the Prime Number Theorem tells us that there are about $x/\log x$ primes at most $x$, leading to the estimation that any $n$ is prime with probability about $1/\log n$ (this is known as the Cramer model). Using this, we can estimate various number theoretic quantities. For example, let $X_n$ be a random binary indicator variable which is 1 with probability $\frac{1}{\log n}$ and 0 with probability $1 - \frac{1}{\log n}$. If we want to estimate how many numbers up to $x$ start a twin prime pair (i.e., $n$ and $n+2$ are both prime) then the answer would be given by the random variable $X = X_2 X_4 + X_3 X_5 + \cdots + X_{n-2} X_n$. As everything is independent and $E[X_n] = \frac{1}{\log n}$, we have

    $E[X] = \sum_{n \le x-2} E[X_n] E[X_{n+2}] = \sum_{n \le x-2} \frac{1}{\log(n) \log(n+2)} \approx \int_2^x \frac{dt}{\log^2 t} \approx \frac{x}{\log^2 x}.$

The actual (conjectured!) answer is about $C_2 x/\log^2 x$, where

    $C_2 = 2 \prod_{p \ge 3,\ p \text{ prime}} \frac{p(p-2)}{(p-1)^2}$

and the product is approximately $.66016$. What's important is to note that the simple heuristic did capture the correct $x$ dependence, namely a constant times $x/\log^2 x$. Of course, one must be very careful about how far one pushes and trusts these models. For example, it would predict there are about $C_3 x/\log^3 x$ prime triples $(p, p+2, p+4)$ up to $x$ for some non-zero $C_3$, whereas in actuality there
is only the triple $(3, 5, 7)$! The problem is this model misses arithmetic, and in any three consecutive odd numbers exactly one of them is divisible by 3.

(11) Simplifying sums: Often we encounter a sum which is related to a standard sum; this is particularly true in trying to evaluate moment generating functions. Some of the more common (and important) identities are

    $e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots$

    $\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots \quad (|x| < 1)$

    $\frac{1}{(1-x)^2} = \sum_{n=1}^{\infty} n x^{n-1} = 1 + 2x + 3x^2 + 4x^3 + \cdots + n x^{n-1} + \cdots \quad (|x| < 1)$

    $(x+y)^n = \sum_{k=0}^{n} \binom{n}{k} x^k y^{n-k} = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k.$

The goal is to "see" a complicated expression is one of the above (for a special choice of $x$). For example, let $X$ be a Poisson with parameter $\lambda$; thus $f_X(k) = \lambda^k e^{-\lambda}/k!$ if $k \in \{0, 1, 2, \ldots\}$ and 0 otherwise. Then

    $M_X(t) = E[e^{tX}] = \sum_{k=0}^{\infty} e^{tk} \frac{\lambda^k e^{-\lambda}}{k!} = e^{-\lambda} \sum_{k=0}^{\infty} \frac{(\lambda e^t)^k}{k!}.$

Fortunately, this looks like one of the expressions above, namely the one for $e^x$. Rearranging a bit gives

    $M_X(t) = e^{-\lambda} \exp\left(\lambda e^t\right) = \exp\left(\lambda e^t - \lambda\right).$

5.3. Moments.

(1) Convolution: Let $X$ and $Y$ be independent random variables with densities $f_X$ and $f_Y$. Then the density of $X + Y$ is

    $f_{X+Y}(u) = (f_X * f_Y)(u) := \int_{-\infty}^{\infty} f_X(t) f_Y(u-t)\,dt;$

we call $f_X * f_Y$ the convolution of $f_X$ and $f_Y$. While we can prove by brute force that $f_X * f_Y = f_Y * f_X$, a faster interpretation is obtained by noting that since addition is commutative, $X + Y = Y + X$ and hence $f_{X+Y} = f_{Y+X}$, which implies convolution is commutative. Convolutions give us a handle on the density for sums of independent random variables, and are a key ingredient in the proof of the Central Limit Theorem.
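For discrete random variables the convolution becomes a sum, $\text{Prob}(X+Y = s) = \sum_k \text{Prob}(X = k)\,\text{Prob}(Y = s-k)$; a small illustrative sketch (the two-dice example is my own choice, not from the notes):

```python
def convolve(pX, pY):
    # pZ[s] = sum_k pX[k] * pY[s - k]: the pmf of X + Y for independent
    # X, Y supported on {0, 1, ..., len-1}.
    pZ = [0.0] * (len(pX) + len(pY) - 1)
    for i, a in enumerate(pX):
        for j, b in enumerate(pY):
            pZ[i + j] += a * b
    return pZ

die = [0.0] + [1.0 / 6] * 6     # index = face value, faces 1..6
two_dice = convolve(die, die)   # two_dice[s] = Prob(sum of two dice = s)
print(two_dice[7])              # 6/36, the most likely total
```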
(2) Generating Functions: Given a sequence $\{a_n\}_{n=0}^{\infty}$, we define its generating function by

    $G_a(s) = \sum_{n=0}^{\infty} a_n s^n$

for all $s$ where the sum converges. For discrete random variables that take on values at the non-negative integers, an excellent choice is to take $a_n = \text{Prob}(X = n)$, and the result is called the generating function of the random variable $X$. Using convolutions, we find that if $X_1$ and $X_2$ are independent discrete random variables taking on non-negative integer values, with corresponding probability generating functions $G_{X_1}(s)$ and $G_{X_2}(s)$, then $G_{X_1 + X_2}(s) = G_{X_1}(s) G_{X_2}(s)$.

(3) Moment Generating Functions: For many probability problems, the moment generating function $M_X(t)$ is more convenient to study than the generating function. It is defined by $M_X(t) = E[e^{tX}]$, which implies (if everything converges!) that

    $M_X(t) = 1 + \mu_1' t + \frac{\mu_2' t^2}{2!} + \frac{\mu_3' t^3}{3!} + \cdots,$

where $\mu_k' = \left.\frac{d^k M_X(t)}{dt^k}\right|_{t=0}$ is the $k$th moment of $X$. Key properties of the moment generating function are: (i) Let $\alpha$ and $\beta$ be constants. Then

    $M_{\alpha X + \beta}(t) = e^{\beta t} M_X(\alpha t).$

(ii) If $X_1, \ldots, X_N$ are independent random variables with moment generating functions $M_{X_i}(t)$ which converge for $|t| < \delta$, then

    $M_{X_1 + \cdots + X_N}(t) = M_{X_1}(t) M_{X_2}(t) \cdots M_{X_N}(t).$

If the random variables all have the same moment generating function $M_X(t)$, then the right hand side becomes $M_X(t)^N$. Unfortunately the moment generating function does not always exist in a neighborhood of the origin (this can be seen by considering the Cauchy distribution); this is rectified by studying the characteristic function, $E[e^{itX}]$, which is essentially the Fourier transform of the density (that is, $E[e^{-2\pi i t X}]$).

(4) Moment Problem: When does a sequence of moments uniquely determine a probability density? If our distribution is discrete and takes on only finitely many (for definiteness, say $N$) values, then only finitely many moments are needed. If the density is continuous, however, infinitely many might not be enough.
Consider

    $f_1(x) = \frac{1}{x\sqrt{2\pi}}\, e^{-(\log^2 x)/2}$ for $x > 0$,

    $f_2(x) = f_1(x)\left[1 + \sin(2\pi \log x)\right].$

These two densities have the same integral moments (their $k$th moments are $e^{k^2/2}$ for $k$ a non-negative integer); while they also have the same half-integral moments, all other moments differ (thus there is no sequence of moments where they agree which has an accumulation point; see §6). Thus it is possible for two densities to have the same integral moments but differ.
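One can check numerically that the integral moments agree. Substituting $u = \log x$ turns the $k$th moment into $\int e^{ku}\varphi(u)\left[1 + \sin(2\pi u)\right] du$ with $\varphi$ the standard normal density; a minimal Riemann-sum sketch (the step size and cutoff are arbitrary choices):

```python
import math

def moment(k, with_sin):
    # k-th moment of f_1 (with_sin=False) or f_2 (with_sin=True), computed
    # after the substitution u = log x, via a plain Riemann sum.
    du = 0.001
    total = 0.0
    for i in range(-15000, 15001):
        u = i * du
        phi = math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
        w = 1 + math.sin(2 * math.pi * u) if with_sin else 1.0
        total += math.exp(k * u) * phi * w * du
    return total

for k in range(4):
    # the two moments agree, and both are close to exp(k^2 / 2)
    print(k, moment(k, False), moment(k, True), math.exp(k * k / 2))
```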
5.4. Approximations and Estimations.

(1) Cauchy-Schwarz inequality: For complex-valued functions $f$ and $g$,

    $\int_0^1 |f(x) g(x)|\,dx \le \left( \int_0^1 |f(x)|^2\,dx \right)^{1/2} \left( \int_0^1 |g(x)|^2\,dx \right)^{1/2}.$

One of my favorite applications of this was proving the absolute value of the covariance of $X$ and $Y$ is at most the product of the square-roots of the variances. The key step in the proof was writing the joint density $f_{X,Y}(x,y)$ as $\sqrt{f_{X,Y}(x,y)} \cdot \sqrt{f_{X,Y}(x,y)}$ and putting one factor with $x - \mu_X$ and one with $y - \mu_Y$. The reason we do this is we cannot directly integrate $x^2$ or $|x - \mu_X|^2$; we need to hit it with a probability density in order to have a chance of getting a finite value. This explains why we write the density as a product of its square root with its square root; it allows us to use Cauchy-Schwarz.

(2) Stirling's Formula: Almost any combinatorial problem involves factorials, either directly or through binomial coefficients. It is essential to be able to estimate $n!$ for large $n$. Stirling's formula says

    $n! = n^n e^{-n} \sqrt{2\pi n} \left( 1 + \frac{1}{12n} + \frac{1}{288n^2} - \frac{139}{51840n^3} + \cdots \right);$

thus for $n$ large, $n! \approx (n/e)^n \sqrt{2\pi n}$. There are many ways to prove this, the most common being complex analysis or stationary phase. We can get a ballpark estimate by "summifying": we have $n! = e^{\log n!}$, and $\log n! = \sum_{k=1}^{n} \log k \approx \int_1^n \log t\,dt = n \log n - n$, which recovers the main factor $(n/e)^n$.
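A quick numerical look at the leading-order approximation (a sketch; the sample values of $n$ are arbitrary choices): the ratio $n!/[(n/e)^n \sqrt{2\pi n}]$ tends to 1, at a rate consistent with the $1/(12n)$ correction term above.

```python
import math

def stirling(n):
    # Leading-order Stirling approximation: n! ~ (n/e)^n * sqrt(2*pi*n)
    return (n / math.e) ** n * math.sqrt(2 * math.pi * n)

for n in [5, 10, 20, 50]:
    ratio = math.factorial(n) / stirling(n)
    print(n, ratio, 1 + 1 / (12 * n))  # ratio is close to 1 + 1/(12n)
```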