Numerical Analysis II – Lecture Notes


Numerical Analysis II – Lecture Notes
Anthony Yeates
Durham University
March 12, 2018

Contents

0 What is numerical analysis?
  0.1 Direct or iterative methods?
  0.2 Course outline
1 Floating-point arithmetic
  1.1 Fixed-point numbers
  1.2 Floating-point numbers
  1.3 Significant figures
  1.4 Rounding error
  1.5 Loss of significance
2 Polynomial interpolation
  2.1 Taylor series
  2.2 Polynomial interpolation
  2.3 Lagrange form
  2.4 Newton/divided-difference form
  2.5 Interpolation error
  2.6 Convergence and the Chebyshev nodes
  2.7 Derivative conditions
3 Differentiation
  3.1 Higher-order finite differences
  3.2 Rounding error
  3.3 Richardson extrapolation
4 Nonlinear equations
  4.1 Interval bisection
  4.2 Fixed point iteration
  4.3 Orders of convergence
  4.4 Newton's method
  4.5 Newton's method for systems
  4.6 Aitken acceleration
  4.7 Quasi-Newton methods
5 Linear equations
  5.1 Triangular systems
  5.2 Gaussian elimination
  5.3 LU decomposition
  5.4 Pivoting
  5.5 Vector norms
  5.6 Matrix norms
  5.7 Conditioning
  5.8 Iterative methods
6 Least-squares approximation
  6.1 Orthogonality
  6.2 Discrete least squares
  6.3 QR decomposition
  6.4 Continuous least squares
  6.5 Orthogonal polynomials
7 Numerical integration
  7.1 Newton-Cotes formulae
  7.2 Composite Newton-Cotes formulae
  7.3 Exactness
  7.4 Gaussian quadrature

0 What is numerical analysis?

Numerical analysis is the study of algorithms for the problems of continuous mathematics.
(from The definition of numerical analysis, L. N. Trefethen)

The goal is to devise algorithms that give quick and accurate answers to mathematical problems for scientists and engineers, nowadays using computers. The word continuous is important: numerical analysis concerns real (or complex) variables, as opposed to discrete variables, which are the domain of computer science.

0.1 Direct or iterative methods?

Some problems can be solved by a finite sequence of elementary operations: a direct method.

Example Solve a system of simultaneous linear equations.
$x + 2y = 0$, $2x + \pi y = 1$, or in matrix form
$\begin{pmatrix} 1.00 & 2.00 \\ 2.00 & 3.14 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0.00 \\ 1.00 \end{pmatrix}$.
Gaussian elimination ...

Even in this simple example, we hit upon one problem: $\pi$ is a transcendental number that can't be represented exactly in a computer with finite memory. Instead, we will see that the computer uses a floating-point approximation, which incurs rounding error.

In direct methods, we only have to worry about rounding error, and computational time/memory. But, unfortunately, most problems of continuous mathematics cannot be solved by a finite algorithm.

Example Evaluate sin(1.2).
We could do this with a Maclaurin series:
$\sin(1.2) = 1.2 - \frac{(1.2)^3}{3!} + \frac{(1.2)^5}{5!} - \frac{(1.2)^7}{7!} + \cdots$
To 8 decimal places, we get the partial sum 0.93203909.
This is an iterative method – keep adding an extra term to improve the approximation. Iterative methods are the only option for the majority of problems in numerical analysis, and may actually be quicker even when a direct method exists.

► The word "iterative" derives from the Latin iterare, meaning "to repeat".
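As a small illustration (my own sketch, not from the notes), here is the iterative idea in Python: keep adding Maclaurin terms until they stop changing the partial sum beyond a chosen tolerance. The tolerance and function name are my own choices.

```python
import math

def sin_maclaurin(x, tol=1e-12):
    """Approximate sin(x) by summing Maclaurin terms until |term| < tol."""
    term = x      # first term of the series
    total = 0.0
    k = 0         # current term is x^(2k+1)/(2k+1)!
    while abs(term) > tol:
        total += term
        k += 1
        # next term: multiply by -x^2 / ((2k)(2k+1))
        term *= -x * x / ((2 * k) * (2 * k + 1))
    return total

print(sin_maclaurin(1.2))   # ~0.93203909 to 8 d.p.
print(math.sin(1.2))        # library value for comparison
```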

Even if our computer could do exact real arithmetic, there would still be an error resulting from stopping our iterative process at some finite point. This is called truncation error. We will be concerned with controlling this error and designing methods which converge as fast as possible.

Example The famous $\sqrt{2}$ tablet from the Yale Babylonian Collection (photo: Bill Casselman, http://www.math.ubc.ca/~cass/Euclid/ybc/ybc.html).
This is one of the oldest extant mathematical diagrams, dated approximately to 1800–1600 BC. The numbers along the diagonal of the square approximate $\sqrt{2}$ in base 60 (sexagesimal):
$1 + \frac{24}{60} + \frac{51}{60^2} + \frac{10}{60^3} = 1.41421296$ to 9 s.f.
This is already a good approximation to the true value $\sqrt{2} = 1.41421356\ldots$ – much better than could be achieved by ruler and pencil measurements!

► The other numbers on the tablet relate to the calculation of the diagonal length for a square of side 30, which is $30\sqrt{2} \approx 42 + \frac{25}{60} + \frac{35}{60^2}$.

Example Iterative method for $\sqrt{2}$.
It is probable that the Babylonians used something like the following iterative method. Start with an initial guess $x_0 = 1.4$. Then iterate the following formula:
$x_{k+1} = \frac{1}{2}\left(x_k + \frac{2}{x_k}\right)$
$x_1 = 1.4142857\ldots$
$x_2 = 1.414213564\ldots$

► This method is also known as Heron's method, after a Greek mathematician who described it in the first century AD.
► Notice that the method converges extremely rapidly! We will explain this later in the course when we discuss rootfinding for nonlinear equations.
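A minimal Python sketch of this iteration (the step count and variable names are my own choices):

```python
def heron_sqrt2(x0=1.4, steps=4):
    """Iterate x_{k+1} = (x_k + 2/x_k)/2, which converges rapidly to sqrt(2)."""
    x = x0
    for k in range(steps):
        x = 0.5 * (x + 2.0 / x)
        print(k + 1, x)
    return x

heron_sqrt2()
# 1 1.4142857142857144
# 2 1.4142135642135643
# 3 1.4142135623730951   (already at full double precision)
```

The number of correct digits roughly doubles at each step, which is the quadratic convergence discussed later for Newton's method.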

0.2 Course outline

In this course, we will learn how to do many common calculations quickly and accurately. In particular:

1. Floating-point arithmetic (How do we represent real numbers on a computer?)
2. Polynomial interpolation (How do we represent mathematical functions on a computer?)
3. Numerical differentiation (How do we calculate derivatives?)
4. Nonlinear equations (How do we find roots of nonlinear equations?)
5. Linear equations (How do we solve linear systems?)
6. Least-squares approximation (How do we find approximate solutions to overdetermined systems?)
7. Numerical integration (How do we calculate integrals?)

One area we won't cover is how to solve differential equations. This is such an important topic that it has its own course, Numerical Differential Equations III/IV.

1 Floating-point arithmetic

How do we represent numbers on a computer?

Integers can be represented exactly, up to some maximum size.

Example 64-bit integers.
If 1 bit (binary digit) is used to store the sign $\pm$, the largest possible number is
$1 \cdot 2^{62} + 1 \cdot 2^{61} + \ldots + 1 \cdot 2^{1} + 1 \cdot 2^{0} = 2^{63} - 1$.

► In Python, this is not really a worry. Even though the maximum size of a normal (32-bit) integer is $2^{31} - 1$, larger results will be automatically promoted to "long" integers.

By contrast, only a subset of real numbers within any given interval can be represented exactly.

1.1 Fixed-point numbers

In everyday life, we tend to use a fixed-point representation
$x = \pm(d_1 d_2 \cdots d_{k-1}.d_k \cdots d_n)_\beta$, where $d_1, \ldots, d_n \in \{0, 1, \ldots, \beta - 1\}$.  (1.1)
Here $\beta$ is the base (e.g. 10 for decimal arithmetic or 2 for binary).

Example $(10.1)_2 = 1 \cdot 2^1 + 0 \cdot 2^0 + 1 \cdot 2^{-1} = 2.5$.

If we require that $d_1 \neq 0$ unless $k = 2$, then every number has a unique representation of this form, except for infinite trailing sequences of digits $\beta - 1$.

Example $3.1999\cdots = 3.2$.

1.2 Floating-point numbers

Computers use a floating-point representation. Only numbers in a floating-point number system $F \subset \mathbb{R}$ can be represented exactly, where
$F = \left\{ \pm(0.d_1 d_2 \cdots d_m)_\beta \, \beta^e \;:\; \beta, d_i, e \in \mathbb{Z},\ 0 \le d_i \le \beta - 1,\ e_{\min} \le e \le e_{\max} \right\}.$  (1.2)
Here $(0.d_1 d_2 \cdots d_m)_\beta$ is called the fraction (or significand or mantissa), $\beta$ is the base, and $e$ is the exponent. This can represent a much larger range of numbers than a fixed-point system of the same size, although at the cost that the numbers are not equally spaced. If $d_1 \neq 0$ then each number in $F$ has a unique representation and $F$ is called normalised.

Example Floating-point number system with $\beta = 2$, $m = 3$, $e_{\min} = -1$, $e_{\max} = 2$.
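To see what this toy system contains, here is a short Python sketch (my own, not from the notes) that lists all its positive normalised numbers exactly; the uneven spacing discussed next is visible in the output.

```python
from fractions import Fraction

def toy_system(beta=2, m=3, emin=-1, emax=2):
    """All positive normalised numbers (0.d1...dm)_beta * beta^e with d1 != 0."""
    numbers = set()
    for e in range(emin, emax + 1):
        # enumerate the fraction d1...dm as an integer, with leading digit d1 != 0
        for frac_int in range(beta**(m - 1), beta**m):
            frac = Fraction(frac_int, beta**m)
            numbers.add(frac * Fraction(beta)**e)
    return sorted(numbers)

print([str(q) for q in toy_system()])
# ['1/4', '5/16', '3/8', '7/16', '1/2', '5/8', '3/4', '7/8',
#  '1', '5/4', '3/2', '7/4', '2', '5/2', '3', '7/2']
```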

► Notice that the spacing between numbers jumps by a factor $\beta$ at each power of $\beta$. The largest possible number is $(0.111)_2 \times 2^2 = (\frac12 + \frac14 + \frac18)(4) = \frac72$. The smallest non-zero number is $(0.100)_2 \times 2^{-1} = (\frac12)(\frac12) = \frac14$.

Example IEEE standard (1985) for double-precision (64-bit) arithmetic.
Here $\beta = 2$, and there are 52 bits for the fraction, 11 for the exponent, and 1 for the sign. The actual format used is
$\pm(1.d_1 \cdots d_{52})_2 \times 2^{e-1023} = \pm(0.1 d_1 \cdots d_{52})_2 \times 2^{e-1022}$, where $e = (e_1 e_2 \cdots e_{11})_2$.
When $\beta = 2$, the first digit of a normalised number is always 1, so doesn't need to be stored in memory. The exponent bias of 1022 means that the actual exponents are in the range $-1022$ to $1025$, since $e \in [0, 2047]$. Actually the exponents $-1022$ and $1025$ are used to store 0 and $\infty$ respectively.
The smallest non-zero number in this system is $(0.1)_2 \times 2^{-1021} \approx 2.225 \times 10^{-308}$, and the largest number is $(0.1\cdots1)_2 \times 2^{1024} \approx 1.798 \times 10^{308}$.

► IEEE stands for Institute of Electrical and Electronics Engineers. This is the default system in Python/numpy. The automatic 1 is sometimes called the "hidden bit". The exponent bias avoids the need to store the sign of the exponent.

Numbers outside the finite set $F$ cannot be represented exactly. If a calculation falls below the lower non-zero limit (in absolute value), it is called underflow, and is usually set to 0. If it falls above the upper limit, it is called overflow, and usually results in a floating-point exception.

► e.g. in Python, 2.0**1025 leads to an exception.
► e.g. Ariane 5 rocket failure (1996). The maiden flight ended in failure. Only 40 seconds after initiation, at altitude 3700 m, the launcher veered off course and exploded. The cause was a software exception during data conversion from a 64-bit float to a 16-bit integer. The converted number was too large to be represented, causing an exception.
► In IEEE arithmetic, some numbers in the "zero gap" can be represented using $e = 0$, since only two possible fraction values are needed for 0. The other fraction values may be used with first (hidden) bit 0 to store a set of so-called subnormal numbers.

The mapping from $\mathbb{R}$ to $F$ is called rounding and denoted $\mathrm{fl}(x)$. Usually it is simply the nearest number in $F$ to $x$. If $x$ lies exactly midway between two numbers in $F$, a method of breaking ties is required. The IEEE standard specifies round to nearest even – i.e., take the neighbour with last digit 0 in the fraction.

► This avoids statistical bias or prolonged drift.

Example Our toy system from earlier.
$\frac98 = (1.001)_2$ has neighbours $1 = (0.100)_2 \times 2$ and $\frac54 = (0.101)_2 \times 2$, so is rounded down to 1.
$\frac{11}{8} = (1.011)_2$ has neighbours $\frac54 = (0.101)_2 \times 2$ and $\frac32 = (0.110)_2 \times 2$, so is rounded up to $\frac32$.
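These limits can be inspected directly in Python/numpy; a quick sketch of my own (printed values are what a typical numpy build reports):

```python
import numpy as np

info = np.finfo(np.float64)
print(info.max)    # ~1.7976931348623157e+308, largest double
print(info.tiny)   # ~2.2250738585072014e-308, smallest normalised double
print(info.eps)    # ~2.220446049250313e-16, gap between 1 and the next double

# Overflow: plain Python floats raise an exception for 2.0**1025,
# whereas numpy by default warns and returns inf.
try:
    2.0 ** 1025
except OverflowError as err:
    print("OverflowError:", err)

print(np.float64(2.0) ** 1025)   # inf (with a RuntimeWarning)
```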

► e.g. Vancouver stock exchange index. In 1982, the index was established at 1000. By November 1983, it had fallen to 520, even though the exchange seemed to be doing well. Explanation: the index was rounded down to 3 digits at every recomputation. Since the errors were always in the same direction, they added up to a large error over time. Upon recalculation, the index doubled!

1.3 Significant figures

When doing calculations without a computer, we often use the terminology of significant figures. To count the number of significant figures in a number $x$, start with the first non-zero digit from the left, and count all the digits thereafter, including final zeros if they are after the decimal point.

Example 3.1056, 31.050, 0.031056, 0.031050, and 3105.0 all have 5 significant figures (s.f.).

To round $x$ to $n$ s.f., replace $x$ by the nearest number with $n$ s.f. An approximation $\hat{x}$ of $x$ is "correct to $n$ s.f." if both $\hat{x}$ and $x$ round to the same number to $n$ s.f.

1.4 Rounding error

If $x$ lies between the smallest non-zero number in $F$ and the largest number in $F$, then
$\mathrm{fl}(x) = x(1 + \delta)$,  (1.3)
where the relative error incurred by rounding is
$\delta = \dfrac{\mathrm{fl}(x) - x}{x}$.  (1.4)

► Relative errors are often more useful because they are scale invariant. E.g., an error of 1 hour is irrelevant in estimating the age of this lecture theatre, but catastrophic in timing your arrival at the lecture.

Now $x$ may be written as $x = \pm(0.d_1 d_2 \cdots)_\beta \, \beta^e$ for some $e \in [e_{\min}, e_{\max}]$, but the fraction will not terminate after $m$ digits if $x \notin F$. However, this fraction will differ from that of $\mathrm{fl}(x)$ by at most $\frac12 \beta^{-m}$, so
$|\mathrm{fl}(x) - x| \le \tfrac12 \beta^{-m} \beta^e \quad\Longrightarrow\quad |\delta| \le \tfrac12 \beta^{1-m}$.  (1.5)
Here we used that the fractional part of $x$ is at least $(0.1)_\beta = \beta^{-1}$. The number $\epsilon_M = \frac12 \beta^{1-m}$ is called the machine epsilon (or unit roundoff), and is independent of $x$. So the relative rounding error satisfies
$|\delta| \le \epsilon_M$.  (1.6)

► Sometimes the machine epsilon is defined without the factor $\frac12$. For example, in Python, print(np.finfo(np.float64).eps).
► The name "unit roundoff" arises because $\beta^{1-m}$ is the distance between 1 and the next number in the system.
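A small helper implementing the rounding rule above (my own sketch, not part of the notes; it ignores edge cases where log10 sits exactly on a power of ten):

```python
import math

def round_sf(x, n):
    """Round x to n significant figures (0.0 is returned unchanged)."""
    if x == 0:
        return 0.0
    # position of the leading non-zero digit, e.g. 3105.6 -> 3, 0.031056 -> -2
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

print(round_sf(0.031056, 4))   # 0.03106
print(round_sf(3105.6, 4))     # 3106.0
```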

Example For our system with $\beta = 2$, $m = 3$, we have $\epsilon_M = \frac12 \cdot 2^{1-3} = \frac18$. For IEEE double precision, we have $\beta = 2$ and $m = 53$ (including the hidden bit), so $\epsilon_M = \frac12 \cdot 2^{1-53} = 2^{-53} \approx 1.11 \times 10^{-16}$.

When adding/subtracting/multiplying/dividing two numbers in $F$, the result will not be in $F$ in general, so must be rounded.

Example Our toy system again ($\beta = 2$, $m = 3$, $e_{\min} = -1$, $e_{\max} = 2$).
Let us multiply $x = \frac58$ and $y = \frac78$. We have
$xy = \frac{35}{64} = \frac12 + \frac{1}{32} + \frac{1}{64} = (0.100011)_2$.
This has too many significant digits to represent in our system, so the best we can do is round the result to $\mathrm{fl}(xy) = (0.100)_2 = \frac12$.

► Typically additional digits are used during the computation itself, as in our example.

For $+$, $-$, $\times$, $\div$, IEEE standard arithmetic requires rounded exact operations, so that
$\mathrm{fl}(x \odot y) = (x \odot y)(1 + \delta)$, $\quad |\delta| \le \epsilon_M$.  (1.7)

1.5 Loss of significance

You might think that (1.7) guarantees the accuracy of calculations to within $\epsilon_M$, but this is true only if $x$ and $y$ are themselves exact. In reality, we are probably starting from $\bar{x} = x(1 + \delta_1)$ and $\bar{y} = y(1 + \delta_2)$, with $|\delta_1|, |\delta_2| \le \epsilon_M$. In that case, there is an error even before we round the result, since
$\bar{x} + \bar{y} = x(1 + \delta_1) + y(1 + \delta_2)$  (1.8)
$\qquad\quad = (x + y)\left(1 + \dfrac{x\delta_1 + y\delta_2}{x + y}\right)$.  (1.9)
If the correct answer $x + y$ is very small, then there can be an arbitrarily large relative error in the result, compared to the errors in the initial $\bar{x}$ and $\bar{y}$. In particular, this relative error can be much larger than $\epsilon_M$. This is called loss of significance, and is a major cause of errors in floating-point calculations.

Example Quadratic formula for solving $x^2 - 56x + 1 = 0$.
To 4 s.f., the roots are $x_1 = 28 + \sqrt{783} \approx 55.98$, $x_2 = 28 - \sqrt{783} \approx 0.01786$. However, working to 4 s.f. we would compute $\sqrt{783} \approx 27.98$, which would lead to the results
$\bar{x}_1 = 55.98$, $\quad \bar{x}_2 = 0.02000$.
The smaller root is not correct to 4 s.f., because of cancellation error. One way around this is to note that $x^2 - 56x + 1 = (x - x_1)(x - x_2)$, and compute $x_2$ from $x_2 = 1/x_1$, which gives the correct answer.
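A quick numerical sketch of my own, mimicking the 4 s.f. calculation by rounding only the square root; the fix uses the fact that the product of the roots equals the constant term 1:

```python
import math

sqrt783_4sf = round(math.sqrt(783), 2)   # 27.98, i.e. sqrt(783) to 4 s.f.

x1 = 28 + sqrt783_4sf        # 55.98: no cancellation, fine
x2_naive = 28 - sqrt783_4sf  # 0.02: nearly equal numbers cancel, only 1 s.f. survives
x2_stable = 1 / x1           # ~0.017864, correct to 4 s.f., since x1*x2 = 1

print(x1, x2_naive, x2_stable)
print(28 - math.sqrt(783))   # true small root, ~0.0178628
```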

► Note that the error crept in when we rounded $\sqrt{783}$ to 27.98, because this removed digits that would otherwise have been significant after the subtraction.

Example Evaluate $f(x) = \mathrm{e}^x - \cos(x) - x$ for $x$ very near zero.
Let us plot this function in the range $-5 \times 10^{-8} \le x \le 5 \times 10^{-8}$ – even in IEEE double precision arithmetic we find significant errors, as shown by the blue curve. The red curve shows the correct result approximated using the Taylor series
$f(x) = \left(1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\right) - \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) - x \approx x^2 + \frac{x^3}{6}$.
This avoids subtraction of nearly equal numbers.

► We will look in more detail at polynomial approximations in the next section.

Note that floating-point arithmetic violates many of the usual rules of real arithmetic, such as $(a + b) + c = a + (b + c)$.

Example In 2-digit decimal arithmetic,
$\mathrm{fl}\big[(5.9 + 5.5) + 0.4\big] = \mathrm{fl}\big[\mathrm{fl}(11.4) + 0.4\big] = \mathrm{fl}(11.0 + 0.4) = 11.0$,
$\mathrm{fl}\big[5.9 + (5.5 + 0.4)\big] = \mathrm{fl}\big[5.9 + 5.9\big] = \mathrm{fl}(11.8) = 12.0$.

Example The average of two numbers.
In $\mathbb{R}$, the average of two numbers always lies between the numbers. But if we work to 3 decimal digits,
$\mathrm{fl}\left(\dfrac{5.01 + 5.02}{2}\right) = \dfrac{\mathrm{fl}(10.03)}{2} = \dfrac{10.0}{2} = 5.00$.
The moral of the story is that sometimes care is needed.
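A sketch of my own comparing the two evaluations at a single tiny $x$ (the value of $x$ is my choice; the exact size of the error will vary):

```python
import math

def f_direct(x):
    # direct evaluation: e^x and cos(x) are both ~1, so their difference cancels
    return math.exp(x) - math.cos(x) - x

def f_taylor(x):
    # leading terms of the series: x^2 + x^3/6 (higher terms negligible here)
    return x * x + x**3 / 6

x = 3e-8
print(f_direct(x))   # contaminated by rounding error, large relative error
print(f_taylor(x))   # ~9.0e-16, correct to many digits
```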

2 Polynomial interpolation

How do we represent mathematical functions on a computer?

If $f$ is a polynomial of degree $n$,
$f(x) = p_n(x) = a_0 + a_1 x + \ldots + a_n x^n$,  (2.1)
then we only need to store the $n + 1$ coefficients $a_0, \ldots, a_n$. Operations such as taking the derivative or integrating $f$ are also convenient. The idea in this chapter is to find a polynomial that approximates a general function $f$. For a continuous function $f$ on a bounded interval, this is always possible if you take a high enough degree polynomial:

Theorem 2.1 (Weierstrass Approximation Theorem, 1885). For any $f \in C([0, 1])$ and any $\epsilon > 0$, there exists a polynomial $p(x)$ such that
$\max_{0 \le x \le 1} |f(x) - p(x)| \le \epsilon$.

► This may be proved using an explicit sequence of polynomials, called Bernstein polynomials. The proof is beyond the scope of this course, but see the extra handout for an outline.

If $f$ is not continuous, then something other than a polynomial is required, since polynomials can't handle asymptotic behaviour.

► To approximate functions like $1/x$, there is a well-developed theory of rational function interpolation, which is beyond the scope of this course.

In this chapter, we look for a suitable polynomial $p_n$ by interpolation – that is, requiring $p_n(x_i) = f(x_i)$ at a finite set of points $x_i$, usually called nodes. Sometimes we will also require the derivative(s) of $p_n$ to match those of $f$. In Chapter 6 we will see an alternative approach, appropriate for noisy data, where the overall error $|f(x) - p_n(x)|$ is minimised, without requiring $p_n$ to match $f$ at specific points.

2.1 Taylor series

A truncated Taylor series is (in some sense) the simplest interpolating polynomial since it uses only a single node $x_0$, although it does require $p_n$ to match both $f$ and some of its derivatives.

Example Calculating sin(0.1) to 6 s.f. by Taylor series.
We
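The extract breaks off at the start of this worked example; a minimal Python sketch of the calculation it sets up (the degree-5 truncation and comparison against math.sin are my own choices):

```python
import math

# Truncated Taylor (Maclaurin) series for sin about x0 = 0:
#   sin(x) ~ x - x^3/3! + x^5/5!
x = 0.1
p5 = x - x**3 / math.factorial(3) + x**5 / math.factorial(5)

print(p5)             # 0.09983341666666667
print(math.sin(0.1))  # 0.09983341664682815
# The first neglected term is x^7/7! ~ 2e-11, so p5 already gives sin(0.1) to 6 s.f.
```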
