Numerical Analysis Module 4: Solving Linear Algebraic Equations


Numerical Analysis Module 4
Solving Linear Algebraic Equations

Sachin C. Patwardhan
Dept. of Chemical Engineering, Indian Institute of Technology, Bombay
Powai, Mumbai, 400 076, India.
Email: sachinp@iitb.ac.in

Contents

1 Introduction
2 Existence of Solutions
3 Direct Solution Techniques
  3.1 Gaussian Elimination and LU Decomposition
  3.2 Number of Computations in Direct Methods
4 Direct Methods for Solving Sparse Linear Systems
  4.1 Block Diagonal Matrices
  4.2 Thomas Algorithm for Tridiagonal and Block Tridiagonal Matrices [2]
  4.3 Triangular and Block Triangular Matrices
  4.4 Solution of a Large System By Partitioning
5 Iterative Solution Techniques
  5.1 Iterative Algorithms
    5.1.1 Jacobi Method
    5.1.2 Gauss-Seidel Method
    5.1.3 Relaxation Method
  5.2 Convergence Analysis of Iterative Methods [3, 2]
    5.2.1 Vector-Matrix Representation
    5.2.2 Iterative Scheme as a Linear Difference Equation
    5.2.3 Convergence Criteria for Iteration Schemes
6 Optimization Based Methods
  6.1 Gradient Search Method
  6.2 Conjugate Gradient Method
7 Matrix Conditioning and Behavior of Solutions
  7.1 Motivation [3]
  7.2 Condition Number [3]
    7.2.1 Case: Perturbations in vector b [3]
    7.2.2 Case: Perturbation in matrix A [3]
    7.2.3 Computation of condition number
8 Summary
9 Appendix A: Behavior of Solutions of Linear Difference Equations
10 Appendix B: Theorems on Convergence of Iterative Schemes
11 Appendix C: Steepest Descent / Gradient Search Method
  11.1 Gradient / Steepest Descent / Cauchy's method
  11.2 Line Search: One Dimensional Optimization

1 Introduction

The central problem of linear algebra is the solution of linear equations of the type

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1    (1)
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2    (2)
\vdots    (3)
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m    (4)

which can be expressed in vector notation as follows

Ax = b    (5)

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & \cdots & \cdots & a_{mn} \end{bmatrix}    (6)

where x ∈ R^n, b ∈ R^m and A ∈ R^m × R^n, i.e. A is an m × n matrix. Here, m represents the number of equations while n represents the number of unknowns. Three possible situations arise while developing mathematical models:

Case (m = n): the system may have a unique solution / no solution / multiple solutions depending on the rank of matrix A and on vector b.

Case (m ≠ n): the system either has no solution or has infinitely many solutions.

Note that when there is no solution, it is possible to find the projection of b in the column space of A. This case was dealt with in the Lecture Notes on Problem Discretization by Approximation Theory. In these lecture notes, we are interested in the case when m = n, particularly when the number of equations is large. A toolkit to solve equations of this type is at the heart of numerical analysis. Before we present the numerical methods to solve equation (5), we discuss the conditions under which solutions exist. We then proceed to develop direct and iterative methods for solving large scale problems. We later discuss numerical conditioning of a matrix and its relation to errors that can arise in computing numerical solutions.

2 Existence of Solutions

Consider the following system of equations

\begin{bmatrix} 1 & 2 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}    (7)

Figure 1: Linear algebraic equations: (a) Row viewpoint and (b) Column viewpoint

There are two ways of interpreting the above matrix-vector equation geometrically.

Row viewpoint [3]: If we consider the two equations separately as

x + 2y = \begin{bmatrix} 1 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 2    (8)

2x + 2y = \begin{bmatrix} 2 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 1    (9)

then each one is a line in the x-y plane and solving this set of equations simultaneously can be interpreted as finding the point of their intersection (see Figure 1 (a)).

Column viewpoint [3]: We can interpret the equation as a linear combination of the column vectors, i.e. as vector addition

x \begin{bmatrix} 1 \\ 2 \end{bmatrix} + y \begin{bmatrix} 2 \\ 2 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}    (10)

Thus, the system of simultaneous equations can be looked upon as one vector equation, i.e. addition or linear combination of two vectors (see Figure 1 (b)).

Now consider the following two linear equations

x + y = 2    (11)
3x + 3y = 4    (12)

This is clearly an inconsistent case and has no solution, as the row vectors are linearly dependent. In the column picture, no scalar multiple of

v = [1  3]^T

can be found such that αv = [2  4]^T. Thus, in a singular case

Row viewpoint fails ⟺ Column viewpoint fails

i.e. if the two lines fail to meet in the row viewpoint, then vector b cannot be expressed as a linear combination of the column vectors in the column viewpoint [3].

Now, consider a general system of linear equations Ax = b where A is an n × n matrix.

Row viewpoint: Let A be represented as

A = \begin{bmatrix} (r^{(1)})^T \\ (r^{(2)})^T \\ \vdots \\ (r^{(n)})^T \end{bmatrix}    (13)

where (r^{(i)})^T represents the i'th row of matrix A. Then Ax = b can be written as n equations

\begin{bmatrix} (r^{(1)})^T x \\ (r^{(2)})^T x \\ \vdots \\ (r^{(n)})^T x \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}    (14)

Each of these equations (r^{(i)})^T x = b_i represents a hyperplane in R^n (i.e. a line in R^2, a plane in R^3 and so on). The solution of Ax = b is the point x at which all these hyperplanes intersect (if at all they intersect in one point).

Column viewpoint: Let matrix A be represented as

A = [ c^{(1)}  c^{(2)}  \cdots  c^{(n)} ]

where c^{(i)} represents the i'th column of A. Then we can look at Ax = b as one vector equation

x_1 c^{(1)} + x_2 c^{(2)} + \cdots + x_n c^{(n)} = b    (15)

Components of the solution vector x tell us how to combine the column vectors to obtain vector b.

In the singular case, the n hyperplanes have no point in common or, equivalently, the n column vectors are not linearly independent. Thus, both these geometric interpretations are consistent with each other.

When matrix A operates on a vector x ∈ R(A^T) (i.e. a vector belonging to the row space of A), it produces a vector Ax ∈ R(A) (i.e. a vector in the column space of A). Thus, given a vector b, a solution x of Ax = b exists if and only if b ∈ R(A). The solution is unique if the columns of A are linearly independent and the null space of A contains only the origin, i.e. N(A) ≡ {0}. A non-zero null space is obtained only when the columns of A are linearly dependent and the matrix A is not invertible. In such a case, we end up either with an infinite number of solutions, when b ∈ R(A), or with no solution, when b ∉ R(A).
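The condition b ∈ R(A) can be checked numerically by comparing the rank of A with the rank of the augmented matrix [A | b]. The following sketch (assuming NumPy is available; the test matrices are the small examples used above) illustrates this test for the consistent system (7) and the inconsistent system (11)-(12):

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b by comparing rank(A) with rank([A | b])."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    n = A.shape[1]
    if rank_A < rank_Ab:
        return "no solution (b is not in the column space of A)"
    if rank_A == n:
        return "unique solution"
    return "infinitely many solutions"

# Consistent example, equation (7)
print(classify_system([[1, 2], [2, 2]], [2, 1]))   # unique solution

# Inconsistent example, equations (11)-(12)
print(classify_system([[1, 1], [3, 3]], [2, 4]))   # no solution
```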

3 Direct Solution Techniques

Methods for solving linear algebraic equations can be categorized as direct and iterative schemes. There are several methods which directly solve equation (5). Prominent among these are Cramer's rule, Gaussian elimination and QR factorization. As indicated later, Cramer's rule is unsuitable for computer implementation and is not discussed here. Among the direct methods, we only present Gaussian elimination here in detail.

3.1 Gaussian Elimination and LU Decomposition

Gaussian elimination is arguably the most used method for solving a set of linear algebraic equations. It makes use of the fact that a solution of a special system of linear equations, namely a system involving a triangular matrix, can be constructed very easily. For example, consider a system of linear equations given by the following matrix equation

U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & u_{nn} \end{bmatrix}    (16)

Here, U is an upper triangular matrix such that all elements below the main diagonal are zero and all the diagonal elements are non-zero, i.e. u_{ii} ≠ 0 for all i. To solve the system Ux = \hat{b}, one can start from the last equation

x_n = \hat{b}_n / u_{nn}    (17)

and then proceed as follows

x_{n-1} = \frac{1}{u_{n-1,n-1}} \left[ \hat{b}_{n-1} - u_{n-1,n} x_n \right]

x_{n-2} = \frac{1}{u_{n-2,n-2}} \left[ \hat{b}_{n-2} - u_{n-2,n-1} x_{n-1} - u_{n-2,n} x_n \right]

In general, for the i'th element x_i, we can write

x_i = \frac{1}{u_{ii}} \left[ \hat{b}_i - \sum_{j=i+1}^{n} u_{ij} x_j \right]    (18)

where i = n-1, n-2, ..., 1. Since we proceed from i = n to i = 1, this set of calculations is referred to as back substitution.
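As a concrete illustration, the back substitution recursion (17)-(18) can be coded in a few lines. This is a minimal sketch (the function name and the use of NumPy are my own choices, not part of the notes); the test matrix happens to be the triangular system obtained in the worked example later in this subsection:

```python
import numpy as np

def back_substitution(U, b_hat):
    """Solve U x = b_hat for upper triangular U with nonzero diagonal,
    using equations (17)-(18): proceed from i = n down to i = 1."""
    n = len(b_hat)
    x = np.zeros(n)
    x[n - 1] = b_hat[n - 1] / U[n - 1, n - 1]                    # equation (17)
    for i in range(n - 2, -1, -1):                               # i = n-1, ..., 1 (0-based here)
        x[i] = (b_hat[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]   # equation (18)
    return x

U = np.array([[2.0,  1.0,   1.0],
              [0.0, -1.0,  -4.0],
              [0.0,  0.0, -12.0]])
b_hat = np.array([2.0, 3.0, 12.0])
print(back_substitution(U, b_hat))   # expected: [ 1.  1. -1.]
```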

Thus, the solution procedure for solving this system of equations involving a special type of upper triangular matrix is particularly simple. However, the trouble is that most of the problems encountered in real applications do not have such a special form. Now, suppose we want to solve a system of equations Ax = b where A is a general full rank square matrix without any special form. Then, by some series of transformations, can we convert the system from the Ax = b form to the Ux = \hat{b} form, i.e. to the desired special form? It is important to carry out this transformation such that the original system of equations and the transformed system of equations have identical solutions, x.

To begin with, let us note one important property of the system of linear algebraic equations under consideration. Let T represent an arbitrary square matrix with full rank, i.e. T is invertible. Then, the system of equations Ax = b and (TA)x = Tb have identical solutions. This follows from the invertibility of matrix T. Thus, if we can find a matrix T such that TA = U, where U is of the form (16) and u_{ii} ≠ 0 for all i, then recovering the solution x from the transformed system of equations is quite straightforward. Let us see how to construct such a transformation matrix systematically.

Thus, the task at hand is to convert a general matrix A to the upper triangular form U. To understand the process of conversion, let us consider a specific example where (A, b) are chosen as follows

A = \begin{bmatrix} 2 & 1 & 1 \\ 4 & 1 & -2 \\ -2 & 2 & -1 \end{bmatrix} and b = \begin{bmatrix} 2 \\ 7 \\ 1 \end{bmatrix}    (19)

To begin with, we would like to transform A to the matrix

A^{(1)} = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -4 \\ -2 & 2 & -1 \end{bmatrix}

such that element (2,1) of A^{(1)} is zero. This can be achieved by constructing an elementary matrix, E_{21}, as follows

E_{21} = \begin{bmatrix} 1 & 0 & 0 \\ -e_{21} & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

e_{21} = \frac{a_{21}}{a_{11}} = 2

Here, the element a_{11} is called the pivot, or, to be precise, the first pivot. Note that E_{21} is an invertible matrix and its inverse can be constructed as follows

(E_{21})^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ e_{21} & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

Multiplying matrix A and vector b with E_{21} yields

A^{(1)} = E_{21} A = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -4 \\ -2 & 2 & -1 \end{bmatrix} and b^{(1)} = E_{21} b = \begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix}

and since the transformation matrix T = E_{21} is invertible, the transformed system of equations (E_{21}A)x = (E_{21}b) and the original system Ax = b will have identical solutions. While the above transformation was achieved by multiplying both sides of Ax = b by T = E_{21}, an identical result can be achieved in practice if we multiply the first row of the augmented matrix [A | b] by e_{21} and subtract it from the second row, i.e.

a^{(1)}_{2j} = a_{2j} - e_{21} a_{1j}

for j = 1, 2, ..., n and b^{(1)}_2 = b_2 - e_{21} b_1. This turns out to be a much more efficient way of carrying out the transformation than doing the matrix multiplications.

The next step is to transform A^{(1)} to

A^{(2)} = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -4 \\ 0 & 3 & 0 \end{bmatrix}

and this can be achieved by constructing

E_{31} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -e_{31} & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -\frac{a_{31}}{a_{11}} & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}

and multiplying matrix A^{(1)} and vector b^{(1)} with E_{31}, i.e.

A^{(2)} = E_{31} A^{(1)} = (E_{31} E_{21}) A = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -4 \\ 0 & 3 & 0 \end{bmatrix}

b^{(2)} = E_{31} b^{(1)} = (E_{31} E_{21}) b = \begin{bmatrix} 2 \\ 3 \\ 3 \end{bmatrix}

Note again that E_{31} is invertible and

(E_{31})^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ e_{31} & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}

Thus, to begin with, we make all the elements in the first column zero, except the first one. To get an upper triangular form, we now have to eliminate element (3,2) of A^{(2)}, and this can be achieved by constructing another elementary matrix

E_{32} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -e_{32} & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -\frac{a^{(2)}_{32}}{a^{(2)}_{22}} & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 3 & 1 \end{bmatrix}

Transforming A^{(2)} we obtain

U = A^{(3)} = E_{32} A^{(2)} = (E_{32} E_{31} E_{21}) A = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -1 & -4 \\ 0 & 0 & -12 \end{bmatrix}

b^{(3)} = E_{32} b^{(2)} = (E_{32} E_{31} E_{21}) b = \begin{bmatrix} 2 \\ 3 \\ 12 \end{bmatrix}

Note that matrix A^{(3)} is exactly the upper triangular form U that we have been looking for. The invertible transformation matrix T that achieves this is given by

T = E_{32} E_{31} E_{21} = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -5 & 3 & 1 \end{bmatrix}

which is a lower triangular matrix. Once we have transformed Ax = b to the Ux = \hat{b} form, it is straightforward to compute the solution

x_3 = -1, x_2 = 1 and x_1 = 1

using the back substitution method.
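The elimination above is easy to verify numerically. The following sketch (NumPy assumed; the variable names are mine) builds the elementary matrices of the worked example and checks that T = E32 E31 E21 triangularizes A and that back substitution on (U, b^{(3)}) recovers the same solution as solving Ax = b directly:

```python
import numpy as np

A = np.array([[ 2., 1.,  1.],
              [ 4., 1., -2.],
              [-2., 2., -1.]])
b = np.array([2., 7., 1.])

# Elementary matrices: each differs from the identity only by the
# multiplier -e_ij placed at position (i, j).
E21 = np.array([[1., 0., 0.], [-2., 1., 0.], [0., 0., 1.]])
E31 = np.array([[1., 0., 0.], [ 0., 1., 0.], [1., 0., 1.]])
E32 = np.array([[1., 0., 0.], [ 0., 1., 0.], [0., 3., 1.]])

T = E32 @ E31 @ E21            # lower triangular transformation matrix
U = T @ A                      # upper triangular factor
b3 = T @ b                     # transformed right-hand side b^(3)

print(U)                       # [[2 1 1], [0 -1 -4], [0 0 -12]]
print(b3)                      # [2 3 12]
print(np.linalg.solve(U, b3))  # [1. 1. -1.], same as np.linalg.solve(A, b)
print(np.linalg.inv(T))        # lower triangular: [[1 0 0], [2 1 0], [-1 -3 1]]
```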

It is interesting to note that

T^{-1} = E_{21}^{-1} E_{31}^{-1} E_{32}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & -3 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ e_{21} & 1 & 0 \\ e_{31} & e_{32} & 1 \end{bmatrix}

i.e. the inverse of matrix T can be constructed simply by inserting e_{ij} at the (i,j)'th position in an identity matrix. Since we have U = TA, matrix A can be recovered from U by inverting the transformation process as A = T^{-1} U. Here, T^{-1} is a lower triangular matrix and it is customary to denote it using the symbol L, i.e. L ≡ T^{-1} and A = LU. This is nothing but the LU decomposition of matrix A.

Before we proceed with the generalization of the above example, it is necessary to examine a degenerate situation that can arise in the process of Gaussian elimination. In the course of the transformation, we may end up with a scenario where a zero appears in one of the pivot locations. For example, consider the system of equations with slightly modified A matrix and b vector, i.e.

A = \begin{bmatrix} 2 & 1 & 1 \\ 4 & 2 & -2 \\ -2 & 2 & -1 \end{bmatrix} and b = \begin{bmatrix} 2 \\ 8 \\ 1 \end{bmatrix}    (20)

Multiplying with

E_{21} = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} and E_{31} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}

yields

A^{(2)} = E_{31} E_{21} A = \begin{bmatrix} 2 & 1 & 1 \\ 0 & 0 & -4 \\ 0 & 3 & 0 \end{bmatrix} and b^{(2)} = E_{31} E_{21} b = \begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}

Note that a zero appears in the pivot element, i.e. a^{(2)}_{22} = 0, and consequently the construction of E_{32} is in trouble. This difficulty can be alleviated if we exchange the last two rows of matrix A^{(2)} and the last two elements of b^{(2)}, i.e.

A^{(3)} = \begin{bmatrix} 2 & 1 & 1 \\ 0 & 3 & 0 \\ 0 & 0 & -4 \end{bmatrix} and b^{(3)} = \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}

This rearrangement renders the pivot element a^{(3)}_{22} ≠ 0 and the solution becomes tractable. The transformation leading to the exchange of rows can be achieved by constructing a permutation matrix P_{32}

P_{32} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}

such that P_{32} A^{(2)} = A^{(3)}. The permutation matrix P_{32} is obtained just by exchanging rows 2 and 3 of the identity matrix. Note that permutation matrices have a curious characteristic: not only are these matrices invertible, but these matrices are inverses of themselves! Thus, we have

(P_{32})^2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

which implies (P_{32})^{-1} = P_{32}, and, if you consider the fact that exchanging two rows of a matrix twice in succession brings back the original matrix, then this result is obvious. Coming back to the construction of the transformation matrix for the linear system of equations under consideration, matrix T is now constructed as follows: T = P_{32} E_{31} E_{21}.

This approach of constructing the transformation matrix T, demonstrated using a 3 × 3 matrix A, can be easily generalized to a system of equations involving an n × n matrix A. For example, the elementary matrix E_{i1}, which reduces element (i,1) in matrix A to zero, can be constructed by simply inserting -e_{i1} = -(a_{i1}/a_{11}) at the (i,1)'th location in an n × n identity matrix, i.e.

E_{i1} = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ -e_{i1} & \cdots & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix}    (21)

while the permutation matrix P_{ij}, which interchanges the i'th and j'th rows, can be created by interchanging the i'th and j'th rows of the n × n identity matrix. The transformation matrix T for a general n × n matrix A can then be constructed by multiplying the elementary matrices, E_{ij}, and the permutation matrices, P_{ij}, in the appropriate order such that TA = U.

It is important to note that the explanation presented in this subsection provides insights into the internal working of the Gaussian elimination process. While performing Gaussian elimination on a particular matrix through a computer program, neither the matrices (E_{ij}, P_{ij}) nor the matrix T is constructed explicitly. For example, reducing the elements (i,1) in the first column of matrix A to zero is achieved by performing the following set of computations

a^{(1)}_{ij} = a_{ij} - e_{i1} a_{1j}  for j = 1, 2, ..., n

where i = 2, 3, ..., n. Performing these elimination calculations, which are carried out row-wise and may require row exchanges, is equivalent to constructing the matrices (E_{ij}, P_{ij}) and, effectively, the matrix T, which is an invertible matrix. Once we have reduced Ax = b to the Ux = \hat{b} form, such that u_{ii} ≠ 0 for all i, it is easy to recover the solution x using the back substitution method. Invertibility of matrix T guarantees that the solution of the transformed problem is identical to that of the original problem.
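A compact sketch of this in-place elimination with row exchanges (partial pivoting) is given below; it works directly on the augmented matrix [A | b] and then calls back substitution. The function name and the use of NumPy are my own choices, not something prescribed by the notes.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Reduce Ax = b to Ux = b_hat in place, with row exchanges, then back substitute."""
    n = len(b)
    Ab = np.hstack([np.asarray(A, dtype=float),
                    np.asarray(b, dtype=float).reshape(-1, 1)])
    for k in range(n - 1):
        # Row exchange: bring the largest |a_ik| into the pivot position (partial pivoting)
        p = k + np.argmax(np.abs(Ab[k:, k]))
        if Ab[p, k] == 0.0:
            raise ValueError("matrix is singular")
        Ab[[k, p]] = Ab[[p, k]]
        # Eliminate entries below the pivot: a_ij <- a_ij - e_ik * a_kj
        for i in range(k + 1, n):
            e_ik = Ab[i, k] / Ab[k, k]
            Ab[i, k:] -= e_ik * Ab[k, k:]
    # Back substitution on U x = b_hat
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (Ab[i, n] - Ab[i, i + 1:n] @ x[i + 1:]) / Ab[i, i]
    return x

A = [[2., 1., 1.], [4., 2., -2.], [-2., 2., -1.]]   # the matrix of equation (20)
b = [2., 8., 1.]
print(gaussian_elimination(A, b))                   # agrees with np.linalg.solve(A, b)
```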

3.2 Number of Computations in Direct Methods

Let φ denote the number of divisions and multiplications required for generating a solution by a particular method. Various direct methods can be compared on the basis of φ.

Cramer's Rule:

φ (estimated) = (n-1)(n+1)(n!) + n ≅ n^2 · n!    (22)

For a problem of size n = 100 we have φ ≅ 10^{162} and the time estimate for solving this problem on a DEC1090 is approximately 10^{149} years [1].

Gaussian Elimination and Backward Sweep: By maximal pivoting and row operations, we first reduce the system (5) to

Ux = \hat{b}    (23)

where U is an upper triangular matrix, and then use the backward sweep to solve (23) for x. For this scheme, we have

φ = \frac{n^3}{3} + n^2 - \frac{n}{3} ≅ \frac{n^3}{3}    (24)

For n = 100 we have φ ≅ 3.3 × 10^5 [1].

LU Decomposition: LU decomposition is used when equation (5) is to be solved for several different values of vector b, i.e.

Ax^{(k)} = b^{(k)},  k = 1, 2, ......, N    (25)

The sequence of computations is as follows

A = LU  (solved only once)    (26)
Ly^{(k)} = b^{(k)},  k = 1, 2, ......, N    (27)
Ux^{(k)} = y^{(k)},  k = 1, 2, ......, N    (28)

For N different b vectors,

φ = \frac{n^3}{3} - \frac{n}{3} + N n^2    (29)
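The saving obtained by reusing the factors in (26)-(28) is easy to demonstrate with a library LU routine. The sketch below uses SciPy's lu_factor / lu_solve (an assumption about the available tooling, not something used in the notes): the factorization is computed once and then applied to many right-hand sides.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n, N = 200, 50
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
B = rng.standard_normal((n, N))                   # N different right-hand sides

lu, piv = lu_factor(A)        # A = LU (with pivoting), computed only once
X = np.column_stack([lu_solve((lu, piv), B[:, k]) for k in range(N)])

print(np.allclose(A @ X, B))  # True: each column solves A x^(k) = b^(k)
```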

Gauss-Jordan Elimination: In this case, we start with [A : I : b^{(k)}] and by a sequence of row operations we reduce it to [I : A^{-1} : x^{(k)}], i.e.

[A : I : b^{(k)}]  --(sequence of row operations)-->  [I : A^{-1} : x^{(k)}]    (30)

For this scheme, we have

φ = \frac{n^3}{2} + (N - 1) n^2    (31)

Thus, Cramer's rule is certainly not suitable for numerical computations. The latter three methods require a significantly smaller number of multiplication and division operations when compared to Cramer's rule and are best suited for computing numerical solutions of moderately large (n ≈ 1000) systems. When the number of equations is significantly large (n > 10000), even Gaussian elimination and related methods can turn out to be computationally expensive and we have to look for alternative schemes that can solve (5) in a smaller number of steps. When matrices have some simple structure (few non-zero elements and a large number of zero elements), direct methods tailored to exploit the sparsity can perform efficient numerical computations. Also, iterative methods give some hope, as an approximate solution x can be calculated quickly using these techniques.

4 Direct Methods for Solving Sparse Linear Systems

A system of linear equations

Ax = b    (32)

is called sparse if only a relatively small number of its matrix elements (a_{ij}) are nonzero. The sparse patterns that frequently occur are

- Tridiagonal
- Band diagonal with band width M
- Block diagonal matrices
- Lower / upper triangular and block lower / upper triangular matrices

We have encountered numerous situations in the module on Problem Discretization using Approximation Theory where such matrices arise. It is wasteful to apply general direct methods to these problems. Special methods have evolved for solving such sparse systems, which can achieve considerable reduction in the computation time and memory space requirements. In this section, some of the sparse matrix algorithms are discussed in detail. This is meant to be only a brief introduction to sparse matrix computations and the treatment of the topic is, by no means, exhaustive.

4.1 Block Diagonal Matrices

In some applications, such as solving ODE-BVPs / PDEs using orthogonal collocation on finite elements, we encounter equations with block diagonal matrices, i.e.

\begin{bmatrix} A_1 & [0] & \cdots & [0] \\ [0] & A_2 & \cdots & [0] \\ \vdots & \vdots & \ddots & \vdots \\ [0] & [0] & \cdots & A_m \end{bmatrix} \begin{bmatrix} x^{(1)} \\ x^{(2)} \\ \vdots \\ x^{(m)} \end{bmatrix} = \begin{bmatrix} b^{(1)} \\ b^{(2)} \\ \vdots \\ b^{(m)} \end{bmatrix}

where A_i for i = 1, 2, ..., m are n_i × n_i sub-matrices and x^{(i)} ∈ R^{n_i}, b^{(i)} ∈ R^{n_i} for i = 1, 2, ..., m are sub-vectors. Such a system of equations can be solved by solving the following sub-problems

A_i x^{(i)} = b^{(i)}  for i = 1, 2, ..., m    (33)

where each equation is solved using, say, Gaussian elimination. Each Gaussian elimination sub-problem would require

φ_i = \frac{n_i^3 + 3 n_i^2 - n_i}{3}

and

φ(Block Diagonal) = \sum_{i=1}^{m} φ_i

Defining the dimension of vector x as n = \sum_{i=1}^{m} n_i, the number of multiplications and divisions in the conventional Gaussian elimination equals

φ(Conventional) = \frac{n^3 + 3 n^2 - n}{3}

It can be easily shown that

\sum_{i=1}^{m} \frac{n_i^3 + 3 n_i^2 - n_i}{3} << \frac{\left(\sum_{i=1}^{m} n_i\right)^3 + 3 \left(\sum_{i=1}^{m} n_i\right)^2 - \sum_{i=1}^{m} n_i}{3}

i.e.

φ(Block Diagonal) << φ(Conventional)
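A block diagonal system is solved simply by looping over the diagonal blocks, as in the following sketch (NumPy assumed; np.linalg.solve stands in for the per-block Gaussian elimination):

```python
import numpy as np

def solve_block_diagonal(blocks, rhs_parts):
    """Solve a block diagonal system by solving A_i x^(i) = b^(i) block by block, equation (33)."""
    return [np.linalg.solve(A_i, b_i) for A_i, b_i in zip(blocks, rhs_parts)]

rng = np.random.default_rng(1)
sizes = (3, 4, 2)
blocks = [rng.standard_normal((k, k)) + k * np.eye(k) for k in sizes]
rhs_parts = [rng.standard_normal(k) for k in sizes]

x_parts = solve_block_diagonal(blocks, rhs_parts)
x = np.concatenate(x_parts)
print(x.shape)   # (9,): the full solution vector assembled from the sub-vectors
```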

4.2 Thomas Algorithm for Tridiagonal and Block Tridiagonal Matrices [2]

Consider the system of equations given by the following equation

\begin{bmatrix} b_1 & c_1 & 0 & \cdots & \cdots & \cdots & 0 \\ a_2 & b_2 & c_2 & 0 & \cdots & \cdots & 0 \\ 0 & a_3 & b_3 & c_3 & \cdots & \cdots & 0 \\ 0 & 0 & a_4 & b_4 & c_4 & \cdots & \vdots \\ \vdots & & & \ddots & \ddots & \ddots & \vdots \\ \vdots & & & & a_{n-1} & b_{n-1} & c_{n-1} \\ 0 & \cdots & \cdots & \cdots & 0 & a_n & b_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ d_3 \\ d_4 \\ \vdots \\ d_{n-1} \\ d_n \end{bmatrix}    (34)

where matrix A is a tridiagonal matrix. The Thomas algorithm is the Gaussian elimination algorithm tailored to solve this type of sparse system.

Step 1: Triangularization: forward sweep with normalization

β_1 = c_1 / b_1    (35)

β_k = \frac{c_k}{b_k - a_k β_{k-1}}  for k = 2, 3, ..., (n-1)    (36)

γ_1 = d_1 / b_1    (37)

γ_k = \frac{d_k - a_k γ_{k-1}}{b_k - a_k β_{k-1}}  for k = 2, 3, ..., n    (38)

This sequence of operations finally results in the following system of equations

\begin{bmatrix} 1 & β_1 & 0 & \cdots & 0 \\ 0 & 1 & β_2 & \cdots & 0 \\ \vdots & 0 & 1 & \ddots & \vdots \\ \vdots & & & \ddots & β_{n-1} \\ 0 & 0 & \cdots & \cdots & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} γ_1 \\ γ_2 \\ \vdots \\ \vdots \\ γ_n \end{bmatrix}

Step 2: Backward sweep leads to the solution vector

x_n = γ_n    (39)

x_k = γ_k - β_k x_{k+1}    (40)

for k = (n-1), (n-2), ......, 1.
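A direct transcription of Steps 1 and 2 into code looks as follows (a sketch assuming NumPy; array indices are shifted to 0-based):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused), diagonal b,
    super-diagonal c (c[n-1] unused) and right-hand side d, via equations (35)-(40)."""
    n = len(b)
    beta = np.zeros(n)
    gamma = np.zeros(n)
    beta[0] = c[0] / b[0]                                   # equation (35)
    gamma[0] = d[0] / b[0]                                  # equation (37)
    for k in range(1, n):
        denom = b[k] - a[k] * beta[k - 1]
        if k < n - 1:
            beta[k] = c[k] / denom                          # equation (36)
        gamma[k] = (d[k] - a[k] * gamma[k - 1]) / denom     # equation (38)
    x = np.zeros(n)
    x[n - 1] = gamma[n - 1]                                 # equation (39)
    for k in range(n - 2, -1, -1):
        x[k] = gamma[k] - beta[k] * x[k + 1]                # equation (40)
    return x

# 4x4 test problem: the dense solve and the Thomas algorithm should agree
A = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
d = np.array([1., 0., 0., 1.])
x = thomas(a=np.array([0., -1., -1., -1.]), b=np.diag(A),
           c=np.array([-1., -1., -1., 0.]), d=d)
print(np.allclose(A @ x, d))   # True
```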

The total number of multiplications and divisions in the Thomas algorithm,

φ = 5n - 8

is significantly smaller than the n^3/3 operations (approximately) necessary for carrying out Gaussian elimination and the backward sweep for a dense matrix.

The Thomas algorithm can be easily extended to solve a system of equations that involves a block tridiagonal matrix. Consider a block tridiagonal system of the form

\begin{bmatrix} B_1 & C_1 & [0] & \cdots & \cdots & [0] \\ A_2 & B_2 & C_2 & \cdots & \cdots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & [0] \\ \vdots & & & \ddots & B_{n-1} & C_{n-1} \\ [0] & \cdots & \cdots & [0] & A_n & B_n \end{bmatrix} \begin{bmatrix} x^{(1)} \\ x^{(2)} \\ \vdots \\ \vdots \\ x^{(n)} \end{bmatrix} = \begin{bmatrix} d^{(1)} \\ d^{(2)} \\ \vdots \\ \vdots \\ d^{(n)} \end{bmatrix}    (41)

where A_i, B_i and C_i are matrices and (x^{(i)}, d^{(i)}) represent vectors of appropriate dimensions.

Step 1: Block Triangularization

β_1 = [B_1]^{-1} C_1

β_k = [B_k - A_k β_{k-1}]^{-1} C_k  for k = 2, 3, ..., (n-1)    (42)

γ_1 = [B_1]^{-1} d^{(1)}

γ_k = [B_k - A_k β_{k-1}]^{-1} (d^{(k)} - A_k γ_{k-1})  for k = 2, 3, ..., n    (43)

Step 2: Backward sweep

x^{(n)} = γ_n    (44)

x^{(k)} = γ_k - β_k x^{(k+1)}    (45)

for k = (n-1), (n-2), ......, 1    (46)

4.3 Triangular and Block Triangular Matrices

A triangular matrix is a sparse matrix with zero-valued elements above or below the diagonal. For example, a lower triangular matrix can be represented as follows

L = \begin{bmatrix} l_{11} & 0 & \cdots & \cdots & 0 \\ l_{21} & l_{22} & \cdots & \cdots & 0 \\ \vdots & \vdots & \ddots & & \vdots \\ \vdots & \vdots & & \ddots & \vdots \\ l_{n1} & \cdots & \cdots & \cdots & l_{nn} \end{bmatrix}

To solve a system Lx = b, the following algorithm is used

x_1 = \frac{b_1}{l_{11}}    (47)

x_i = \frac{1}{l_{ii}} \left[ b_i - \sum_{j=1}^{i-1} l_{ij} x_j \right]  for i = 2, 3, ....., n    (48)

The operational count φ, i.e. the number of multiplications and divisions, for this elimination process is

φ = \frac{n(n+1)}{2}    (49)

which is considerably smaller than that of Gaussian elimination for a dense matrix.

In some applications we encounter equations with block triangular matrices. For example,

\begin{bmatrix} A_{1,1} & [0] & \cdots & [0] \\ A_{2,1} & A_{2,2} & \cdots & [0] \\ \vdots & \vdots & \ddots & \vdots \\ A_{m,1} & A_{m,2} & \cdots & A_{m,m} \end{bmatrix} \begin{bmatrix} x^{(1)} \\ x^{(2)} \\ \vdots \\ x^{(m)} \end{bmatrix} = \begin{bmatrix} b^{(1)} \\ b^{(2)} \\ \vdots \\ b^{(m)} \end{bmatrix}

where A_{i,j} are n_i × n_j sub-matrices while x^{(i)} ∈ R^{n_i} and b^{(i)} ∈ R^{n_i} are sub-vectors for i = 1, 2, ..., m. The solution of this type of system is completely analogous to that of lower triangular systems, except that sub-matrices and sub-vectors are used in place of scalars. Thus, the equivalent algorithm for a block triangular matrix can be stated as follows

x^{(1)} = (A_{1,1})^{-1} b^{(1)}    (50)

x^{(i)} = (A_{i,i})^{-1} \left[ b^{(i)} - \sum_{j=1}^{i-1} A_{i,j} x^{(j)} \right]  for i = 2, 3, ....., m    (51)

The above form does not imply that the inverse (A_{i,i})^{-1} should be computed explicitly. For example, we can find each x^{(i)} by Gaussian elimination.
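Forward substitution, equations (47)-(48), mirrors the back substitution sketched earlier; a minimal version (NumPy assumed) is:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower triangular L with nonzero diagonal, equations (47)-(48)."""
    n = len(b)
    x = np.zeros(n)
    x[0] = b[0] / L[0, 0]                                   # equation (47)
    for i in range(1, n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]          # equation (48)
    return x

L = np.array([[2.,  0., 0.],
              [1.,  3., 0.],
              [4., -1., 5.]])
b = np.array([2., 5., 8.])
print(forward_substitution(L, b))   # matches np.linalg.solve(L, b)
```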

4.4 Solution of a Large System By Partitioning

If matrix A in equation (5) is very large, then we can partition matrix A and vector b as

Ax ≡ \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} x^{(1)} \\ x^{(2)} \end{bmatrix} = \begin{bmatrix} b^{(1)} \\ b^{(2)} \end{bmatrix}

where A_{11} is an (m × m) square matrix. This results in two equations

A_{11} x^{(1)} + A_{12} x^{(2)} = b^{(1)}    (52)

A_{21} x^{(1)} + A_{22} x^{(2)} = b^{(2)}    (53)

which can be solved sequentially as follows

x^{(1)} = [A_{11}]^{-1} \left[ b^{(1)} - A_{12} x^{(2)} \right]    (54)

A_{21} [A_{11}]^{-1} \left[ b^{(1)} - A_{12} x^{(2)} \right] + A_{22} x^{(2)} = b^{(2)}    (55)

\left[ A_{22} - A_{21} [A_{11}]^{-1} A_{12} \right] x^{(2)} = b^{(2)} - (A_{21} A_{11}^{-1}) b^{(1)}    (56)

i.e. equation (56) is first solved for x^{(2)}, which is then substituted in equation (54) to obtain x^{(1)}. It is also possible to work with a larger number of partitions, say 9, 16, ..., and solve a given large dimensional problem.
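The two-block scheme (52)-(56) can be sketched as follows (NumPy assumed; np.linalg.solve is used instead of forming [A11]^{-1} explicitly):

```python
import numpy as np

def solve_by_partitioning(A, b, m):
    """Solve Ax = b using the 2x2 block partition of equations (52)-(56),
    where A11 is the leading m x m block."""
    A11, A12 = A[:m, :m], A[:m, m:]
    A21, A22 = A[m:, :m], A[m:, m:]
    b1, b2 = b[:m], b[m:]
    # Equation (56): reduced system for x^(2)
    S = A22 - A21 @ np.linalg.solve(A11, A12)
    rhs = b2 - A21 @ np.linalg.solve(A11, b1)
    x2 = np.linalg.solve(S, rhs)
    # Equation (54): substitute back to get x^(1)
    x1 = np.linalg.solve(A11, b1 - A12 @ x2)
    return np.concatenate([x1, x2])

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)
b = rng.standard_normal(6)
print(np.allclose(solve_by_partitioning(A, b, m=3), np.linalg.solve(A, b)))   # True
```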

where " is an arbitrarily small number (such as 10 8 or 10criterion can bex(k) x(k 1) "kx(k 1) k10). Another possible termination(59)It may be noted that the later condition is practically equivalent to the previous terminationcondition.A simple way to form an iterative scheme is Richardson iterations [4]x(k 1) (IA)x(k) b(60)or Richardson iterations preconditioned with approximate inversionx(k 1) (IMA)x(k) Mb(61)where matrix M is called approximate inverse of A if kI MAk 1: A question thatnaturally arises is ’will the iterations converge to the solution of Ax b? ’. In this section,to begin with, some well known iterative schemes are presented. Their convergence analysisis presented next. In the derivations that follow, it is implicitly assumed that the diagonalelements of matrix A are non-zero, i.e. aii 6 0: If this is not the case, simple row exchangeis often su cient to satisfy this condition.5.15.1.1Iterative AlgorithmsJacobi-MethodSuppose we have a guess solution, say x(k) ;hiT(k)(k)x(k) x(k)x2 :::: xn1for Ax b: To generate an improved estimate starting from x(k) ;consider the rst equationin the set of equations Ax b, i.e.,(62)a11 x1 a12 x2 ::::::::: a1n xn b1(k 1)Rearranging this equation, we can arrive at a iterative formula for computing x1i1 h(k 1)(k)(k) b1 a12 x2 ::::::: a1n xnx1a11Similarly, using second equation from Ax b, we can derivei1 h(k 1)(k)(k)x2 b2 a21 x1a23 x3 :::::::: a2n x(k)na2219, as(63)(64)

In general, using the i'th row of Ax = b, we can generate an improved guess for the i'th element of x as follows

x_i^{(k+1)} = \frac{1}{a_{ii}} \left[ b_i - a_{i1} x_1^{(k)} - \cdots - a_{i,i-1} x_{i-1}^{(k)} - a_{i,i+1} x_{i+1}^{(k)} - \cdots - a_{i,n} x_n^{(k)} \right]    (65)

The above equation can also be rearranged as follows

x_i^{(k+1)} = x_i^{(k)} + \frac{r_i^{(k)}}{a_{ii}}

where r_i^{(k)} is defined by equation (57). The algorithm for implementing the Jacobi iteration scheme is summarized in Table 1.

Table 1: Algorithm for Jacobi Iterations

INITIALIZE: b, A, x^{(0)}, kmax, ε
k = 0
δ = 100 ε
WHILE [(δ > ε) AND (k < kmax)]
    FOR i = 1 : n
        r_i = b_i - Σ_{j=1}^{n} a_{ij} x_j
        xN_i = x_i + r_i / a_{ii}
    END FOR
    δ = ||r|| / ||b||
    x = xN
    k = k + 1
END WHILE
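The Jacobi scheme of Table 1 translates almost line for line into the following sketch (NumPy assumed; the whole residue vector is computed at once instead of element by element):

```python
import numpy as np

def jacobi(A, b, x0, eps=1e-8, kmax=1000):
    """Jacobi iterations, equation (65) / Table 1: x_i <- x_i + r_i / a_ii,
    using only values from the previous iterate."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    diag = np.diag(A)
    for k in range(kmax):
        r = b - A @ x                                        # residue vector, equation (57)
        if np.linalg.norm(r) <= eps * np.linalg.norm(b):     # termination test (58)
            break
        x = x + r / diag
    return x, k

A = [[4., 1., 1.], [1., 5., 2.], [1., 2., 6.]]   # diagonally dominant, so Jacobi converges
b = [6., 8., 9.]
x, k = jacobi(A, b, x0=np.zeros(3))
print(x, k)        # x is close to np.linalg.solve(A, b)
```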

5.1.2 Gauss-Seidel Method

When matrix A is large, there is a practical difficulty with the Jacobi method. It is required to store all components of x^{(k)} in the computer memory (as separate variables) until the calculation of x^{(k+1)} is over. The Gauss-Seidel method overcomes this difficulty by using x_i^{(k+1)} immediately in the next equation while computing x_{i+1}^{(k+1)}. This modification leads to the following set of equations

x_1^{(k+1)} = \frac{1}{a_{11}} \left[ b_1 - a_{12} x_2^{(k)} - a_{13} x_3^{(k)} - \cdots - a_{1n} x_n^{(k)} \right]    (66)

x_2^{(k+1)} = \frac{1}{a_{22}} \left[ b_2 - a_{21} x_1^{(k+1)} - a_{23} x_3^{(k)} - \cdots - a_{2n} x_n^{(k)} \right]    (67)

x_3^{(k+1)} = \frac{1}{a_{33}} \left[ b_3 - a_{31} x_1^{(k+1)} - a_{32} x_2^{(k+1)} - a_{34} x_4^{(k)} - \cdots - a_{3n} x_n^{(k)} \right]

In general, for the i'th element of x, we have

x_i^{(k+1)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right]    (68)

To simplify programming, the above equation can be rearranged as follows

x_i^{(k+1)} = x_i^{(k)} + \frac{r_i^{(k)}}{a_{ii}}    (69)

where

r_i^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij} x_j^{(k)}

The algorithm for implementing the Gauss-Seidel iteration scheme is summarized in Table 2.

Table 2: Algorithm for Gauss-Seidel Iterations

INITIALIZE: b, A, x, kmax, ε
k = 0
δ = 100 ε
WHILE [(δ > ε) AND (k < kmax)]
    FOR i = 1 : n
        r_i = b_i - Σ_{j=1}^{n} a_{ij} x_j
        x_i = x_i + r_i / a_{ii}
    END FOR
    δ = ||r|| / ||b||
    k = k + 1
END WHILE

5.1.3 Relaxation Method

Suppose we have a starting value, say y, of a quantity and we wish to approach a target value, say y*, by some method. Let application of the method change the value from y to ŷ. If ŷ is between y and ỹ, which is even closer to y* than ŷ, then we can approach y* faster by magnifying the change (ŷ - y) [3].

In order to achieve this, we need to apply a magnification factor ω > 1 and get

ỹ = y + ω (ŷ - y)    (70)

This amplification process is an extrapolation and is an example of over-relaxation. If the intermediate value ŷ tends to overshoot the target y*, then we may have to use ω < 1; this is called under-relaxation.

Application of over-relaxation to the Gauss-Seidel method leads to the following set of equations

x_i^{(k+1)} = x_i^{(k)} + ω \left[ z_i^{(k+1)} - x_i^{(k)} \right],  i = 1, 2, ....., n    (71)

where z_i^{(k+1)} are generated using the Gauss-Seidel method, i.e.

z_i^{(k+1)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right],  i = 1, 2, ....., n    (72)

The steps in the implementation of the over-relaxation iteration scheme are summarized in Table 3. It may be noted that ω is a tuning parameter, which is chosen such that 1 < ω < 2.

Table 3: Algorithm for Over-Relaxation Iterations

INITIALIZE: b, A, x, kmax, ε, ω
k = 0
δ = 100 ε
WHILE [(δ > ε) AND (k < kmax)]
    FOR i = 1 : n
        r_i = b_i - Σ_{j=1}^{n} a_{ij} x_j
        z_i = x_i + r_i / a_{ii}
        x_i = x_i + ω (z_i - x_i)
    END FOR
    r = b - Ax
    δ = ||r|| / ||b||
    k = k + 1
END WHILE
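Table 3 corresponds to the successive over-relaxation (SOR) update; a minimal sketch (NumPy assumed) that also reduces to plain Gauss-Seidel for ω = 1 is:

```python
import numpy as np

def sor(A, b, x0, omega=1.5, eps=1e-8, kmax=1000):
    """Over-relaxation iterations of Table 3: z_i is the Gauss-Seidel update (72)
    and x_i is moved by omega times the proposed change, equation (71)."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for k in range(kmax):
        for i in range(n):
            r_i = b[i] - A[i, :] @ x          # uses already-updated x_j for j < i
            z_i = x[i] + r_i / A[i, i]        # Gauss-Seidel value, equation (72)
            x[i] = x[i] + omega * (z_i - x[i])   # relaxed update, equation (71)
        r = b - A @ x
        if np.linalg.norm(r) <= eps * np.linalg.norm(b):
            break
    return x, k

A = [[4., 1., 1.], [1., 5., 2.], [1., 2., 6.]]
b = [6., 8., 9.]
x, k = sor(A, b, x0=np.zeros(3), omega=1.1)
print(x, k)    # converges to np.linalg.solve(A, b); omega = 1 gives plain Gauss-Seidel
```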

5.2 Convergence Analysis of Iterative Methods [3, 2]

5.2.1 Vector-Matrix Representation

When Ax = b is to be solved iteratively, a question that naturally arises is 'under what conditions do the iterations converge?'.
