Methods To Invert A Matrix - Cleveland State University


Lecture 3: Determinants & Matrix Inversion

Methods to Invert a Matrix

The approaches available to find the inverse of a matrix are extensive and diverse. All methods seek to solve a linear system of equations that can be expressed in a matrix format as

    [A]{x} = {b}

for the unknowns contained in the vector {x}, i.e.,

    {x} = [A]^(-1) {b}

The methods used to accomplish this can be loosely grouped into the following three categories:

- methods that explicitly calculate {x},
- methods that implicitly calculate {x}, and
- iterative methods that calculate {x}

Of course, hybrid methods exist that are combinations of two or more methods in the categories listed above.

Lecture 3: Determinants & Matrix Inversion

Consider the following list of methods, which is not comprehensive:

1. Explicit methods for sparse matrices – includes Cramer's rule, which is a specific case of using self-adjoint matrices. Variations include:
   a. Gauss elimination
      - Row echelon form; all entries below a nonzero entry in the matrix are zero
      - Bareiss algorithm; every element computed is the determinant of a submatrix of [A]
      - Tri-diagonal matrix algorithm; a special form of Gauss elimination
   b. Gauss-Jordan elimination
2. LDU decomposition – an implicit method that factors [A] into a product of lower and upper triangular matrices and a diagonal matrix. Variations include:
   a. LU reduction – a special parallelized version of an LDU decomposition algorithm
      - Crout matrix decomposition is a special type of LU decomposition
3. Cholesky LDL decomposition – an implicit method that decomposes [A], when it is a positive-definite matrix, into the product of a lower triangular matrix, a diagonal matrix and the conjugate transpose
   a. Frontal solvers used in finite element methods
   b. Nested dissection – for symmetric matrices, based on graph partitioning
   c. Minimum degree algorithm
   d. Symbolic Cholesky decomposition

Lecture 3: Determinants & Matrix Inversion

5. Iterative methods:
   a. Gauss-Seidel methods
      - Successive over-relaxation (SOR)
      - Back fit algorithms
   b. Conjugate gradient methods (CG) – used often in optimization problems
      - Nonlinear conjugate gradient method
      - Biconjugate gradient method (BiCG)
      - Biconjugate gradient stabilized method (BiCGSTAB)
      - Conjugate residual method
   c. Jacobi method
   d. Modified Richardson iteration
   e. Generalized minimal residual method (GMRES) – based on the Arnoldi iteration
   f. Chebyshev iteration – avoids inner products but needs bounds on the spectrum
   g. Stone's method (SIP: Strongly Implicit Procedure) – uses an incomplete LU decomposition
   h. Kaczmarz method
   i. Iterative refinement – a procedure to turn an inaccurate solution into a more accurate one
6. Levinson recursion – for Toeplitz matrices
7. SPIKE algorithm – a hybrid parallel solver for narrow-banded matrices

Details and derivations are presented for several of the methods in this section of the notes. Concepts associated with determinants, cofactors and minors are presented first.

Lecture 3: Determinants & Matrix Inversion

The Determinant of a Square Matrix

A square matrix of order n (an n x n matrix), i.e.,

    [A] = [ a11  a12  ...  a1n ]
          [ a21  a22  ...  a2n ]
          [  :    :          : ]
          [ an1  an2  ...  ann ]

possesses a uniquely defined scalar that is designated as the determinant of the matrix, or merely the determinant

    det[A] = |A|

Observe that only square matrices possess determinants.

Lecture 3: Determinants & Matrix Inversion

Vertical lines, not brackets, designate a determinant, and while det[A] is a number and has no elements, it is customary to represent it as an array of the elements of the matrix

    det[A] = | a11  a12  ...  a1n |
             | a21  a22  ...  a2n |
             |  :    :          : |
             | an1  an2  ...  ann |

A general procedure for finding the value of a determinant is sometimes called "expansion by minors." We will discuss this method after going over some ground rules for operating with determinants.

Lecture 3: Determinants & Matrix Inversion

Rules for Operating with Determinants

Rules pertaining to the manipulation of determinants are presented in this section without formal proof. Their validity is demonstrated through examples presented at the end of the section.

Rule #1: Interchanging any row (or column) of a determinant with its immediately adjacent row (or column) flips the sign of the determinant.

Rule #2: The multiplication of any single row (column) of a determinant by a scalar constant is equivalent to the multiplication of the determinant by the scalar.

Rule #3: If any two rows (columns) of a determinant are identical, the value of the determinant is zero and the matrix from which the determinant is derived is said to be singular.

Rule #4: If any row (column) of a determinant contains nothing but zeroes, then the matrix from which the determinant is derived is singular.

Rule #5: If any two rows (two columns) of a determinant are proportional, i.e., the two rows (two columns) are linearly dependent, then the determinant is zero and the matrix from which the determinant is derived is singular.

Lecture 3: Determinants & Matrix Inversion

Rule #6: If the elements of any row (column) of a determinant are added to or subtracted from the corresponding elements of another row (column), the value of the determinant is unchanged.

Rule #6a: If the elements of any row (column) of a determinant are multiplied by a constant and then added to or subtracted from the corresponding elements of another row (column), the value of the determinant is unchanged.

Rule #7: The value of the determinant of a diagonal matrix is equal to the product of the terms on the diagonal.

Rule #8: The value of the determinant of a matrix is equal to the value of the determinant of the transpose of the matrix.

Rule #9: The determinant of the product of two matrices is equal to the product of the determinants of the two matrices.

Rule #10: If the determinant of the product of two square matrices is zero, then at least one of the two matrices is singular.

Rule #11: If an m x n rectangular matrix [A] is post-multiplied by an n x m rectangular matrix [B], the resulting square matrix [C] = [A][B] of order m will, in general, be singular if m > n.
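Several of these rules are easy to check numerically. The following is a minimal sketch, assuming NumPy is available; the matrices are arbitrary illustrative values, not taken from the notes.

import numpy as np

A = np.array([[2.0, 1.0, 3.0],
              [4.0, 0.5, 1.0],
              [1.0, 2.0, 5.0]])

# Rule #1: interchanging two adjacent rows flips the sign of the determinant.
B = A[[1, 0, 2], :]
print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))        # True

# Rule #8: det[A] = det[A^T].
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))       # True

# Rule #9: det([A][C]) = det[A] det[C].
C = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0],
              [1.0, 1.0, 1.0]])
print(np.isclose(np.linalg.det(A @ C), np.linalg.det(A) * np.linalg.det(C)))  # True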

Lecture 3: Determinants & Matrix Inversion

In Class Example

Lecture 3: Determinants & Matrix Inversion

Minors and Cofactors

Consider the nth order determinant:

    det[A] = | a11  a12  ...  a1n |
             | a21  a22  ...  a2n |
             |  :    :          : |
             | an1  an2  ...  ann |

The mth order minor of the nth order matrix is the determinant formed by deleting (n – m) rows and (n – m) columns in the nth order determinant. For example, the minor M_ir of the determinant |A| is formed by deleting the ith row and the rth column. Because |A| is an nth order determinant, the minor M_ir is of order m = n – 1 and contains m^2 elements.

In general, a minor formed by deleting p rows and p columns in the nth order determinant |A| is an (n – p)th order minor. If p = n – 1, the minor is of first order and contains only a single element from |A|.

From this it is easy to see that the determinant |A| contains n^2 first order minors, each containing a single element.

Lecture 3: Determinants & Matrix Inversion

When dealing with minors other than the (n – 1)th order, the designation of the eliminated rows and columns of the determinant |A| must be considered carefully. It is best to consider consecutive rows j, k, l, m and consecutive columns r, s, t, u so that the (n – 1)th, (n – 2)th, and (n – 3)th order minors would be designated, respectively, as M_j,r, M_jk,rs and M_jkl,rst.

The complementary minor, or the complement of the minor, is designated as N (with subscripts). This minor is the determinant formed by placing the elements that lie at the intersections of the deleted rows and columns of the original determinant into a square array in the same order that they appear in the original determinant. For example, given the determinant from the previous page, then

    N_23,31 = | a21  a23 |
              | a31  a33 |

Lecture 3: Determinants & Matrix Inversion

The algebraic complement of the minor M is the "signed" complementary minor. If a minor is obtained by deleting rows i, k, l and columns r, s, t from the determinant |A|, the minor is designated

    M_ikl,rst

the complementary minor is designated

    N_ikl,rst

and the algebraic complement is designated

    (-1)^(i+k+l+r+s+t) N_ikl,rst

The cofactor, designated with capital letters and subscripts, is the signed (n – 1)th order minor formed from the nth order determinant. Suppose that the (n – 1)th order minor is formed by deleting the ith row and jth column from the determinant |A|. Then the corresponding cofactor is

    A_ij = (-1)^(i+j) M_ij
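The deletion of a row and a column translates directly into code. The following is a minimal sketch, assuming NumPy; the helper names minor and cofactor are illustrative, and the indices i and r are 0-based here, which leaves the sign (-1)^(i+r) unchanged.

import numpy as np

def minor(A, i, r):
    # M_ir: determinant of A with row i and column r deleted
    sub = np.delete(np.delete(A, i, axis=0), r, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, r):
    # A_ir = (-1)^(i+r) M_ir, the signed minor
    return (-1) ** (i + r) * minor(A, i, r)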

Lecture 3: Determinants & Matrix Inversion

Observe that the cofactor has no meaning for minors with orders smaller than (n – 1) unless the minor itself is being treated as a determinant of order one less than the determinant |A| from which it was derived.

Also observe that when the minor is of order (n – 1), the product of the cofactor and the complement is equal to the product of the minor and the algebraic complement.

We can assemble the cofactors of a square matrix of order n (an n x n matrix) into a square cofactor matrix, i.e.,

    [A]^C = [ A11  A12  ...  A1n ]
            [ A21  A22  ...  A2n ]
            [  :    :          : ]
            [ An1  An2  ...  Ann ]

So when the elements of a matrix are denoted with capital letters, the matrix represents a matrix of cofactors for another matrix.

Lecture 3: Determinants & Matrix Inversion

In Class Example

Lecture 3: Determinants & Matrix Inversion

Rules for Operations with Cofactors

The determinant of a three by three matrix can be computed via the expansion of the matrix by minors as follows:

    det[A] = | a11  a12  a13 |
             | a21  a22  a23 |
             | a31  a32  a33 |

           = a11 | a22  a23 | - a21 | a12  a13 | + a31 | a12  a13 |
                 | a32  a33 |       | a32  a33 |       | a22  a23 |

This can be confirmed using the classic expansion technique for 3 x 3 determinants. This expression can be rewritten as:

    det[A] = a11 M_11 - a21 M_21 + a31 M_31

or using cofactor notation:

    det[A] = |A| = a11 A11 + a21 A21 + a31 A31

Lecture 3: Determinants & Matrix Inversion

Rule #12: A determinant may be evaluated by summing the products of every element in any row or column by the respective cofactor. This is known as Laplace's expansion.

Rule #13: If all cofactors in a row or a column are zero, the determinant is zero and the matrix from which they are derived is singular.

Rule #14: If the elements in a row or a column of a determinant are multiplied by the cofactors of the corresponding elements of a different row or column, the resulting sum of these products is zero.
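Rule #12 can be applied recursively: expanding along the first column reduces an nth order determinant to n determinants of order (n – 1). The following is a minimal sketch in plain Python; the function name det_laplace is illustrative, and the approach is O(n!) so it is for demonstration only.

def det_laplace(A):
    # Laplace expansion of det[A] down the first column (Rule #12)
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for i in range(n):
        # minor formed by deleting row i and column 0
        minor = [row[1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** i * A[i][0] * det_laplace(minor)
    return total

print(det_laplace([[2.0, 1.0], [3.0, 4.0]]))   # 5.0 = 2*4 - 1*3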

Lecture 3: Determinants & Matrix Inversion

The Adjoint Matrix

The adjoint matrix is the matrix of transposed cofactors. If we have an nth order matrix

    [A] = [ a11  a12  ...  a1n ]
          [ a21  a22  ...  a2n ]
          [  :    :          : ]
          [ an1  an2  ...  ann ]

this matrix possesses the following matrix of cofactors

    [A]^C = [ A11  A12  ...  A1n ]
            [ A21  A22  ...  A2n ]
            [  :    :          : ]
            [ An1  An2  ...  Ann ]

Lecture 3: Determinants & Matrix Inversion

and the adjoint of the matrix is defined as the transpose of the cofactor matrix

    adj[A] = [[A]^C]^T = [ A11  A21  ...  An1 ]
                         [ A12  A22  ...  An2 ]
                         [  :    :          : ]
                         [ A1n  A2n  ...  Ann ]

We will show in the next section that finding the inverse of a square matrix can be accomplished with the following expression:

    [A]^(-1) = adj[A] / |A|

For a 3 x 3 matrix this is known as Cramer's rule.
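The expression above translates into a short routine. The following is a minimal sketch, assuming NumPy; the name inverse_by_adjoint is illustrative. Practical codes avoid this route for large matrices because forming all n^2 cofactors is expensive.

import numpy as np

def inverse_by_adjoint(A):
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)        # cofactor matrix [A]^C
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    detA = np.sum(A[:, 0] * C[:, 0])         # Laplace expansion, first column
    return C.T / detA                        # adj[A] = ([A]^C)^T

A = np.array([[4.0, 7.0], [2.0, 6.0]])
print(inverse_by_adjoint(A) @ A)             # approximately the 2 x 2 identity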

Lecture 3: Determinants & Matrix Inversion

Direct Inversion Method

Suppose an n x n matrix is post-multiplied by its adjoint and the resulting n x n matrix is identified as [P]

    [P] = [ a11  a12  ...  a1n ] [ A11  A21  ...  An1 ]
          [ a21  a22  ...  a2n ] [ A12  A22  ...  An2 ]
          [  :    :          : ] [  :    :          : ]
          [ an1  an2  ...  ann ] [ A1n  A2n  ...  Ann ]

The elements of matrix [P] are divided into two categories, i.e., elements that lie along the diagonal

    p11 = a11 A11 + a12 A12 + ... + a1n A1n
    p22 = a21 A21 + a22 A22 + ... + a2n A2n
     :
    pnn = an1 An1 + an2 An2 + ... + ann Ann

Lecture 3: Determinants & Matrix Inversion

and those that do not

    p12 = a11 A21 + a12 A22 + ... + a1n A2n
    p13 = a11 A31 + a12 A32 + ... + a1n A3n
     :
    p21 = a21 A11 + a22 A12 + ... + a2n A1n
    p32 = a31 A21 + a32 A22 + ... + a3n A2n
     :
    pn3 = an1 A31 + an2 A32 + ... + ann A3n

The elements of [P] that lie on the diagonal are all equal to the determinant of [A] (see Rule #12 and recognize the Laplace expansion for each diagonal value). Note that the non-diagonal elements will be equal to zero since they involve the expansion of one row of matrix [A] with the cofactors of an entirely different row (see Rule #14).

Lecture 3: Determinants & Matrix Inversion

Thus

    p11 = a11 A11 + a12 A12 + ... + a1n A1n = |A|
    p22 = a21 A21 + a22 A22 + ... + a2n A2n = |A|
     :
    pnn = an1 An1 + an2 An2 + ... + ann Ann = |A|

and

    p12 = a11 A21 + a12 A22 + ... + a1n A2n = 0
    p13 = a11 A31 + a12 A32 + ... + a1n A3n = 0
     :
    p21 = a21 A11 + a22 A12 + ... + a2n A1n = 0
    p32 = a31 A21 + a32 A22 + ... + a3n A2n = 0
     :
    pn3 = an1 A31 + an2 A32 + ... + ann A3n = 0

Lecture 3: Determinants & Matrix Inversion

which leads to

    [P] = [A] adj[A] = [ |A|   0   ...   0  ]
                       [  0   |A|  ...   0  ]  =  |A| [I]
                       [  :    :         :  ]
                       [  0    0   ...  |A| ]

or

    [I] = [A] adj[A] / |A|

When this expression is compared to

    [I] = [A][A]^(-1)

then it is evident that

    [A]^(-1) = adj[A] / |A|

The inverse exists only when the determinant of [A] is not zero, i.e., when [A] is not singular.
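The identity [A] adj[A] = |A|[I] is easy to confirm numerically. The following is a minimal sketch, assuming NumPy and an arbitrary illustrative matrix.

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]
C = np.empty_like(A)                             # cofactor matrix [A]^C
for i in range(n):
    for j in range(n):
        sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
P = A @ C.T                                      # [P] = [A] adj[A]
print(np.allclose(P, np.linalg.det(A) * np.eye(n)))   # True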

Lecture 3: Determinants & Matrix Inversion

In Class Example

Lecture 3: Determinants & Matrix Inversion

The direct inversion method presented above is referred to as a "brute force" approach. From a computational standpoint the method is inefficient (but doable) when the matrix is quite large. There are more efficient methods for solving large systems of linear equations that do not involve finding the inverse.

Generally these approaches are divided into the following two categories:

- Direct Elimination (not inversion) Methods (LDU decomposition, Gauss elimination, Cholesky)
- Iterative Methods (Gauss-Seidel, Jacobi)

We will look at methods from both categories.

Lecture 3: Determinants & Matrix Inversion

Direct Elimination Methods

Elimination methods factor the matrix [A] into products of triangular and diagonal matrices, i.e., the matrix can be expressed as

    [A] = [L][D][U]

where [L] and [U] are lower and upper triangular matrices with all diagonal entries equal to "1". The matrix [D] is a diagonal matrix.

Variations of this decomposition are obtained if the matrix [D] is associated with either the matrix [L] or the matrix [U], i.e.,

    [A] = [L][U]

where [L] and [U] in this last expression are not necessarily the same as the matrices identified in the previous expression.

Lecture 3: Determinants & Matrix Inversion

In an expanded format

    [ a11  a12  ...  a1n ]   [ l11   0   ...   0  ] [ u11  u12  ...  u1n ]
    [ a21  a22  ...  a2n ] = [ l21  l22  ...   0  ] [  0   u22  ...  u2n ]
    [  :    :          : ]   [  :    :         :  ] [  :    :          : ]
    [ an1  an2  ...  ann ]   [ ln1  ln2  ...  lnn ] [  0    0   ...  unn ]

The matrices [L] and [U] in this decomposition are not unique. Differences in the many variations of elimination methods are simply differences in how these two matrices are constructed.

In solving a system of linear equations we can now write

    [A]{x} = {b}

as

    [L][U]{x} = {b}

Lecture 3: Determinants & Matrix Inversion

If we let

    [U]{x} = {y}

then

    [L][U]{x} = [L]{y} = {b}

which is an easier computation. Solving this last expression for each y_i can be accomplished with the following expression

    y_i = ( b_i - Σ_{j=1}^{i-1} l_ij y_j ) / l_ii ,    i = 1, 2, ..., n

With the vector {y} known, the vector of unknowns {x} is computed from

    x_i = ( y_i - Σ_{j=i+1}^{n} u_ij x_j ) / u_ii ,    i = n, ..., 1

The process for solving for the unknown vector quantities {x} can be completed without computing the inverse of [A].
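These two substitution sweeps look like the following in code. This is a minimal sketch in plain Python; the function names are illustrative, and [L] and [U] are assumed to be stored as full arrays with nonzero diagonals.

def forward_substitution(L, b):
    # solve [L]{y} = {b}: y_i = (b_i - sum_{j<i} l_ij y_j) / l_ii
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def backward_substitution(U, y):
    # solve [U]{x} = {y}: x_i = (y_i - sum_{j>i} u_ij x_j) / u_ii
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x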

Lecture 3: Determinants & Matrix Inversion

The Gauss Direct Elimination Method

The Gauss elimination method begins with a forward elimination process and finds unknowns by backward substitution.

Here the decomposition of the n x n matrix

    [A] = [L][U]

is accomplished as follows. For each value of i (where i ranges from 1 to n) compute

    u_ji = ( a_ji - Σ_{k=1}^{j-1} u_ki l_jk ) / l_jj ,    j = 1, ..., i-1

and

    l_ji = a_ji - Σ_{k=1}^{i-1} u_ki l_jk ,    j = i, ..., n

If at any stage in the elimination process the coefficient a_jj (often referred to as the pivot point) or l_jj becomes zero, the method fails.
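A minimal sketch of this column-by-column factorization in plain Python follows; the name crout_lu is illustrative (the recurrences above are the Crout form, with the diagonal carried in [L] and a unit diagonal on [U]). No pivoting is performed, so the sketch fails on a zero pivot l_jj, as noted above.

def crout_lu(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        U[i][i] = 1.0                        # unit diagonal on [U]
        for j in range(i):                   # u_ji, j = 1, ..., i-1
            s = sum(U[k][i] * L[j][k] for k in range(j))
            U[j][i] = (A[j][i] - s) / L[j][j]
        for j in range(i, n):                # l_ji, j = i, ..., n
            s = sum(U[k][i] * L[j][k] for k in range(i))
            L[j][i] = A[j][i] - s
    return L, U

A solve then chains crout_lu with the forward and backward substitution routines sketched earlier.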

Lecture 3: Determinants & Matrix Inversion

In Class Example

Lecture 3: Determinants & Matrix Inversion

Cholesky's Decomposition – A Direct Elimination Method

In linear algebra, the Cholesky decomposition or Cholesky triangle is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose. It was discovered by André-Louis Cholesky for real matrices (as opposed to matrices with elements that are complex). When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.

The distinguishing feature of the Cholesky decomposition is that the matrix [A], which is symmetrical and positive-definite, can be decomposed into upper and lower triangular matrices that are the transpose of each other, i.e.,

    [A] = [L][L]^T

This can be loosely thought of as the matrix equivalent of taking the square root. Note that [A] is a positive definite matrix if for all non-zero vectors {z} the inner product

    {z}^T [A] {z} > 0

is always greater than zero. This is guaranteed if all the eigenvalues of the matrix are positive.

Lecture 3: Determinants & Matrix Inversion

Once the decomposition has been performed, the solution of the system of equations proceeds by forward and backward substitution in the same manner as the Gauss elimination method.

Convenient recurrence relationships for the Cholesky decomposition are as follows for each successive column (ith index)

    l_ii = sqrt( a_ii - Σ_{k=1}^{i-1} (l_ik)^2 )

    l_ji = ( a_ji - Σ_{k=1}^{i-1} l_jk l_ik ) / l_ii ,    j = i+1, ..., n

These expressions can be modified so that they are free of needing to take a square root if the previous matrix expression is factored such that

    [A] = [L][D][L]^T

where again [D] is a diagonal matrix.
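The recurrences translate into a short routine. This is a minimal sketch in plain Python; the name cholesky is illustrative, and [A] is assumed symmetric and positive-definite so the argument of the square root stays positive.

import math

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        s = sum(L[i][k] ** 2 for k in range(i))
        L[i][i] = math.sqrt(A[i][i] - s)         # l_ii
        for j in range(i + 1, n):                # l_ji, j = i+1, ..., n
            s = sum(L[j][k] * L[i][k] for k in range(i))
            L[j][i] = (A[j][i] - s) / L[i][i]
    return L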

Lecture 3: Determinants & Matrix Inversion

Recurrence relationships for the Cholesky LDL decomposition are expressed as follows for each successive column (ith index)

    d_ii = a_ii - Σ_{k=1}^{i-1} d_kk (l_ik)^2

    l_ii = 1

    l_ji = ( a_ji - Σ_{k=1}^{i-1} d_kk l_ik l_jk ) / d_ii ,    j = i+1, ..., n

With [A] decomposed into a triple matrix product, the solution to the system of equations proceeds with

    {b} = [A]{x} = [L][D][L]^T {x} = [L]{y}

Lecture 3: Determinants & Matrix Inversion

Again

    y_i = ( b_i - Σ_{j=1}^{i-1} l_ij y_j ) / l_ii ,    i = 1, 2, ..., n

but now (verify for homework)

    x_i = ( y_i / d_ii - Σ_{j=i+1}^{n} l_ji x_j ) / l_ii ,    i = n, ..., 1
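Putting the LDL^T recurrences together with these substitution sweeps gives a complete square-root-free solver. This is a minimal sketch in plain Python; the name ldl_solve is illustrative and [A] is assumed symmetric positive-definite.

def ldl_solve(A, b):
    n = len(b)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for i in range(n):                            # d_ii and column i of [L]
        d[i] = A[i][i] - sum(d[k] * L[i][k] ** 2 for k in range(i))
        for j in range(i + 1, n):
            s = sum(d[k] * L[i][k] * L[j][k] for k in range(i))
            L[j][i] = (A[j][i] - s) / d[i]
    y = [0.0] * n                                 # forward sweep: [L]{y} = {b}
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n                                 # backward sweep: [D][L]^T {x} = {y}
    for i in range(n - 1, -1, -1):
        x[i] = y[i] / d[i] - sum(L[j][i] * x[j] for j in range(i + 1, n))
    return x

print(ldl_solve([[4.0, 2.0], [2.0, 3.0]], [2.0, 1.0]))   # [0.5, 0.0]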

Lecture 3: Determinants & Matrix Inversion

In Class Example

Lecture 3: Determinants & Matrix Inversion

Gauss-Seidel: An Iterative Method

For large systems of equations the explicit and implicit elimination methods (backwards and forwards) can be inadequate from a memory storage perspective, execution time, and/or round-off error issues. Iterative solution strategies are used when these problems arise. Solutions found using iterative methods can be obtained with a foreknowledge of the error tolerance associated with the method.

The most common iterative scheme is the Gauss-Seidel method. Here an initial estimate is made for the solution vector {x}. Subsequent to this guess, each equation in the system of linear equations is used to solve for (update) one of the unknowns from a previous calculation.

For a linear system of three equations in three unknowns the updating procedure can be expressed by the following expressions

    x1^(m) = [ b1 - a12 x2^(m-1) - a13 x3^(m-1) ] / a11

    x2^(m) = [ b2 - a21 x1^(m) - a23 x3^(m-1) ] / a22

    x3^(m) = [ b3 - a31 x1^(m) - a32 x2^(m) ] / a33

Note that m and (m – 1) represent the current and previous iteration, respectively.

Lecture 3: Determinants & Matrix Inversion

In general the Gauss-Seidel iteration format can be expressed as

    x_i^(m) = [ b_i - Σ_{k=1}^{i-1} a_ik x_k^(m) - Σ_{k=i+1}^{n} a_ik x_k^(m-1) ] / a_ii

The original equations must be of a form where the diagonal elements of [A] contain only nonzero values. For a stable structure, or a component with well-defined boundary conditions, this will
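The update above translates directly into an iteration loop. This is a minimal sketch in plain Python; the name gauss_seidel and the tol and max_iter parameters are illustrative, and convergence is assumed (e.g., a diagonally dominant [A]).

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    n = len(b)
    x = [0.0] * n                                 # initial estimate for {x}
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            s_new = sum(A[i][k] * x[k] for k in range(i))         # m-th values
            s_old = sum(A[i][k] * x[k] for k in range(i + 1, n))  # (m-1)-th values
            xi = (b[i] - s_new - s_old) / A[i][i]
            max_change = max(max_change, abs(xi - x[i]))
            x[i] = xi
        if max_change < tol:
            break
    return x

print(gauss_seidel([[4.0, 1.0], [2.0, 5.0]], [9.0, 13.0]))   # ~[1.778, 1.889]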

