MATH 532: Linear Algebra


Chapter 3: Matrix Algebra
Greg Fasshauer
Department of Applied Mathematics, Illinois Institute of Technology
Spring 2015
fasshauer@iit.edu

Outline
1. Introduction
2. Applications of Linear Systems
3. Matrix Multiplication
4. Matrix Inverse
5. Elementary Matrices and Equivalence
6. LU Factorization

Introduction

We will briefly go over some ideas from Chapters 1, 2 and the first half of Chapter 3 of the textbook [Mey00]. After that introduction we will start our real journey with Section 3.7 and the inverse of a matrix.

Linear algebra is an old subject.

- The origins are attributed to the solution of systems of linear equations in China around 200 BC [NinBC, Chapter 8]. Look at Episode 2 (10:23–13:20) of The Story of Maths. Look at [Yua12].
- In the West, the same algorithm became known as Gaussian elimination, named after Carl Friedrich Gauß (1777–1855).
- "Modern" linear algebra is associated with Arthur Cayley (1821–1895), and many others after him.
- Recent developments have focused mostly on numerical linear algebra.

Applications of Linear Systems

Linear algebra appears in many fields and guises:

- Numerical analysis: discretization of DEs [Mey00, Ch. 1.4]
- Mechanical/structural engineering: plane trusses [Mol08]
- Electrical engineering: electric circuits [Mey00, Ch. 2.6]
- Data science and statistics: regression

      min_{x in R^m} ||Ax - b||_2,   with solution x = (A^T A)^{-1} A^T b

- Machine learning: regularization networks

      min_{x in R^n} [ L(b, Ax) + µ ||x|| ],   e.g.,   min_{x in R^n} ||Ax - b||_2^2 + µ x^T A x
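The least-squares formulas above are easy to check numerically. The following sketch (in Python/NumPy rather than the course's MATLAB, with made-up random data) compares the normal-equations solution x = (A^T A)^{-1} A^T b against a dedicated least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))   # overdetermined system: 20 equations, 3 unknowns
b = rng.standard_normal(20)

# Normal-equations solution x = (A^T A)^{-1} A^T b ...
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# ... agrees with a standard least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_normal, x_lstsq)

# The residual Ax - b is orthogonal to the columns of A
assert np.allclose(A.T @ (A @ x_normal - b), 0, atol=1e-10)
```

In practice one solves the least-squares problem via QR or the SVD rather than forming A^T A explicitly, since that squares the condition number.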

Matrix Multiplication

Different forms of matrix products

We all know how to multiply two matrices A and B. But why do we do it this way?

- Because Cayley said so.
- Because it works for systems of linear equations and for linear transformations, i.e., scalings, rotations, reflections and shear maps can be expressed as a matrix product.
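That connection can be seen directly: composing two linear maps corresponds exactly to the usual matrix product. A minimal NumPy sketch (a Python stand-in for the course's MATLAB; the matrices are arbitrary random examples):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
x = rng.standard_normal(2)

# Applying B first and then A to a point is the same as applying
# the single matrix AB -- this is exactly what the usual
# row-times-column product is designed to achieve.
assert np.allclose(A @ (B @ x), (A @ B) @ x)
```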

Matrices as Linear Transformations

We illustrate properties of linear transformations (matrix multiplication by A) with the following "data":

    X = house
    dot2dot(X)

Straight lines are always mapped to straight lines.

    A = rand(2,2)
    dot2dot(A*X)

Sample matrix: A = [0.9357 0.7283; 0.8187 0.1758]

The transformation is orientation-preserving (the door always stays on the left) if det A > 0.

    A = rand(2,2)
    det(A)
    dot2dot(A*X)

Sample matrix: A = [0.5694 0.4963; 0.0614 0.6423]

The angles between straight lines are preserved if the matrix is orthogonal. (An orthogonal matrix A has det A = ±1 and represents either a rotation or a reflection.)

    A = orth(rand(2,2));     % creates orthogonal matrix
    A = A(:,randperm(2))     % randomly permute columns of A
    det(A)
    dot2dot(A*X)

Sample matrix: A = [0.7767 0.6299; 0.6299 -0.7767]

A linear transformation is invertible if and only if det A ≠ 0. (If the transformation is not invertible, then the 2D image collapses to a line or even a point.)

    a22 = randi(3,1,1)-2     % creates random {-1,0,1}
    A = triu(rand(2,2)); A(2,2) = a22
    det(A)
    dot2dot(A*X)

Sample matrix: A = [0.0903 0.8586; 0 1.0000], det A = 0.0903

Sample matrix: A = [0.9884 0.3209; 0 0], det A = 0

A diagonal matrix stretches the image or reverses its orientation:

    A = [-1 0; 0 1/2],   det A = -1/2

An anti-diagonal matrix in addition interchanges coordinates:

    A = [0 -1; 1/2 0],   det A = 1/2

The action of a diagonal matrix provides an interpretation of the effect of eigenvalues. Note that these matrices have orthogonal columns, but their determinant is not ±1, so they are not orthogonal matrices. These matrices preserve right angles only.

Any rotation matrix can be expressed in terms of trigonometric functions: the matrix

    G(θ) = [cos θ  -sin θ]
           [sin θ   cos θ]

represents a counter-clockwise rotation by the angle θ (measured in radians). Look at wiggle.m.
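A quick sanity check of G(θ), sketched in Python/NumPy (the name G is taken from the slide above):

```python
import numpy as np

def G(theta):
    """Counter-clockwise rotation by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = G(np.pi / 2)                          # quarter turn
assert np.allclose(R @ [1, 0], [0, 1])    # e1 is rotated onto e2
assert np.allclose(R.T @ R, np.eye(2))    # G(theta) is orthogonal ...
assert np.isclose(np.linalg.det(R), 1.0)  # ... with det +1 (a rotation)
```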

Matrix multiplication: Why we do it the way we do it

Because the most obvious way, i.e.,

    [A ∘ B]_ij = [A]_ij [B]_ij,

known as the Hadamard (or Schur) product (after Jacques Hadamard, 1865–1963, and Issai Schur, 1875–1941), doesn't work for linear systems and linear transformations. It's also defined only for matrices of the same size. But it does commute.
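Both claims, that the Hadamard product commutes but does not reproduce the usual matrix product, can be checked on a small example (NumPy sketch with arbitrary matrices):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[0., 1.],
              [1., 0.]])

hadamard = A * B                         # [A o B]_ij = [A]_ij [B]_ij
assert np.allclose(hadamard, B * A)      # the Hadamard product commutes ...
assert not np.allclose(hadamard, A @ B)  # ... but differs from the matrix product
```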

Matrix multiplication: Why we do it the way we do it

Because the Frobenius (inner) product (after Georg Frobenius, 1849–1917),

    <A, B>_F = sum_{i,j} [A]_ij [B]_ij,

doesn't work for linear systems or linear transformations either. It also requires size(A) = size(B). It does, however, induce a useful matrix norm (see HW).

Matrix multiplication: Why we do it the way we do it

Because the Kronecker product (after Leopold Kronecker, 1823–1891),

    A ⊗ B = [ [A]_11 B  ...  [A]_1n B ]
            [   ...            ...   ]
            [ [A]_m1 B  ...  [A]_mn B ],

doesn't work for linear systems or linear transformations either.

- Works for matrices of arbitrary size, i.e., A is m × n, B is p × q.
- Ideal for working with tensor products → multilinear algebra.
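The block structure of the Kronecker product is easy to inspect; np.kron implements exactly the block formula above (NumPy sketch):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])                 # 2 x 2
B = np.array([[0, 1, 0],
              [1, 0, 1]])              # 2 x 3 -- sizes need not match

K = np.kron(A, B)                      # (2*2) x (2*3) = 4 x 6 block matrix
assert K.shape == (4, 6)
assert np.allclose(K[:2, :3], 1 * B)   # top-left block is [A]_11 * B
assert np.allclose(K[2:, 3:], 4 * B)   # bottom-right block is [A]_22 * B
```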

Modern research on matrix multiplication

How to do them fast! Naive matrix multiplication of two n × n matrices requires O(n^3) operations (and must be at least O(n^2), since each element must be touched at least once).

Special algorithms for general matrices:
- Strassen's algorithm [Str69]: O(n^2.807)
- Coppersmith–Winograd algorithm [CW90]: O(n^2.375)
- Stothers' algorithm [DS13]: O(n^2.374)
- Williams' algorithm [Wil14]: O(n^2.3729)
- Le Gall's algorithm [LG14]: O(n^2.3728639)

A bet: http://www.math.utah.edu/~pa/bet.html

Other directions:
- Exploiting structure (banded, block, hierarchical), often implied by the application
- Using factorizations into products of structured matrices
- Exploiting sparsity
- Exploiting new hardware
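The idea behind Strassen's algorithm can be sketched at a single level of recursion: two 2 × 2 block matrices are multiplied with 7 block products instead of 8, which, applied recursively, gives the O(n^2.807) bound. The sketch below (Python/NumPy, assuming an even matrix size) is illustrative, not an optimized implementation:

```python
import numpy as np

def strassen_one_level(A, B):
    """One level of Strassen's recursion: 7 block multiplications
    instead of the naive 8 (square matrices of even size assumed)."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]

    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)

    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
assert np.allclose(strassen_one_level(A, B), A @ B)
```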

Matrix Inverse

Definition (Matrix inverse)
For any n × n matrix A, the n × n matrix B that satisfies

    AB = I   and   BA = I

is called the inverse of A. We use the notation B = A^{-1} to denote the inverse of A.

Terminology: If A^{-1} exists, then A is called nonsingular or invertible.

Remark
1. The inverse of a matrix is unique. To verify, assume B1 and B2 are both inverses of A. Then

       B1 = B1 I = B1 (A B2) = (B1 A) B2 = I B2 = B2.

2. Sometimes one can find the notion of a left- and right-inverse. However, we consider only inverses of square matrices, so these notions don't apply (see also [Mey00, Ex. 3.7.2]).

How to compute A^{-1}

- If we do it by hand, we use Gauss–Jordan elimination on [A | I].
- If we do it by computer, we solve AB = I for B = A^{-1}. In MATLAB: invA = A\eye(n)

Example
Compute the inverse of

    A = [ 2  -2  6 ]
        [ 2  -1  7 ]
        [-2  -6  7 ]

Solution

    [ 2  -2   6 |  1  0  0 ]        [ 2  -2   6 |  1  0  0 ]
    [ 2  -1   7 |  0  1  0 ]   ->   [ 0   1   1 | -1  1  0 ]
    [-2  -6   7 |  0  0  1 ]        [ 0  -8  13 |  1  0  1 ]

         [ 2  -2   6 |  1  0  0 ]        [ 2  -2   6 |  1     0     0    ]
    ->   [ 0   1   1 | -1  1  0 ]   ->   [ 0   1   1 | -1     1     0    ]
         [ 0   0  21 | -7  8  1 ]        [ 0   0   1 | -1/3   8/21  1/21 ]

Up to here this is Gaussian elimination.

         [ 1  -1   0 |  3/2  -8/7   -1/7  ]        [ 1  0  0 |  5/6  -11/21  -4/21 ]
    ->   [ 0   1   0 | -2/3  13/21  -1/21 ]   ->   [ 0  1  0 | -2/3   13/21  -1/21 ]
         [ 0   0   1 | -1/3   8/21   1/21 ]        [ 0  0  1 | -1/3    8/21   1/21 ]

Gauss–Jordan elimination is not good for solving linear systems, but useful for some theoretical purposes.
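The hand computation can be confirmed with the "solve AB = I" approach from the previous slide; a NumPy sketch translating the MATLAB idiom invA = A\eye(n), using the 3 × 3 example matrix (signs reconstructed from the elimination steps shown above):

```python
import numpy as np

A = np.array([[ 2., -2., 6.],
              [ 2., -1., 7.],
              [-2., -6., 7.]])

# Solve A B = I for B = A^{-1} (the MATLAB idiom invA = A\eye(3)):
invA = np.linalg.solve(A, np.eye(3))

expected = np.array([[ 5/6, -11/21, -4/21],
                     [-2/3,  13/21, -1/21],
                     [-1/3,   8/21,  1/21]])
assert np.allclose(invA, expected)       # matches the Gauss-Jordan result
assert np.allclose(A @ invA, np.eye(3))  # and it really is the inverse
```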

How to check if A is invertible

Theorem
For any n × n matrix A, the following statements are equivalent:
1. A^{-1} exists.
2. rank(A) = n.
3. Gauss–Jordan elimination reduces A to I.
4. Ax = 0 has only the trivial solution x = 0.
5. det(A) ≠ 0.
6. Zero is not an eigenvalue of A.
7. Zero is not a singular value of A.

Proof.
Items (1)–(4) are proved in [Mey00]. Items (5)–(7) are discussed later (but should probably be familiar concepts).

Inverse of a matrix product

Theorem
If A and B are invertible, then AB is also invertible and

    (AB)^{-1} = B^{-1} A^{-1}.

Proof.
Just use the definition to verify invertibility:

    (AB)(B^{-1} A^{-1}) = A (B B^{-1}) A^{-1} = A A^{-1} = I.

Since the inverse is unique, we are done.
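A numerical illustration of the reversed order (NumPy sketch; random matrices of this kind are invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (AB)^{-1} = B^{-1} A^{-1} -- note the reversed order
lhs = np.linalg.inv(A @ B)
assert np.allclose(lhs, np.linalg.inv(B) @ np.linalg.inv(A))

# The "unreversed" order fails in general
assert not np.allclose(lhs, np.linalg.inv(A) @ np.linalg.inv(B))
```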

Inverse of a matrix sum

A simple example shows that, just because A and B are invertible, the inverse of A + B need not exist!

Example
Let

    A = [1  0]    and    B = [-1   0]
        [0  1]               [ 0  -1]

Then A + B is the zero matrix, which is obviously not invertible.

Inverse of a matrix sum (cont.)

Moreover, the inverse is not a linear function. Even in the scalar case we have (the breaking point in the education of many a young "mathematician"?):

Example
Let a = 2 and b = 3. Then
- a + b = 5, and so (a + b)^{-1} = 1/5;
- a^{-1} = 1/2 and b^{-1} = 1/3.
And now we see/know that

    (a + b)^{-1} ≠ a^{-1} + b^{-1}    since    1/5 ≠ 1/2 + 1/3.

So, how do we compute the inverse of A + B?

It can be done if one assumes that A and B are such that the inverse exists. The following theorem was proved only in 1981 [HS81].

Theorem (Henderson–Searle)
Suppose the n × n matrix A is invertible, and let C be n × p, B be p × q and D be q × n. Also assume that (A + CBD)^{-1} exists. Then

1. (A + CBD)^{-1} = A^{-1} - (I_n + A^{-1} CBD)^{-1} A^{-1} CBD A^{-1},
2. (A + CBD)^{-1} = A^{-1} - A^{-1} (I_n + CBD A^{-1})^{-1} CBD A^{-1},
3. (A + CBD)^{-1} = A^{-1} - A^{-1} C (I_p + BD A^{-1} C)^{-1} BD A^{-1},
4. (A + CBD)^{-1} = A^{-1} - A^{-1} CB (I_q + D A^{-1} CB)^{-1} D A^{-1},
5. (A + CBD)^{-1} = A^{-1} - A^{-1} CBD (I_n + A^{-1} CBD)^{-1} A^{-1},
6. (A + CBD)^{-1} = A^{-1} - A^{-1} CBD A^{-1} (I_n + CBD A^{-1})^{-1}.

Before we prove (part of) this theorem, let's see what this says about (A + B)^{-1}.

Corollary
In the theorem, let all matrices be n × n and let C = D = I. Then

1. (A + B)^{-1} = A^{-1} - (I + A^{-1} B)^{-1} A^{-1} B A^{-1},
2. (A + B)^{-1} = A^{-1} - A^{-1} (I + B A^{-1})^{-1} B A^{-1},
3. (A + B)^{-1} = A^{-1} - A^{-1} B (I + A^{-1} B)^{-1} A^{-1},
4. (A + B)^{-1} = A^{-1} - A^{-1} B A^{-1} (I + B A^{-1})^{-1}.

Note that only four formulas are left.

To prove this theorem one needs

Lemma
Suppose A is an n × n matrix such that I + A is invertible. Then

    (I + A)^{-1} = I - A (I + A)^{-1}      (1a)
                 = I - (I + A)^{-1} A.     (1b)

In particular,

    A (I + A)^{-1} = (I + A)^{-1} A.       (2)

Proof.
(2) follows immediately from (1). To prove (1), we start with

    I = (I + A) - A.

Now multiply by (I + A)^{-1} from either the right (to get (1a)) or from the left (to get (1b)).

Proof of the theorem.
We prove only the first identity. We note that I_n + A^{-1} CBD = A^{-1} (A + CBD), where both factors are invertible by assumption. Therefore (I_n + A^{-1} CBD)^{-1} exists. Then

    (A + CBD)^{-1} = [ A (I_n + A^{-1} CBD) ]^{-1}                           (from above)
                   = (I_n + A^{-1} CBD)^{-1} A^{-1}                          (since (ÃB̃)^{-1} = B̃^{-1} Ã^{-1})
                   = [ I_n - (I_n + A^{-1} CBD)^{-1} A^{-1} CBD ] A^{-1}     (by (1b))
                   = A^{-1} - (I_n + A^{-1} CBD)^{-1} A^{-1} CBD A^{-1}.

Note that the other identities are not proven analogously. They require extra work.

Sherman–Morrison formula

The following formula is older (from 1949–50), but can also be derived as a corollary of the Henderson–Searle theorem.

Corollary
Suppose that the n × n matrix A is invertible, and also suppose that α ∈ R and the column n-vectors c and d are such that 1 + α d^T A^{-1} c ≠ 0. Then A + α c d^T is invertible and

    (A + α c d^T)^{-1} = A^{-1} - (α A^{-1} c d^T A^{-1}) / (1 + α d^T A^{-1} c).

Note that α c d^T is a rank-1 update of A.

Remark
The Sherman–Morrison–Woodbury formula follows analogously and is stated in [Mey00].
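The formula is easy to verify numerically. A NumPy sketch (the shift by n·I and the value of α are arbitrary choices that keep the matrices safely invertible):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted to be safely invertible
c = rng.standard_normal(n)
d = rng.standard_normal(n)
alpha = 0.5

Ainv = np.linalg.inv(A)
denom = 1 + alpha * d @ Ainv @ c                 # hypothesis of the corollary
assert abs(denom) > 1e-12

# Sherman-Morrison: inverse of the rank-1 update A + alpha c d^T
sm_inv = Ainv - alpha * np.outer(Ainv @ c, d @ Ainv) / denom
assert np.allclose(sm_inv, np.linalg.inv(A + alpha * np.outer(c, d)))
```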

Proof.
We use the fourth identity of the Henderson–Searle theorem with B = α, C = c and D = d^T (so that p = q = 1). Then

    (A + CBD)^{-1} = A^{-1} - A^{-1} CB (I_q + D A^{-1} CB)^{-1} D A^{-1}

becomes

    (A + α c d^T)^{-1} = A^{-1} - α A^{-1} c (1 + α d^T A^{-1} c)^{-1} d^T A^{-1}
                       = A^{-1} - (α A^{-1} c d^T A^{-1}) / (1 + α d^T A^{-1} c),

since d^T A^{-1} c is a scalar.

If A = I, α = -1 and c, d are such that d^T c ≠ 1 in the Sherman–Morrison formula, then we get

    (I - c d^T)^{-1} = I - c d^T / (d^T c - 1).

- I - c d^T is called an elementary matrix.
- (I - c d^T)^{-1} is also an elementary matrix, i.e., the inverse of an elementary matrix is an elementary matrix.

We will use such elementary matrices in the next section.

Example
Assume we have worked hard to calculate A^{-1}, and now we change one entry, [A]_ij, of A. What is the new inverse?

Note that adding α to [A]_ij is given by the rank-1 update α e_i e_j^T. We can apply the Sherman–Morrison formula with c = e_i, d = e_j:

    (A + α e_i e_j^T)^{-1} = A^{-1} - (α A^{-1} e_i e_j^T A^{-1}) / (1 + α e_j^T A^{-1} e_i)
                           = A^{-1} - α [A^{-1}]_{*i} [A^{-1}]_{j*} / (1 + α [A^{-1}]_{ji}).

Note that there's no need to recompute the entire inverse (an O(n^3) effort). All we need is the outer product of the ith column and the jth row of A^{-1}, an O(n^2) effort.
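The single-entry update can be sketched directly (Python/NumPy; the indices i, j and the shift α below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # safely invertible test matrix
Ainv = np.linalg.inv(A)                          # the inverse we "worked hard" for

i, j, alpha = 1, 3, 0.25                         # add alpha to the entry [A]_ij
denom = 1 + alpha * Ainv[j, i]
assert abs(denom) > 1e-12                        # hypothesis of the corollary

# O(n^2) Sherman-Morrison update using column i and row j of A^{-1}
new_inv = Ainv - alpha * np.outer(Ainv[:, i], Ainv[j, :]) / denom

A_new = A.copy()
A_new[i, j] += alpha
assert np.allclose(new_inv, np.linalg.inv(A_new))  # matches a full recompute
```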

