The Matrix Cookbook - University Of Waterloo


The Matrix Cookbook
[http://matrixcookbook.com]
Kaare Brandt Petersen, Michael Syskind Pedersen
Version: November 15, 2012

Introduction

What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.

Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large number of sources. These sources include similar but shorter notes found on the internet and appendices in books - see the references for a full list.

Errors: Very likely there are errors, typos, and mistakes, for which we apologize and would be grateful to receive corrections at cookbook@2302.dk.

It's ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing, and the version will be apparent from the date in the header.

Suggestions: Your suggestions for additional content or elaboration of some topics are most welcome at cookbook@2302.dk.

Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix.

Acknowledgements: We would like to thank the following for contributions and suggestions: Bill Baxter, Brian Templeton, Christian Rishøj, Christian Schröppel, Dan Boley, Douglas L. Theobald, Esben Hoegh-Rasmussen, Evripidis Karseras, Georg Martius, Glynne Casteel, Jan Larsen, Jun Bin Gao, Jürgen Struckmeier, Kamil Dedecius, Karim T. Abou-Moustafa, Korbinian Strimmer, Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut, Markus Froeb, Michael Hubatka, Miguel Barão, Ole Winther, Pavel Sakov, Stephan Hattinger, Troels Pedersen, Vasile Sima, Vincent Rabaud, Zhaoshui He. We would also like to thank The Oticon Foundation for funding our PhD studies.

Contents

1 Basics
  1.1 Trace
  1.2 Determinant
  1.3 The Special Case 2x2

2 Derivatives
  2.1 Derivatives of a Determinant
  2.2 Derivatives of an Inverse
  2.3 Derivatives of Eigenvalues
  2.4 Derivatives of Matrices, Vectors and Scalar Forms
  2.5 Derivatives of Traces
  2.6 Derivatives of vector norms
  2.7 Derivatives of matrix norms
  2.8 Derivatives of Structured Matrices

3 Inverses
  3.1 Basic
  3.2 Exact Relations
  3.3 Implication on Inverses
  3.4 Approximations
  3.5 Generalized Inverse
  3.6 Pseudo Inverse

4 Complex Matrices
  4.1 Complex Derivatives
  4.2 Higher order and non-linear derivatives
  4.3 Inverse of complex sum

5 Solutions and Decompositions
  5.1 Solutions to linear equations
  5.2 Eigenvalues and Eigenvectors
  5.3 Singular Value Decomposition
  5.4 Triangular Decomposition
  5.5 LU decomposition
  5.6 LDM decomposition
  5.7 LDL decompositions

6 Statistics and Probability
  6.1 Definition of Moments
  6.2 Expectation of Linear Combinations
  6.3 Weighted Scalar Variable

7 Multivariate Distributions
  7.1 Cauchy
  7.2 Dirichlet
  7.3 Normal
  7.4 Normal-Inverse Gamma
  7.5 Gaussian
  7.6 Multinomial
  7.7 Student's t
  7.8 Wishart
  7.9 Wishart, Inverse

8 Gaussians
  8.1 Basics
  8.2 Moments
  8.3 Miscellaneous
  8.4 Mixture of Gaussians

9 Special Matrices
  9.1 Block matrices
  9.2 Discrete Fourier Transform Matrix, The
  9.3 Hermitian Matrices and skew-Hermitian
  9.4 Idempotent Matrices
  9.5 Orthogonal matrices
  9.6 Positive Definite and Semi-definite Matrices
  9.7 Singleentry Matrix, The
  9.8 Symmetric, Skew-symmetric/Antisymmetric
  9.9 Toeplitz Matrices
  9.10 Transition matrices
  9.11 Units, Permutation and Shift
  9.12 Vandermonde Matrices

10 Functions and Operators
  10.1 Functions and Series
  10.2 Kronecker and Vec Operator
  10.3 Vector Norms
  10.4 Matrix Norms
  10.5 Rank
  10.6 Integral Involving Dirac Delta Functions
  10.7 Miscellaneous

A One-dimensional Results
  A.1 Gaussian
  A.2 One Dimensional Mixture of Gaussians

B Proofs and Details
  B.1 Misc Proofs

Notation and Nomenclature

A          Matrix
A_ij       Matrix indexed for some purpose
A_i        Matrix indexed for some purpose
A^ij       Matrix indexed for some purpose
A^n        Matrix indexed for some purpose or the n.th power of a square matrix
A^{-1}     The inverse matrix of the matrix A
A^+        The pseudo inverse matrix of the matrix A (see Sec. 3.6)
A^{1/2}    The square root of a matrix (if unique), not elementwise
(A)_ij     The (i, j).th entry of the matrix A
A_ij       The (i, j).th entry of the matrix A
[A]_ij     The ij-submatrix, i.e. A with i.th row and j.th column deleted
a          Vector (column-vector)
a_i        Vector indexed for some purpose
a_i        The i.th element of the vector a
a          Scalar

Re z       Real part of a scalar
Re z       Real part of a vector
Re Z       Real part of a matrix
Im z       Imaginary part of a scalar
Im z       Imaginary part of a vector
Im Z       Imaginary part of a matrix

det(A)     Determinant of A
Tr(A)      Trace of the matrix A
diag(A)    Diagonal matrix of the matrix A, i.e. (diag(A))_ij = delta_ij A_ij
eig(A)     Eigenvalues of the matrix A
vec(A)     The vector-version of the matrix A (see Sec. 10.2.2)
sup        Supremum of a set
||A||      Matrix norm (subscript if any denotes what norm)
A^T        Transposed matrix
A^{-T}     The inverse of the transposed and vice versa, A^{-T} = (A^{-1})^T = (A^T)^{-1}
A^*        Complex conjugated matrix
A^H        Transposed and complex conjugated matrix (Hermitian)

A o B      Hadamard (elementwise) product
A x B      Kronecker product

0          The null matrix. Zero in all entries.
I          The identity matrix
J^ij       The single-entry matrix, 1 at (i, j) and zero elsewhere
Sigma      A positive definite matrix
Lambda     A diagonal matrix

1 Basics

    (AB)^{-1} = B^{-1} A^{-1}                                              (1)
    (ABC...)^{-1} = ...C^{-1} B^{-1} A^{-1}                                (2)
    (A^T)^{-1} = (A^{-1})^T                                                (3)
    (A + B)^T = A^T + B^T                                                  (4)
    (AB)^T = B^T A^T                                                       (5)
    (ABC...)^T = ...C^T B^T A^T                                            (6)
    (A^H)^{-1} = (A^{-1})^H                                                (7)
    (A + B)^H = A^H + B^H                                                  (8)
    (AB)^H = B^H A^H                                                       (9)
    (ABC...)^H = ...C^H B^H A^H                                            (10)

1.1 Trace

    Tr(A) = sum_i A_ii                                                     (11)
    Tr(A) = sum_i lambda_i,    lambda_i = eig(A)                           (12)
    Tr(A) = Tr(A^T)                                                        (13)
    Tr(AB) = Tr(BA)                                                        (14)
    Tr(A + B) = Tr(A) + Tr(B)                                              (15)
    Tr(ABC) = Tr(BCA) = Tr(CAB)                                            (16)
    a^T a = Tr(a a^T)                                                      (17)

1.2 Determinant

Let A be an n x n matrix.

    det(A) = prod_i lambda_i,    lambda_i = eig(A)                         (18)
    det(cA) = c^n det(A),    if A in R^{n x n}                             (19)
    det(A^T) = det(A)                                                      (20)
    det(AB) = det(A) det(B)                                                (21)
    det(A^{-1}) = 1/det(A)                                                 (22)
    det(A^n) = det(A)^n                                                    (23)
    det(I + u v^T) = 1 + u^T v                                             (24)

For n = 2:

    det(I + A) = 1 + det(A) + Tr(A)                                        (25)

For n = 3:

    det(I + A) = 1 + det(A) + Tr(A) + (1/2)Tr(A)^2 - (1/2)Tr(A^2)          (26)
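The trace and determinant identities above are easy to spot-check numerically. The following NumPy sketch (matrices and seed are arbitrary illustrative choices, not part of the original text) verifies (14), (21), (22), and the matrix determinant lemma (24):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
u = rng.standard_normal((4, 1))
v = rng.standard_normal((4, 1))

# (14) Tr(AB) = Tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# (21) det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# (22) det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
# (24) det(I + u v^T) = 1 + u^T v
assert np.isclose(np.linalg.det(np.eye(4) + u @ v.T), 1 + (u.T @ v).item())
```
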

For n = 4:

    det(I + A) = 1 + det(A) + Tr(A) + (1/2)Tr(A)^2 - (1/2)Tr(A^2)
                 + (1/6)Tr(A)^3 - (1/2)Tr(A)Tr(A^2) + (1/3)Tr(A^3)         (27)

For small epsilon, the following approximation holds

    det(I + eps A) ~= 1 + det(A) + eps Tr(A) + (1/2)eps^2 Tr(A)^2
                      - (1/2)eps^2 Tr(A^2)                                 (28)

1.3 The Special Case 2x2

Consider the matrix A

    A = [ A11  A12 ]
        [ A21  A22 ]

Determinant and trace

    det(A) = A11 A22 - A12 A21                                             (29)
    Tr(A) = A11 + A22                                                      (30)

Eigenvalues

    lambda^2 - lambda Tr(A) + det(A) = 0

    lambda_1 = (Tr(A) + sqrt(Tr(A)^2 - 4 det(A))) / 2
    lambda_2 = (Tr(A) - sqrt(Tr(A)^2 - 4 det(A))) / 2

    lambda_1 + lambda_2 = Tr(A),    lambda_1 lambda_2 = det(A)

Eigenvectors

    v_1 prop [ A12 ; lambda_1 - A11 ],    v_2 prop [ A12 ; lambda_2 - A11 ]

Inverse

    A^{-1} = (1/det(A)) [  A22  -A12 ]                                     (31)
                        [ -A21   A11 ]
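The 2x2 eigenvalue formula and the inverse (31) can be checked against a library implementation; the concrete matrix below is an arbitrary example, not from the original text:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
tr, det = np.trace(A), np.linalg.det(A)

# Eigenvalues from trace and determinant: lambda = (Tr +/- sqrt(Tr^2 - 4 det)) / 2
disc = np.sqrt(tr**2 - 4 * det)
lam = np.sort([(tr + disc) / 2, (tr - disc) / 2])
assert np.allclose(lam, np.sort(np.linalg.eigvals(A).real))

# (31): A^{-1} = (1/det) [[A22, -A12], [-A21, A11]]
inv = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]]) / det
assert np.allclose(inv, np.linalg.inv(A))
```
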

2 Derivatives

This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.8 for differentiation of structured matrices. The basic assumptions can be written in a formula as

    dX_kl / dX_ij = delta_ik delta_lj                                      (32)

that is, for e.g. vector forms,

    [dx/dy]_i = dx_i/dy,    [dx/dy]_i = dx/dy_i,    [dx/dy]_ij = dx_i/dy_j

The following rules are general and very useful when deriving the differential of an expression ([19]):

    dA = 0                      (A is a constant)                          (33)
    d(alpha X) = alpha dX                                                  (34)
    d(X + Y) = dX + dY                                                     (35)
    d(Tr(X)) = Tr(dX)                                                      (36)
    d(XY) = (dX)Y + X(dY)                                                  (37)
    d(X o Y) = (dX) o Y + X o (dY)                                         (38)
    d(X x Y) = (dX) x Y + X x (dY)                                         (39)
    d(X^{-1}) = -X^{-1}(dX)X^{-1}                                          (40)
    d(det(X)) = Tr(adj(X) dX)                                              (41)
    d(det(X)) = det(X) Tr(X^{-1} dX)                                       (42)
    d(ln(det(X))) = Tr(X^{-1} dX)                                          (43)
    d(X^T) = (dX)^T                                                        (44)
    d(X^H) = (dX)^H                                                        (45)

2.1 Derivatives of a Determinant

2.1.1 General form

    d det(Y)/dx = det(Y) Tr[ Y^{-1} dY/dx ]                                (46)

    sum_k (d det(X)/dX_ik) X_jk = delta_ij det(X)                          (47)

    d^2 det(Y)/dx^2 = det(Y) [ Tr[ Y^{-1} d(dY/dx)/dx ]
                      + Tr[ Y^{-1} dY/dx ] Tr[ Y^{-1} dY/dx ]
                      - Tr[ (Y^{-1} dY/dx)(Y^{-1} dY/dx) ] ]               (48)

2.1.2 Linear forms

    d det(X)/dX = det(X) (X^{-1})^T                                        (49)
    sum_k (d det(X)/dX_ik) X_jk = delta_ij det(X)                          (50)
    d det(AXB)/dX = det(AXB) (X^{-1})^T = det(AXB) (X^T)^{-1}              (51)

2.1.3 Square forms

If X is square and invertible, then

    d det(X^T A X)/dX = 2 det(X^T A X) X^{-T}                              (52)

If X is not square but A is symmetric, then

    d det(X^T A X)/dX = 2 det(X^T A X) A X (X^T A X)^{-1}                  (53)

If X is not square and A is not symmetric, then

    d det(X^T A X)/dX = det(X^T A X) ( A X (X^T A X)^{-1}
                        + A^T X (X^T A^T X)^{-1} )                         (54)

2.1.4 Other nonlinear forms

Some special cases are (See [9, 7])

    d ln det(X^T X)/dX = 2 (X^+)^T                                         (55)
    d ln det(X^T X)/dX^+ = -2 X^T                                          (56)
    d ln |det(X)|/dX = (X^{-1})^T = (X^T)^{-1}                             (57)
    d det(X^k)/dX = k det(X^k) X^{-T}                                      (58)

2.2 Derivatives of an Inverse

From [27] we have the basic identity

    dY^{-1}/dx = -Y^{-1} (dY/dx) Y^{-1}                                    (59)
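The linear form (49) can be verified against a finite-difference gradient; the matrix, seed, and step size below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
eps = 1e-6

# Central-difference gradient of det(X), entry by entry
num = np.zeros_like(X)
for i in range(3):
    for j in range(3):
        E = np.zeros_like(X)
        E[i, j] = eps
        num[i, j] = (np.linalg.det(X + E) - np.linalg.det(X - E)) / (2 * eps)

# (49): d det(X)/dX = det(X) (X^{-1})^T
ana = np.linalg.det(X) * np.linalg.inv(X).T
assert np.allclose(num, ana, atol=1e-5)
```
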

from which it follows

    d(X^{-1})_kl / dX_ij = -(X^{-1})_ki (X^{-1})_jl                        (60)
    d(a^T X^{-1} b)/dX = -X^{-T} a b^T X^{-T}                              (61)
    d det(X^{-1})/dX = -det(X^{-1}) (X^{-1})^T                             (62)
    d Tr(A X^{-1} B)/dX = -(X^{-1} B A X^{-1})^T                           (63)
    d Tr((X + A)^{-1})/dX = -((X + A)^{-1}(X + A)^{-1})^T                  (64)

From [32] we have the following result: Let A be an n x n invertible square matrix, W be the inverse of A, and J(A) an n x n-variate and differentiable function with respect to A. Then the partial differentials of J with respect to A and W satisfy

    dJ/dA = -A^{-T} (dJ/dW) A^{-T}

2.3 Derivatives of Eigenvalues

    d/dX sum eig(X) = d/dX Tr(X) = I                                       (65)
    d/dX prod eig(X) = d/dX det(X) = det(X) X^{-T}                         (66)

If A is real and symmetric, lambda_i and v_i are distinct eigenvalues and eigenvectors of A (see (276)) with v_i^T v_i = 1, then [33]

    d lambda_i = v_i^T d(A) v_i                                            (67)
    d v_i = (lambda_i I - A)^+ d(A) v_i                                    (68)

2.4 Derivatives of Matrices, Vectors and Scalar Forms

2.4.1 First Order

    d(x^T a)/dx = d(a^T x)/dx = a                                          (69)
    d(a^T X b)/dX = a b^T                                                  (70)
    d(a^T X^T b)/dX = b a^T                                                (71)
    d(a^T X a)/dX = d(a^T X^T a)/dX = a a^T                                (72)
    dX/dX_ij = J^ij                                                        (73)
    d(XA)_ij/dX_mn = delta_im (A)_nj = (J^{mn} A)_ij                       (74)
    d(X^T A)_ij/dX_mn = delta_in (A)_mj = (J^{nm} A)_ij                    (75)
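Identity (59) can be checked by differentiating the inverse along an arbitrary direction; the base point is shifted by a multiple of the identity only to keep it well-conditioned (all choices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
Y0 = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # well-conditioned base point
D = rng.standard_normal((3, 3))                    # direction dY/dx
eps = 1e-6

Y = lambda x: Y0 + x * D
# Central difference of x -> Y(x)^{-1} at x = 0
num = (np.linalg.inv(Y(eps)) - np.linalg.inv(Y(-eps))) / (2 * eps)

# (59): dY^{-1}/dx = -Y^{-1} (dY/dx) Y^{-1}
Yi = np.linalg.inv(Y0)
ana = -Yi @ D @ Yi
assert np.allclose(num, ana, atol=1e-5)
```
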

2.4.2 Second Order

    d/dX_ij sum_klmn X_kl X_mn = 2 sum_kl X_kl                             (76)
    d(b^T X^T X c)/dX = X (b c^T + c b^T)                                  (77)
    d((Bx + b)^T C (Dx + d))/dx = B^T C (Dx + d) + D^T C^T (Bx + b)        (78)
    d(X^T B X)_kl/dX_ij = delta_lj (X^T B)_ki + delta_kj (BX)_il           (79)
    d(X^T B X)/dX_ij = X^T B J^ij + J^{ji} B X,
        (J^ij)_kl = delta_ik delta_jl                                      (80)

See Sec 9.7 for useful properties of the single-entry matrix J^ij.

    d(x^T B x)/dx = (B + B^T) x                                            (81)
    d(b^T X^T D X c)/dX = D^T X b c^T + D X c b^T                          (82)
    d/dX (Xb + c)^T D (Xb + c) = (D + D^T)(Xb + c) b^T                     (83)

Assume W is symmetric, then

    d/ds (x - As)^T W (x - As) = -2 A^T W (x - As)                         (84)
    d/dx (x - s)^T W (x - s) = 2 W (x - s)                                 (85)
    d/ds (x - s)^T W (x - s) = -2 W (x - s)                                (86)
    d/dx (x - As)^T W (x - As) = 2 W (x - As)                              (87)
    d/dA (x - As)^T W (x - As) = -2 W (x - As) s^T                         (88)

As a case with complex values the following holds

    d(a - x^H b)^2 / dx = -2 b (a - x^H b)^*                               (89)

This formula is also known from the LMS algorithm [14].

2.4.3 Higher-order and non-linear

    d(X^n)_kl / dX_ij = sum_{r=0}^{n-1} (X^r J^ij X^{n-1-r})_kl            (90)

For proof of the above, see B.1.3.

    d/dX (a^T X^n b) = sum_{r=0}^{n-1} (X^r)^T a b^T (X^{n-1-r})^T         (91)

    d/dX (a^T (X^n)^T X^n b) = sum_{r=0}^{n-1} [ X^{n-1-r} a b^T (X^n)^T X^r
                               + (X^r)^T X^n a b^T (X^{n-1-r})^T ]         (92)

See B.1.3 for a proof.

Assume s and r are functions of x, i.e. s = s(x), r = r(x), and that A is a constant, then

    d/dx (s^T A r) = (ds/dx)^T A r + (dr/dx)^T A^T s                       (93)

    d/dx [(Ax)^T(Ax) / ((Bx)^T(Bx))] = d/dx [(x^T A^T A x)/(x^T B^T B x)]  (94)
      = 2 (A^T A x)/(x^T B^T B x)
        - 2 (x^T A^T A x)(B^T B x)/(x^T B^T B x)^2                         (95)

2.4.4 Gradient and Hessian

Using the above we have for the gradient and the Hessian

    f = x^T A x + b^T x                                                    (96)
    grad_x f = df/dx = (A + A^T) x + b                                     (97)
    d^2 f / dx dx^T = A + A^T                                              (98)

2.5 Derivatives of Traces

Assume F(X) to be a differentiable function of each of the elements of X. It then holds that

    d Tr(F(X))/dX = f(X)^T

where f(.) is the scalar derivative of F(.).

2.5.1 First Order

    d Tr(X)/dX = I                                                         (99)
    d Tr(XA)/dX = A^T                                                      (100)
    d Tr(AXB)/dX = A^T B^T                                                 (101)
    d Tr(A X^T B)/dX = B A                                                 (102)
    d Tr(X^T A)/dX = A                                                     (103)
    d Tr(A X^T)/dX = A                                                     (104)
    d Tr(A x X)/dX = Tr(A) I        (Kronecker product)                    (105)
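The gradient formula (97) for the quadratic form (96) is easy to confirm numerically; the matrices and seed below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
x = rng.standard_normal(4)
eps = 1e-6

# f(x) = x^T A x + b^T x, eq. (96)
f = lambda x: x @ A @ x + b @ x

# Central-difference gradient, one coordinate at a time
num = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(4)])

# (97): grad f = (A + A^T) x + b
assert np.allclose(num, (A + A.T) @ x + b, atol=1e-5)
```
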

2.5.2 Second Order

    d Tr(X^2)/dX = 2 X^T                                                   (106)
    d Tr(X^2 B)/dX = (XB + BX)^T                                           (107)
    d Tr(X^T B X)/dX = B X + B^T X                                         (108)
    d Tr(B X X^T)/dX = B X + B^T X                                         (109)
    d Tr(X X^T B)/dX = B X + B^T X                                         (110)
    d Tr(X B X^T)/dX = X B^T + X B                                         (111)
    d Tr(B X^T X)/dX = X B^T + X B                                         (112)
    d Tr(X^T X B)/dX = X B^T + X B                                         (113)
    d Tr(A X B X)/dX = A^T X^T B^T + B^T X^T A^T                           (114)
    d Tr(X^T X)/dX = d Tr(X X^T)/dX = 2X                                   (115)
    d Tr(B^T X^T C X B)/dX = C^T X B B^T + C X B B^T                       (116)
    d Tr(X^T B X C)/dX = B X C + B^T X C^T                                 (117)
    d Tr(A X B X^T C)/dX = A^T C^T X B^T + C A X B                         (118)
    d Tr[(A X B + C)(A X B + C)^T]/dX = 2 A^T (A X B + C) B^T              (119)
    d Tr(X x X)/dX = d [Tr(X)Tr(X)]/dX = 2 Tr(X) I                         (120)

See [7].

2.5.3 Higher Order

    d Tr(X^k)/dX = k (X^{k-1})^T                                           (121)
    d Tr(A X^k)/dX = sum_{r=0}^{k-1} (X^r A X^{k-r-1})^T                   (122)
    d Tr(B^T X^T C X X^T C X B)/dX = C X X^T C X B B^T
        + C^T X B B^T X^T C^T X + C X B B^T X^T C X
        + C^T X X^T C^T X B B^T                                            (123)
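A representative trace derivative, (101), can be checked entry-wise by finite differences; dimensions and seed here are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))
X = rng.standard_normal((4, 4))
eps = 1e-6

# Central-difference gradient of Tr(A X B) with respect to X
num = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X)
        E[i, j] = eps
        num[i, j] = (np.trace(A @ (X + E) @ B)
                     - np.trace(A @ (X - E) @ B)) / (2 * eps)

# (101): d Tr(AXB)/dX = A^T B^T
assert np.allclose(num, A.T @ B.T, atol=1e-5)
```
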

2.5.4 Other

    d Tr(A X^{-1} B)/dX = -(X^{-1} B A X^{-1})^T                           (124)

Assume B and C to be symmetric, then

    d Tr[(X^T C X)^{-1} A]/dX
      = -(C X (X^T C X)^{-1})(A + A^T)(X^T C X)^{-1}                       (125)
    d Tr[(X^T C X)^{-1} (X^T B X)]/dX
      = -2 C X (X^T C X)^{-1} X^T B X (X^T C X)^{-1}
        + 2 B X (X^T C X)^{-1}                                             (126)
    d Tr[(A + X^T C X)^{-1} (X^T B X)]/dX
      = -2 C X (A + X^T C X)^{-1} X^T B X (A + X^T C X)^{-1}
        + 2 B X (A + X^T C X)^{-1}                                         (127)

See [7].

    d Tr(sin(X))/dX = cos(X)^T                                             (128)

2.6 Derivatives of vector norms

2.6.1 Two-norm

    d/dx ||x - a||_2 = (x - a)/||x - a||_2                                 (129)
    d/dx [(x - a)/||x - a||_2] = I/||x - a||_2
        - (x - a)(x - a)^T / ||x - a||_2^3                                 (130)
    d ||x||_2^2 / dx = d (x^T x)/dx = 2x                                   (131)

2.7 Derivatives of matrix norms

For more on matrix norms, see Sec. 10.4.

2.7.1 Frobenius norm

    d/dX ||X||_F^2 = d/dX Tr(X X^H) = 2X                                   (132)

See (248). Note that this is also a special case of the result in equation (119).

2.8 Derivatives of Structured Matrices

Assume that the matrix A has some structure, i.e. symmetric, Toeplitz, etc. In that case the derivatives of the previous section do not apply in general. Instead, consider the following general rule for differentiating a scalar function f(A)

    df/dA_ij = sum_kl (df/dA_kl)(dA_kl/dA_ij) = Tr[ (df/dA)^T dA/dA_ij ]   (133)

The matrix differentiated with respect to itself is in this document referred to as the structure matrix of A and is defined simply by

    dA/dA_ij = S^ij                                                        (134)

If A has no special structure we have simply S^ij = J^ij, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices; see Sec. 9.7.6 for more examples of structure matrices.

2.8.1 The Chain Rule

Sometimes the objective is to find the derivative of a matrix which is a function of another matrix. Let U = f(X); the goal is to find the derivative of the function g(U) with respect to X:

    dg(U)/dX = dg(f(X))/dX                                                 (135)

Then the Chain Rule can be written the following way:

    dg(U)/dx_ij = sum_{k=1}^{M} sum_{l=1}^{N} (dg(U)/du_kl)(du_kl/dx_ij)   (136)

Using matrix notation, this can be written as:

    dg(U)/dX_ij = Tr[ (dg(U)/dU)^T dU/dX_ij ]                              (137)

2.8.2 Symmetric

If A is symmetric, then S^ij = J^ij + J^ji - J^ij J^ij and therefore

    df/dA = [df/dA] + [df/dA]^T - diag[df/dA]                              (138)

That is, e.g., ([5]):

    d Tr(AX)/dX = A + A^T - (A o I),    see (142)                          (139)
    d det(X)/dX = det(X)(2 X^{-1} - (X^{-1} o I))                          (140)
    d ln det(X)/dX = 2 X^{-1} - (X^{-1} o I)                               (141)

2.8.3 Diagonal

If X is diagonal, then ([19]):

    d Tr(AX)/dX = A o I                                                    (142)

2.8.4 Toeplitz

Like symmetric and diagonal matrices, Toeplitz matrices have a special structure which should be taken into account when taking the derivative with respect to a matrix with Toeplitz structure:

    d Tr(AT)/dT = d Tr(TA)/dT

      = [ Tr(A)                       Tr([A^T]_{n1})   ...                  A_{n1}                     ]
        [ Tr([A^T]_{1n})              Tr(A)            ...                  ...                        ]
        [ Tr([[A^T]_{1n}]_{2,n-1})    ...              ...                  Tr([[A^T]_{n1}]_{n-1,2})   ]
        [ ...                         ...              ...                  Tr([A^T]_{n1})             ]
        [ A_{1n}                      ...              Tr([A^T]_{1n})       Tr(A)                      ]

      = alpha(A)                                                           (143)

As can be seen, the derivative alpha(A) also has a Toeplitz structure. Each value in the diagonal is the sum of all the diagonal values in A; the values in the diagonals next to the main diagonal equal the sum of the diagonal next to the main diagonal in A^T. This result is only valid for the unconstrained Toeplitz matrix. If the Toeplitz matrix also is symmetric, the same derivative yields

    d Tr(AT)/dT = d Tr(TA)/dT = alpha(A) + alpha(A)^T - alpha(A) o I       (144)

3 Inverses

3.1 Basic

3.1.1 Definition

The inverse A^{-1} of a matrix A in C^{n x n} is defined such that

    A A^{-1} = A^{-1} A = I,                                               (145)

where I is the n x n identity matrix. If A^{-1} exists, A is said to be nonsingular. Otherwise, A is said to be singular (see e.g. [12]).

3.1.2 Cofactors and Adjoint

The submatrix of a matrix A, denoted by [A]_ij, is an (n-1) x (n-1) matrix obtained by deleting the ith row and the jth column of A. The (i, j) cofactor of a matrix is defined as

    cof(A, i, j) = (-1)^{i+j} det([A]_ij)                                  (146)

The matrix of cofactors can be created from the cofactors

    cof(A) = [ cof(A,1,1)   ...         cof(A,1,n) ]
             [    ...       cof(A,i,j)     ...     ]                       (147)
             [ cof(A,n,1)   ...         cof(A,n,n) ]

The adjoint matrix is the transpose of the cofactor matrix

    adj(A) = (cof(A))^T                                                    (148)

3.1.3 Determinant

The determinant of a matrix A in C^{n x n} is defined as (see [12])

    det(A) = sum_{j=1}^{n} (-1)^{j+1} A_{1j} det([A]_{1j})                 (149)
           = sum_{j=1}^{n} A_{1j} cof(A, 1, j)                             (150)

3.1.4 Construction

The inverse matrix can be constructed, using the adjoint matrix, by

    A^{-1} = (1/det(A)) . adj(A)                                           (151)

For the case of 2 x 2 matrices, see section 1.3.
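The cofactor construction (146)-(151) can be carried out directly for a small matrix and compared to a library inverse; the 3x3 size and seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((3, 3))

# Cofactor matrix, eq. (146): cof(A, i, j) = (-1)^{i+j} det([A]_ij)
cof = np.zeros_like(A)
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

adj = cof.T                                # (148): adj(A) = cof(A)^T
Ainv = adj / np.linalg.det(A)              # (151): A^{-1} = adj(A)/det(A)
assert np.allclose(Ainv, np.linalg.inv(A))
```
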

3.1.5 Condition number

The condition number of a matrix c(A) is the ratio between the largest and the smallest singular value of a matrix (see Section 5.3 on singular values),

    c(A) = d_max / d_min                                                   (152)

The condition number can be used to measure how singular a matrix is. If the condition number is large, it indicates that the matrix is nearly singular. The condition number can also be estimated from the matrix norms. Here

    c(A) = ||A|| . ||A^{-1}||,                                             (153)

where ||.|| is a norm such as e.g. the 1-norm, the 2-norm, the infinity-norm or the Frobenius norm (see Sec 10.4 for more on matrix norms).

The 2-norm of A equals sqrt(max(eig(A^H A))) [12, p.57]. For a symmetric matrix, this reduces to ||A||_2 = max(|eig(A)|) [12, p.394]. If the matrix is symmetric and positive definite, ||A||_2 = max(eig(A)). The condition number based on the 2-norm thus reduces to

    ||A||_2 ||A^{-1}||_2 = max(eig(A)) max(eig(A^{-1}))
                         = max(eig(A)) / min(eig(A))                       (154)

3.2 Exact Relations

3.2.1 Basic

    (AB)^{-1} = B^{-1} A^{-1}                                              (155)

3.2.2 The Woodbury identity

The Woodbury identity comes in many variants. The latter of the two can be found in [12]

    (A + C B C^T)^{-1} = A^{-1} - A^{-1} C (B^{-1} + C^T A^{-1} C)^{-1} C^T A^{-1}   (156)
    (A + U B V)^{-1} = A^{-1} - A^{-1} U (B^{-1} + V A^{-1} U)^{-1} V A^{-1}         (157)

If P, R are positive definite, then (see [30])

    (P^{-1} + B^T R^{-1} B)^{-1} B^T R^{-1} = P B^T (B P B^T + R)^{-1}     (158)

3.2.3 The Kailath Variant

    (A + BC)^{-1} = A^{-1} - A^{-1} B (I + C A^{-1} B)^{-1} C A^{-1}       (159)

See [4, page 153].

3.2.4 Sherman-Morrison

    (A + b c^T)^{-1} = A^{-1} - (A^{-1} b c^T A^{-1}) / (1 + c^T A^{-1} b) (160)
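The Woodbury identity (156) is exact, so it can be verified to machine precision; the dimensions and the diagonal shifts (used only to keep the random matrices safely invertible) are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((n, k))
B = rng.standard_normal((k, k)) + k * np.eye(k)

lhs = np.linalg.inv(A + C @ B @ C.T)

# (156): (A + C B C^T)^{-1} = A^{-1} - A^{-1} C (B^{-1} + C^T A^{-1} C)^{-1} C^T A^{-1}
Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
rhs = Ai - Ai @ C @ np.linalg.inv(Bi + C.T @ Ai @ C) @ C.T @ Ai
assert np.allclose(lhs, rhs)
```

The practical point of (156) is that the k x k inverse on the right is cheap when k is much smaller than n.
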

3.2.5 The Searle Set of Identities

The following set of identities can be found in [25, page 151]:

    (I + A^{-1})^{-1} = A (A + I)^{-1}                                     (161)
    (A + B B^T)^{-1} B = A^{-1} B (I + B^T A^{-1} B)^{-1}                  (162)
    (A^{-1} + B^{-1})^{-1} = A (A + B)^{-1} B = B (A + B)^{-1} A           (163)
    A - A (A + B)^{-1} A = B - B (A + B)^{-1} B                            (164)
    A^{-1} + B^{-1} = A^{-1} (A + B) B^{-1}                                (165)
    (I + AB)^{-1} = I - A (I + BA)^{-1} B                                  (166)
    (I + AB)^{-1} A = A (I + BA)^{-1}                                      (167)

3.2.6 Rank-1 update of inverse of inner product

Denote A = (X^T X)^{-1} and let X be extended to include a new column vector in the end, Xt = [X v]. Then [34]

    (Xt^T Xt)^{-1} =
      [ A + (A X^T v v^T X A^T)/beta    -(A X^T v)/beta ]
      [ -(v^T X A^T)/beta                1/beta         ]

where beta = v^T v - v^T X A X^T v abbreviates the common denominator.

3.2.7 Rank-1 update of Moore-Penrose Inverse

The following is a rank-1 update for the Moore-Penrose pseudo-inverse of real valued matrices; a proof can be found in [18]. The matrix G is defined below:

    (A + c d^T)^+ = A^+ + G                                                (168)

Using the notation

    beta = 1 + d^T A^+ c                                                   (169)
    v = A^+ c                                                              (170)
    n = (A^+)^T d                                                          (171)
    w = (I - A A^+) c                                                      (172)
    m = (I - A^+ A)^T d                                                    (173)

the solution is given as six different cases, depending on the entities ||w||, ||m||, and beta. Please note that for any (column) vector v it holds that v^+ = v^T (v^T v)^{-1} = v^T / ||v||^2. The solution is:

Case 1 of 6: If ||w|| != 0 and ||m|| != 0. Then

    G = -v w^+ - (m^+)^T n^T + beta (m^+)^T w^+                            (174)
      = -(1/||w||^2) v w^T - (1/||m||^2) m n^T
        + (beta/(||m||^2 ||w||^2)) m w^T                                   (175)

Case 2 of 6: If ||w|| = 0 and ||m|| != 0 and beta = 0. Then

    G = -v v^+ A^+ - (m^+)^T n^T                                           (176)
      = -(1/||v||^2) v v^T A^+ - (1/||m||^2) m n^T                         (177)
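As a sanity check of the Searle identity (166) — which also works for rectangular A and B, where the two identity matrices have different sizes — the following sketch uses arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))

# (166): (I + AB)^{-1} = I - A (I + BA)^{-1} B
# Note: the left I is 4x4 while the inner I is 3x3.
lhs = np.linalg.inv(np.eye(4) + A @ B)
rhs = np.eye(4) - A @ np.linalg.inv(np.eye(3) + B @ A) @ B
assert np.allclose(lhs, rhs)
```
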

Case 3 of 6: If ||w|| = 0 and beta != 0. Then

    G = (1/beta) m v^T A^+
        - (beta/(||v||^2 ||m||^2 + |beta|^2))
          ( (||v||^2/beta) m + v ) ( (||m||^2/beta) (A^+)^T v + n )^T      (178)

Case 4 of 6: If ||w|| != 0 and ||m|| = 0 and beta = 0. Then

    G = -A^+ n n^+ - v w^+                                                 (179)
      = -(1/||n||^2) A^+ n n^T - (1/||w||^2) v w^T                         (180)

Case 5 of 6: If ||m|| = 0 and beta != 0. Then

    G = (1/beta) A^+ n w^T
        - (beta/(||n||^2 ||w||^2 + |beta|^2))
          ( (||w||^2/beta) A^+ n + v ) ( (||n||^2/beta) w + n )^T          (181)

Case 6 of 6: If ||w|| = 0 and ||m|| = 0 and beta = 0. Then

    G = -v v^+ A^+ - A^+ n n^+ + v^+ A^+ n v n^+                           (182)
      = -(1/||v||^2) v v^T A^+ - (1/||n||^2) A^+ n n^T
        + (v^T A^+ n)/(||v||^2 ||n||^2) v n^T                              (183)

3.3 Implication on Inverses

If

    (A + B)^{-1} = A^{-1} + B^{-1}    then    A B^{-1} A = B A^{-1} B      (184)

See [25].

3.3.1 A PosDef identity

Assume P, R to be positive definite and invertible, then

    (P^{-1} + B^T R^{-1} B)^{-1} B^T R^{-1} = P B^T (B P B^T + R)^{-1}     (185)

See [30].

3.4 Approximations

The following identity is known as the Neumann series of a matrix, which holds when |lambda_i| < 1 for all eigenvalues lambda_i

    (I - A)^{-1} = sum_{n=0}^{inf} A^n                                     (186)

which is equivalent to

    (I + A)^{-1} = sum_{n=0}^{inf} (-1)^n A^n                              (187)

When |lambda_i| < 1 for all eigenvalues lambda_i, it holds that A^n -> 0 for n -> inf, and the following approximations hold

    (I - A)^{-1} ~= I + A + A^2                                            (188)
    (I + A)^{-1} ~= I - A + A^2                                            (189)

The following approximation is from [22] and holds when A is large and symmetric

    A - A (I + A)^{-1} A ~= I - A^{-1}                                     (190)

If sigma^2 is small compared to Q and M, then

    (Q + sigma^2 M)^{-1} ~= Q^{-1} - sigma^2 Q^{-1} M Q^{-1}               (191)

Proof:

    (Q + sigma^2 M)^{-1}                                                   (192)
      = (Q Q^{-1} Q + sigma^2 M Q^{-1} Q)^{-1}                             (193)
      = ((I + sigma^2 M Q^{-1}) Q)^{-1}                                    (194)
      = Q^{-1} (I + sigma^2 M Q^{-1})^{-1}                                 (195)

This can be rewritten using the Taylor expansion:

    Q^{-1} (I + sigma^2 M Q^{-1})^{-1}
      = Q^{-1} (I - sigma^2 M Q^{-1} + (sigma^2 M Q^{-1})^2 - ...)         (196)
      ~= Q^{-1} - sigma^2 Q^{-1} M Q^{-1}                                  (197)

3.5 Generalized Inverse

3.5.1 Definition

A generalized inverse matrix of the matrix A is any matrix A^- such that (see [26])

    A A^- A = A                                                            (198)

The matrix A^- is not unique.

3.6 Pseudo Inverse

3.6.1 Definition

The pseudo inverse (or Moore-Penrose inverse) of a matrix A is the matrix A^+ that fulfils

    I.    A A^+ A = A
    II.   A^+ A A^+ = A^+
    III.  A A^+ is symmetric
    IV.   A^+ A is symmetric

The matrix A^+ is unique and does always exist. Note that in case of complex matrices, the symmetric condition is substituted by a condition of being Hermitian.
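The truncated Neumann approximation (188) can be illustrated numerically; the small scale factor below is an arbitrary choice made so that all eigenvalues are safely inside the unit circle, where the truncation error is of order ||A||^3:

```python
import numpy as np

rng = np.random.default_rng(6)
A = 0.02 * rng.standard_normal((4, 4))   # small spectral radius, |eig| << 1

exact = np.linalg.inv(np.eye(4) - A)

# (188): (I - A)^{-1} ~= I + A + A^2   (valid when all |eig(A)| < 1)
approx = np.eye(4) + A + A @ A
assert np.max(np.abs(exact - approx)) < 1e-2
```
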

3.6.2 Properties

Assume A^+ to be the pseudo-inverse of A, then (see [3] for some of them)

    (A^+)^+ = A                                                            (199)
    (A^T)^+ = (A^+)^T                                                      (200)
    (A^H)^+ = (A^+)^H                                                      (201)
    (A^*)^+ = (A^+)^*                                                      (202)
    (A^+ A) A^H = A^H                                                      (203)
    (A^+ A) A^T != A^T                                                     (204)
    (cA)^+ = (1/c) A^+                                                     (205)
    A^+ = (A^T A)^+ A^T                                                    (206)
    A^+ = A^T (A A^T)^+                                                    (207)
    (A^T A)^+ = A^+ (A^T)^+                                                (208)
    (A A^T)^+ = (A^T)^+ A^+                                                (209)
    A^+ = (A^H A)^+ A^H                                                    (210)
    A^+ = A^H (A A^H)^+                                                    (211)
    (A^H A)^+ = A^+ (A^H)^+                                                (212)
    (A A^H)^+ = (A^H)^+ A^+                                                (213)
    (AB)^+ = (A^+ A B)^+ (A B B^+)^+                                       (214)
    f(A A^H) - f(0) I = A [ f(A^H A) - f(0) I ] A^+                        (215)
    f(A^H A) - f(0) I = A^+ [ f(A A^H) - f(0) I ] A                        (216)

where A in C^{n x m}.

Assume A to have full rank, then

    (A A^+)(A A^+) = A A^+                                                 (217)
    (A^+ A)(A^+ A) = A^+ A                                                 (218)
    Tr(A A^+) = rank(A A^+)        (See [26])                              (219)
    Tr(A^+ A) = rank(A^+ A)        (See [26])                              (220)

For two matrices it holds that

    (AB)^+ = (A^+ A B)^+ (A B B^+)^+                                       (221)
    (A x B)^+ = A^+ x B^+          (Kronecker product)                     (222)

3.6.3 Construction

Assume that A has full rank, then

    A  n x n   Square   rank(A) = n   =>   A^+ = A^{-1}
    A  n x m   Broad    rank(A) = n   =>   A^+ = A^T (A A^T)^{-1}
    A  n x m   Tall     rank(A) = m   =>   A^+ = (A^T A)^{-1} A^T

The so-called "broad version" is also known as right inverse and the "tall version" as the left inverse.
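The four Moore-Penrose conditions and the "tall" construction above can be checked directly; the 5x3 shape and seed are illustrative choices (a random tall matrix has full column rank with probability one):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3))        # tall, generically full column rank
Ap = np.linalg.pinv(A)

# The four Moore-Penrose conditions
assert np.allclose(A @ Ap @ A, A)          # I
assert np.allclose(Ap @ A @ Ap, Ap)        # II
assert np.allclose((A @ Ap).T, A @ Ap)     # III: A A^+ symmetric
assert np.allclose((Ap @ A).T, Ap @ A)     # IV: A^+ A symmetric

# "Tall" construction: A^+ = (A^T A)^{-1} A^T (left inverse)
assert np.allclose(Ap, np.linalg.inv(A.T @ A) @ A.T)
```
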

Assume A does not have full rank, i.e. A is n x m and rank(A) = r < min(n, m). The pseudo inverse A^+ can be constructed from the singular value decomposition A = U D V^T, by

    A^+ = V_r D_r^{-1} U_r^T                                               (223)

where U_r, D_r, and V_r are the matrices with the degenerated rows and columns deleted. A different way is this: There do always exist two matrices C (n x r) and D (r x m) of rank r, such that A = CD. Using these matrices it holds that

    A^+ = D^T (D D^T)^{-1} (C^T C)^{-1} C^T                                (224)

See [3].
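The SVD construction (223) can be reproduced for a deliberately rank-deficient matrix; the dimensions, rank, and tolerance are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(8)
# Build a rank-deficient matrix: 5x4 with rank r = 2
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

U, d, Vt = np.linalg.svd(A)
r = int(np.sum(d > 1e-10))             # numerical rank (should be 2 here)

# (223): A^+ = V_r D_r^{-1} U_r^T, keeping only the r nonzero singular values
Ap = Vt[:r].T @ np.diag(1 / d[:r]) @ U[:, :r].T
assert np.allclose(Ap, np.linalg.pinv(A))
```
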

4 Complex Matrices

The complex scalar product r = pq can be written as

    [ Re r ] = [ Re p   -Im p ] [ Re q ]                                   (225)
    [ Im r ]   [ Im p    Re p ] [ Im q ]

4.1 Complex Derivatives

In order to differentiate an expression f(z) with respect to a complex z, the Cauchy-Riemann equations have to be satisfied ([7]):

    df(z)/dz = d Re(f(z))/d Re z + i d Im(f(z))/d Re z                     (226)

and

    df(z)/dz = -i d Re(f(z))/d Im z + d Im(f(z))/d Im z                    (227)

or in a more compact form:

    df(z)/d Im z = i df(z)/d Re z.                                         (228)

A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said to be analytic in this region R. In general, expressions involving complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations.

