Matrix Calculations: Diagonalisation, Orthogonality, and Applications


Matrix Calculations: Diagonalisation, Orthogonality, and Applications
A. Kissinger
Institute for Computing and Information Sciences, Radboud University Nijmegen
Version: autumn 2018

Last time

- Vectors look different in different bases, e.g. for
  B = {(1, 1), (1, -1)},  C = {(1, 1), (1, 2)}
  we have:
  (1, 0)_S = (1/2, 1/2)_B = (2, -1)_C

Last time

- B = {(1, 1), (1, -1)},  C = {(1, 1), (1, 2)}
- We can transform bases using basis transformation matrices.
- Going to the standard basis is easy (the basis elements are the columns):
  T_{B→S} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},   T_{C→S} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}
- Coming back means taking the inverse:
  T_{S→B} = (T_{B→S})^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},   T_{S→C} = (T_{C→S})^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}

Last time

- The change of basis of a vector is computed by applying the matrix. For example, changing from S to B is:
  v′ = T_{S→B} · v
- The change of basis for a matrix is computed by surrounding it with basis-change matrices. Changing a matrix A in S to a matrix A′ in B is:
  A′ = T_{S→B} · A · T_{B→S}
- (Memory aid: look at the first matrix after the equals sign to see what basis transformation you are doing.)
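
(Not part of the original slides.) As a small numerical sanity check of these rules, here is a minimal Python/numpy sketch; the matrix A below is just an arbitrary placeholder map, and T_B_to_S is the basis-change matrix for B from the recap above:

```python
import numpy as np

# Basis B = {(1,1), (1,-1)}: going to the standard basis has the basis vectors as columns.
T_B_to_S = np.array([[1.0,  1.0],
                     [1.0, -1.0]])
T_S_to_B = np.linalg.inv(T_B_to_S)        # coming back means taking the inverse

v = np.array([1.0, 0.0])                  # the vector (1, 0) in the standard basis
print(T_S_to_B @ v)                       # -> [0.5 0.5], i.e. (1/2, 1/2) in basis B

A = np.array([[2.0, 0.0],                 # an arbitrary placeholder map, written in S
              [0.0, 3.0]])
A_in_B = T_S_to_B @ A @ T_B_to_S          # the same map, written in basis B
print(A_in_B)
```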

Last time

- Many linear maps have their 'own' basis, their eigenbasis, which has the property that all basis elements v ∈ B satisfy:
  A · v = λv
- λ is called an eigenvalue, v is called an eigenvector.
- Eigenvalues are computed by solving:
  det(A − λI) = 0

Outline

- Eigenvectors and diagonalisation
- Inner products and orthogonality
- Wrapping up

Computing eigenvectors

- For an n × n matrix, the equation det(A − λI) = 0 has n solutions, which we'll write as: λ₁, λ₂, ..., λₙ
- (e.g. a 2 × 2 matrix involves solving a quadratic equation, which has 2 solutions λ₁ and λ₂)
- For each of these solutions, we get a homogeneous system:
  (A − λᵢI) · vᵢ = 0
  where (A − λᵢI) is the matrix of the system.
- Solving this homogeneous system gives us the associated eigenvector vᵢ for the eigenvalue λᵢ
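
(A sketch added for illustration, not in the slides.) The two steps, solving the characteristic polynomial and then the homogeneous system for each root, can be followed literally with sympy; the matrix used here is the one from the example on the next slides:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[1, 2],
               [0, -1]])

# Step 1: characteristic polynomial det(A - lambda*I) and its roots (the eigenvalues).
char_poly = (A - lam * sp.eye(2)).det()          # -> lambda**2 - 1
eigenvalues = sp.solve(char_poly, lam)           # -> [-1, 1]

# Step 2: for each eigenvalue, solve the homogeneous system (A - lambda_i*I) * v_i = 0.
for ev in eigenvalues:
    eigenvectors = (A - ev * sp.eye(2)).nullspace()   # basis of the solution space
    print(ev, [list(v) for v in eigenvectors])
```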

Example

- This matrix:
  A = \begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}
- has characteristic polynomial:
  det\begin{pmatrix} 1-λ & 2 \\ 0 & -1-λ \end{pmatrix} = λ² − 1
- The equation λ² − 1 = 0 has 2 solutions: λ₁ = 1 and λ₂ = −1.

Example

- For λ₁ = 1, we get a homogeneous system:
  (A − λ₁ · I) · v₁ = 0
- Computing (A − (1) · I):
  \begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix} - (1) · \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 2 \\ 0 & -2 \end{pmatrix}
- So, we need to find a non-zero solution for:
  \begin{pmatrix} 0 & 2 \\ 0 & -2 \end{pmatrix} · v₁ = 0
  (just like in lecture 2)
- This works: v₁ = (1, 0)

Example

- For λ₂ = −1, we get another homogeneous system:
  (A − λ₂ · I) · v₂ = 0
- Computing (A − (−1) · I):
  \begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix} - (-1) · \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 2 \\ 0 & 0 \end{pmatrix}
- So, we need to find a non-zero solution for:
  \begin{pmatrix} 2 & 2 \\ 0 & 0 \end{pmatrix} · v₂ = 0
- This works: v₂ = (1, −1)

Example

So, for the matrix A, we computed 2 eigenvalue/eigenvector pairs:
λ₁ = 1, v₁ = (1, 0)   and   λ₂ = −1, v₂ = (1, −1)
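
(Added check, not in the slides.) A quick verification with numpy that these pairs really satisfy A · v = λv:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, -1.0]])
v1, v2 = np.array([1.0, 0.0]), np.array([1.0, -1.0])

print(A @ v1)   # -> [ 1.  0.]  =  1 * v1
print(A @ v2)   # -> [-1.  1.]  = -1 * v2
```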

Theorem
If the eigenvalues of a matrix A are all different, then their associated eigenvectors form a basis.

Proof. We need to prove the vᵢ are all linearly independent. Suppose (for contradiction) that v₁, ..., vₙ are linearly dependent, i.e.:
c₁v₁ + c₂v₂ + ... + cₙvₙ = 0
with k non-zero coefficients. Then, using that they are eigenvectors, applying A gives a second equation:
A · (c₁v₁ + ... + cₙvₙ) = 0, i.e. λ₁c₁v₁ + ... + λₙcₙvₙ = 0
Suppose cⱼ ≠ 0, then subtract 1/λⱼ times the 2nd equation from the 1st equation:
c₁v₁ + c₂v₂ + ... + cₙvₙ − (1/λⱼ)(λ₁c₁v₁ + ... + λₙcₙvₙ) = 0
This has k − 1 non-zero coefficients: the term with vⱼ drops out, while the others stay non-zero because all the λᵢ's are distinct. Repeat until we have just 1 non-zero coefficient, and we have:
cₖvₖ = 0, hence vₖ = 0,
but eigenvectors are always non-zero, so this is a contradiction.

Changing basis

- Once we have a basis of eigenvectors B = {v₁, v₂, ..., vₙ}, translating to B gives us a diagonal matrix, whose diagonal entries are the eigenvalues:
  T_{S→B} · A · T_{B→S} = D   where D = diag(λ₁, λ₂, ..., λₙ)
- Going the other direction, we can always write A in terms of a diagonal matrix:
  A = T_{B→S} · D · T_{S→B}

Definition
For a matrix A with eigenvalues λ₁, ..., λₙ and eigenvectors B = {v₁, ..., vₙ}, decomposing A as:
A = T_{B→S} · diag(λ₁, λ₂, ..., λₙ) · T_{S→B}
is called diagonalising the matrix A.

Summary: diagonalising a matrix (study this slide!)

We diagonalise a matrix A as follows:
1. Compute each eigenvalue λ₁, λ₂, ..., λₙ by solving the characteristic polynomial.
2. For each eigenvalue, compute the associated eigenvector vᵢ by solving the homogeneous system (A − λᵢI) · vᵢ = 0.
3. Write down A as the product of three matrices:
   A = T_{B→S} · D · T_{S→B}
   where:
   - T_{B→S} has the eigenvectors v₁, ..., vₙ (in order!) as its columns,
   - D has the eigenvalues (in the same order!) down its diagonal, and zeroes everywhere else,
   - T_{S→B} is the inverse of T_{B→S}.
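
(Illustrative sketch, not part of the slides.) The same three steps in numpy, where np.linalg.eig performs steps 1 and 2 in one call; the helper name `diagonalise` is our own and assumes A really is diagonalisable:

```python
import numpy as np

def diagonalise(A):
    """Return (T_BS, D, T_SB) with A = T_BS @ D @ T_SB (assumes A is diagonalisable)."""
    eigenvalues, eigenvectors = np.linalg.eig(A)   # steps 1 and 2 in one call
    T_BS = eigenvectors                            # eigenvectors as columns, in order
    D = np.diag(eigenvalues)                       # eigenvalues on the diagonal, same order
    T_SB = np.linalg.inv(T_BS)                     # the inverse basis-change matrix
    return T_BS, D, T_SB

A = np.array([[1.0, 2.0],
              [0.0, -1.0]])
T_BS, D, T_SB = diagonalise(A)
print(np.allclose(T_BS @ D @ T_SB, A))             # -> True
```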

Example: political swingers, part I

- We take an extremely crude view on politics and distinguish only left and right wing political supporters
- We study changes in political views, per year
- Suppose we observe, for each year:
  - 80% of lefties remain lefties and 20% become righties
  - 90% of righties remain righties, and 10% become lefties

Questions ...
- start with a population L = 100, R = 150, and compute the number of lefties and righties after one year;
- similarly, after 2 years, and 3 years, ...
- We can represent these computations conveniently using matrix multiplication.

Political swingers, part II

- So if we start with a population L = 100, R = 150, then after one year we have:
  - lefties: 0.8 · 100 + 0.1 · 150 = 80 + 15 = 95
  - righties: 0.2 · 100 + 0.9 · 150 = 20 + 135 = 155
- If (L, R) = (100, 150), then after one year we have:
  P · \begin{pmatrix} 100 \\ 150 \end{pmatrix} = \begin{pmatrix} 0.8 & 0.1 \\ 0.2 & 0.9 \end{pmatrix} \begin{pmatrix} 100 \\ 150 \end{pmatrix} = \begin{pmatrix} 95 \\ 155 \end{pmatrix}
- After two years we have:
  P · \begin{pmatrix} 95 \\ 155 \end{pmatrix} = \begin{pmatrix} 0.8 & 0.1 \\ 0.2 & 0.9 \end{pmatrix} \begin{pmatrix} 95 \\ 155 \end{pmatrix} = \begin{pmatrix} 91.5 \\ 158.5 \end{pmatrix}

Political swingers, part IV

The situation after two years is obtained as:
P · ( P · \begin{pmatrix} L \\ R \end{pmatrix} ) = \begin{pmatrix} 0.8 & 0.1 \\ 0.2 & 0.9 \end{pmatrix} · \left( \begin{pmatrix} 0.8 & 0.1 \\ 0.2 & 0.9 \end{pmatrix} · \begin{pmatrix} L \\ R \end{pmatrix} \right)   (do this multiplication first)
  = \begin{pmatrix} 0.66 & 0.17 \\ 0.34 & 0.83 \end{pmatrix} · \begin{pmatrix} L \\ R \end{pmatrix}

The situation after n years is described by the n-fold iterated matrix:
Pⁿ = P · P · · · P   (n times)

Etc. It looks like P¹⁰⁰ (or worse, lim_{n→∞} Pⁿ) is going to be a real pain to calculate... or is it?
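
(Numerical check, not in the slides.) The yearly populations and the two-year matrix can be reproduced directly in numpy:

```python
import numpy as np

P = np.array([[0.8, 0.1],
              [0.2, 0.9]])
population = np.array([100.0, 150.0])        # (L, R)

print(P @ population)                        # after one year:  [ 95. 155.]
print(P @ (P @ population))                  # after two years: [ 91.5 158.5]
print(np.linalg.matrix_power(P, 2))          # P^2 = [[0.66 0.17], [0.34 0.83]]
```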

Diagonal matrices

- Multiplying lots of matrices together is hard :(
- But multiplying diagonal matrices is easy!
  diag(a, b, c, d) · diag(w, x, y, z) = diag(aw, bx, cy, dz)
- Strategy: first diagonalise P:
  P = T_{B→S} · D · T_{S→B}   where D is diagonal
- Then multiply (and see what happens...)

Multiplying diagonalised matrices

- Suppose P = T_{B→S} · D · T_{S→B}, then:
  P · P = T_{B→S} · D · T_{S→B} · T_{B→S} · D · T_{S→B}
- So:
  P · P = T_{B→S} · D · D · T_{S→B}
- and:
  P · P · P = T_{B→S} · D · D · D · T_{S→B}
- and so on:
  Pⁿ = T_{B→S} · Dⁿ · T_{S→B}

Political swingers re-revisited, part I

- Suppose we diagonalise the political transition matrix:
  P = \begin{pmatrix} 0.8 & 0.1 \\ 0.2 & 0.9 \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}}_{T_{B→S}} · \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & 0.7 \end{pmatrix}}_{D} · \underbrace{\frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}}_{T_{S→B}}
- Then, raising it to the 10th power is not so hard:
  P¹⁰ = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} · \begin{pmatrix} 1 & 0 \\ 0 & 0.7 \end{pmatrix}^{10} · \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}
      = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} · \begin{pmatrix} 1 & 0 \\ 0 & 0.7^{10} \end{pmatrix} · \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}
      ≈ \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} · \begin{pmatrix} 1 & 0 \\ 0 & 0.028 \end{pmatrix} · \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}
      ≈ \begin{pmatrix} 0.35 & 0.32 \\ 0.65 & 0.68 \end{pmatrix}

We can also compute:
lim_{n→∞} Pⁿ = lim_{n→∞} \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} · \begin{pmatrix} 1 & 0 \\ 0 & 0.7^n \end{pmatrix} · \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}
  = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} · \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} · \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}
  = \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}
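
(Illustrative check, not in the slides.) The diagonalisation of P above, the 10th power, and the limit can all be reproduced with a few lines of numpy:

```python
import numpy as np

P = np.array([[0.8, 0.1],
              [0.2, 0.9]])
T_BS = np.array([[1.0,  1.0],
                 [2.0, -1.0]])                 # eigenvectors (1,2) and (1,-1) as columns
D = np.diag([1.0, 0.7])                        # eigenvalues 1 and 0.7
T_SB = np.linalg.inv(T_BS)                     # = 1/3 * [[1, 1], [2, -1]]

print(np.allclose(T_BS @ D @ T_SB, P))         # -> True: this really is P
print(T_BS @ np.diag([1.0, 0.7**10]) @ T_SB)   # P^10  ≈ [[0.35 0.32], [0.65 0.68]]
print(T_BS @ np.diag([1.0, 0.0]) @ T_SB)       # limit = [[1/3 1/3], [2/3 2/3]]
```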

And more...

- Diagonalisation lets us do lots of things with matrices that we can normally only do with numbers
- We already saw raising to a power:
  Aⁿ = T_{B→S} · diag(λ₁ⁿ, λ₂ⁿ, ..., λₙⁿ) · T_{S→B}
- We can also do other funky stuff, like take the square root of a matrix:
  √A = T_{B→S} · diag(√λ₁, √λ₂, ..., √λₙ) · T_{S→B}

And more...

- Take the square root of a matrix:
  √A = T_{B→S} · diag(√λ₁, √λ₂, ..., √λₙ) · T_{S→B}
  (this always gives us a matrix where √A · √A = A)

And just because they are cool...

- Exponentiate a matrix:
  e^A = T_{B→S} · diag(e^{λ₁}, e^{λ₂}, ..., e^{λₙ}) · T_{S→B}
  (e.g. to solve the Schrödinger equation in quantum mechanics)
- Take the logarithm of a matrix:
  log(A) = T_{B→S} · diag(log(λ₁), log(λ₂), ..., log(λₙ)) · T_{S→B}
  (e.g. to compute entropies of quantum states)
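
(A sketch for illustration, not part of the slides.) All of these "functions of a matrix" follow the same recipe: apply the scalar function to the eigenvalues and change basis back. The helper `apply_to_eigenvalues` is our own name, and the example matrix is an arbitrary one with positive eigenvalues so that √ and log are defined:

```python
import numpy as np

def apply_to_eigenvalues(A, f):
    """Apply a scalar function f to a diagonalisable matrix A via its eigenbasis."""
    eigenvalues, T_BS = np.linalg.eig(A)
    D_f = np.diag(f(eigenvalues))                  # f applied entrywise to the diagonal
    return T_BS @ D_f @ np.linalg.inv(T_BS)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                         # eigenvalues 3 and 1, both positive

sqrt_A = apply_to_eigenvalues(A, np.sqrt)
print(np.allclose(sqrt_A @ sqrt_A, A))             # -> True: sqrt(A) * sqrt(A) = A
exp_A = apply_to_eigenvalues(A, np.exp)            # matrix exponential e^A
print(np.allclose(apply_to_eigenvalues(exp_A, np.log), A))   # log(e^A) = A -> True
```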

Applications: data processing

- Problem: suppose we have a HUGE matrix, and we want to know approximately what it looks like
- Solution: diagonalise it using its basis B of eigenvectors, then throw away (= set to zero) all the little eigenvalues:
  diag(λ₁, λ₂, λ₃, ..., λₙ)_B  ↦  diag(λ₁, λ₂, 0, ..., 0)_B
- If there are only a few big λ's, and lots of little λ's, we get almost the same matrix back
- This is the basic trick used in principal component analysis (big data) and lossy data compression
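
(Illustrative sketch, not in the slides.) Zeroing the small eigenvalues looks like this in numpy; `truncate_small_eigenvalues` is a made-up helper name, and for symmetric matrices np.linalg.eigh would be the more robust routine:

```python
import numpy as np

def truncate_small_eigenvalues(A, k):
    """Approximate A by zeroing all but the k largest-magnitude eigenvalues."""
    eigenvalues, T_BS = np.linalg.eig(A)
    small = np.argsort(np.abs(eigenvalues))[:len(eigenvalues) - k]
    kept = eigenvalues.copy()
    kept[small] = 0.0                                 # throw away the little lambdas
    return T_BS @ np.diag(kept) @ np.linalg.inv(T_BS)

# A matrix dominated by one big eigenvalue (19.9) next to a tiny one (0.1):
A = np.array([[10.0,  9.9],
              [ 9.9, 10.0]])
print(truncate_small_eigenvalues(A, 1))   # ≈ [[9.95 9.95], [9.95 9.95]], close to A
```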

Length of a vector

- Each vector v = (x₁, ..., xₙ) ∈ Rⁿ has a length (aka norm), written as ‖v‖
- This ‖v‖ is a non-negative real number: ‖v‖ ∈ R, ‖v‖ ≥ 0
- Some special cases:
  - n = 1: so v ∈ R, with ‖v‖ = |v|
  - n = 2: so v = (x₁, x₂) ∈ R², and with Pythagoras: ‖v‖² = x₁² + x₂², and thus ‖v‖ = √(x₁² + x₂²)
  - n = 3: so v = (x₁, x₂, x₃) ∈ R³, and also with Pythagoras: ‖v‖² = x₁² + x₂² + x₃², and thus ‖v‖ = √(x₁² + x₂² + x₃²)
- In general, for v = (x₁, ..., xₙ) ∈ Rⁿ:
  ‖v‖ = √(x₁² + x₂² + · · · + xₙ²)

Distance between points

- Assume now we have two vectors v, w ∈ Rⁿ, written as:
  v = (x₁, ..., xₙ),  w = (y₁, ..., yₙ)
- What is the distance between the endpoints?
  - commonly written as d(v, w)
  - again, d(v, w) is a non-negative real
- For n = 2:
  d(v, w) = √((x₁ − y₁)² + (x₂ − y₂)²) = ‖v − w‖ = ‖w − v‖
- This will be used also for other n, so:
  d(v, w) = ‖v − w‖

Length is fundamental

- Distance can be obtained from the length of vectors
- Angles can also be obtained from length
- Both the length of vectors and the angles between vectors can be derived from the notion of inner product

Inner product definition

Definition
For vectors v = (x₁, ..., xₙ), w = (y₁, ..., yₙ) ∈ Rⁿ define their inner product as the real number:
⟨v, w⟩ = x₁y₁ + · · · + xₙyₙ = Σ_{1≤i≤n} xᵢyᵢ

Note: the length ‖v‖ can be expressed via the inner product:
‖v‖² = x₁² + · · · + xₙ² = ⟨v, v⟩,  so  ‖v‖ = √⟨v, v⟩.
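
(A small sketch, not part of the slides.) The definition translates directly into code; the helper names are our own:

```python
import numpy as np

def inner(v, w):
    return float(np.sum(v * w))          # <v, w> = x1*y1 + ... + xn*yn

def norm(v):
    return np.sqrt(inner(v, v))          # ||v|| = sqrt(<v, v>)

def dist(v, w):
    return norm(v - w)                   # d(v, w) = ||v - w||

v = np.array([3.0, 4.0])
print(inner(v, v), norm(v), dist(v, np.array([0.0, 0.0])))   # -> 25.0 5.0 5.0
```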

Properties of the inner product

1. The inner product is symmetric in v and w:
   ⟨v, w⟩ = ⟨w, v⟩
2. It is linear in v:
   ⟨v + v′, w⟩ = ⟨v, w⟩ + ⟨v′, w⟩,  ⟨av, w⟩ = a⟨v, w⟩
   and hence also in w (by symmetry):
   ⟨v, w + w′⟩ = ⟨v, w⟩ + ⟨v, w′⟩,  ⟨v, aw⟩ = a⟨v, w⟩
3. And it is positive definite:
   v ≠ 0 ⟹ ⟨v, v⟩ > 0

Inner products and angles, part I

For v = w = (1, 0), ⟨v, w⟩ = 1.
As we start to rotate w, ⟨v, w⟩ goes down until 0:
⟨v, w⟩ = 1,  ⟨v, w⟩ = 4/5,  ⟨v, w⟩ = 3/5,  ⟨v, w⟩ = 0, ...
and then goes to −1:
⟨v, w⟩ = 0,  ⟨v, w⟩ = −3/5,  ⟨v, w⟩ = −4/5,  ⟨v, w⟩ = −1, ...
then down to 0 again, then to 1, then repeats.
(The slide shows a picture of the rotating pair of vectors for each value.)

Cosine

Plotting these numbers against the angle between the vectors, we get a cosine-shaped curve.
It looks like ⟨v, w⟩ depends on the cosine of the angle between v and w.

- In fact, if ‖v‖ = ‖w‖ = 1, it is true that ⟨v, w⟩ = cos γ.
- For the general equation, we need to divide by their lengths:
  cos(γ) = ⟨v, w⟩ / (‖v‖ · ‖w‖)
- Remember this equation!

Inner products and angles, part II

Proof (sketch). For any two vectors, we can make a triangle with sides of length a = ‖v‖, b = ‖w‖ and c = d(v, w) := ‖v − w‖, where γ is the angle between v and w.
Then, we apply the cosine rule from trigonometry to get:
cos(γ) = (a² + b² − c²) / (2ab) = (‖v‖² + ‖w‖² − ‖v − w‖²) / (2‖v‖‖w‖)
then after expanding the definition of ‖·‖ and some work we get:
cos(γ) = ⟨v, w⟩ / (‖v‖ · ‖w‖)

Examples

- What is the angle between (1, 1) and (−1, −1)?
  cos γ = ⟨v, w⟩ / (‖v‖‖w‖) = −2 / (√2 · √2) = −1  ⟹  γ = π
- What is the angle between (1, 0) and (1, 1)?
  cos γ = ⟨v, w⟩ / (‖v‖‖w‖) = 1 / (1 · √2) = 1/√2  ⟹  γ = π/4
- What is the angle between (1, 0) and (0, 1)?
  cos γ = ⟨v, w⟩ / (‖v‖‖w‖) = 0 / (‖v‖‖w‖) = 0  ⟹  γ = π/2
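
(Added check, not in the slides.) The three examples, computed with a small helper built on the formula cos(γ) = ⟨v, w⟩ / (‖v‖ ‖w‖):

```python
import numpy as np

def angle(v, w):
    cos_gamma = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.arccos(np.clip(cos_gamma, -1.0, 1.0))   # clip guards against rounding noise

print(angle(np.array([1.0, 1.0]), np.array([-1.0, -1.0])))  # -> pi   (≈ 3.1416)
print(angle(np.array([1.0, 0.0]), np.array([1.0, 1.0])))    # -> pi/4 (≈ 0.7854)
print(angle(np.array([1.0, 0.0]), np.array([0.0, 1.0])))    # -> pi/2 (≈ 1.5708)
```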

Orthogonality

Definition
Two vectors v, w are called orthogonal if ⟨v, w⟩ = 0. This is written as v ⊥ w.

Explanation: orthogonality means that the cosine of the angle between the two vectors is 0; hence they are perpendicular.

Example
Which vectors (x, y) ∈ R² are orthogonal to (1, 1)?
Examples are (1, −1) or (−1, 1), or more generally (x, −x).
This follows from an easy computation:
⟨(x, y), (1, 1)⟩ = 0  ⟺  x + y = 0  ⟺  y = −x.

Orthogonality and independence

Lemma
Call a set {v₁, ..., vₙ} of non-zero vectors orthogonal if every pair of different vectors is orthogonal. Then:
1. orthogonal vectors are always independent,
2. independent vectors are not always orthogonal.

Proof: The second point is easy: (1, 1) and (1, 0) are independent, but not orthogonal.

Orthogonality and independence (cntd)

(Orthogonality ⟹ Independence): assume {v₁, ..., vₙ} is orthogonal and a₁v₁ + · · · + aₙvₙ = 0. Then for each i ≤ n:
0 = ⟨0, vᵢ⟩ = ⟨a₁v₁ + · · · + aₙvₙ, vᵢ⟩
  = ⟨a₁v₁, vᵢ⟩ + · · · + ⟨aₙvₙ, vᵢ⟩
  = a₁⟨v₁, vᵢ⟩ + · · · + aₙ⟨vₙ, vᵢ⟩
  = aᵢ⟨vᵢ, vᵢ⟩   since ⟨vⱼ, vᵢ⟩ = 0 for j ≠ i
But since vᵢ ≠ 0 we have ⟨vᵢ, vᵢ⟩ ≠ 0, and thus aᵢ = 0.
This holds for each i, so a₁ = · · · = aₙ = 0, and we have proven independence.

Orthogonal and orthonormal bases

Definition
A basis B = {v₁, ..., vₙ} of a vector space with an inner product is called:
1. orthogonal if B is an orthogonal set: ⟨vᵢ, vⱼ⟩ = 0 if i ≠ j
2. orthonormal if it is orthogonal and ⟨vᵢ, vᵢ⟩ = ‖vᵢ‖ = 1, for each i

Example
The standard basis (1, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, ..., 0, 1) is an orthonormal basis of Rⁿ.

From independence to orthogonality

- Not every basis is an orthonormal basis: every orthonormal basis is a basis, but not the other way around.
- But, by taking linear combinations of basis vectors, we can transform a basis into a (better) orthonormal basis:
  B = {v₁, ..., vₙ}  ↦  B′ = {w₁, ..., wₙ}
- Making basis vectors normalised is easy:
  vᵢ  ↦  wᵢ := (1/‖vᵢ‖) vᵢ
- Making vectors orthogonal is also always possible, using a procedure called Gram-Schmidt orthogonalisation, sketched below.
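
(A minimal sketch, not part of the slides, of the Gram-Schmidt procedure just mentioned; `gram_schmidt` is our own helper name and assumes the input vectors are linearly independent.)

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of independent vectors into an orthonormal basis of their span."""
    orthonormal = []
    for v in vectors:
        for w in orthonormal:
            v = v - np.dot(v, w) * w                 # remove the component along each earlier w
        orthonormal.append(v / np.linalg.norm(v))    # then normalise what is left
    return orthonormal

basis = [np.array([1.0, 1.0]), np.array([1.0, 0.0])]
for w in gram_schmidt(basis):
    print(w)    # two unit vectors with <w1, w2> = 0
```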

In summary

- The inner product g

