
A Second Course in Linear Algebra

Robert A. Beezer
University of Puget Sound
DRAFT February 27, 2017 DRAFT

© 2004–2014 Robert A. Beezer

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the appendix entitled "GNU Free Documentation License."

Acknowledgements

The writing of a substantial portion of this text was supported by a Course Release Unit from the University Enrichment Committee at the University of Puget Sound for the Spring 2014 semester, in addition to previous sabbatical leaves. Development of the systems used to produce this textbook as an open source text has been supported by grants from the National Science Foundation (DUE 1022574) and the Shuttleworth Foundation (Flash Grant, 2013).


Contents

Acknowledgements

1 The Fundamentals
  1.1 Introduction
  1.2 Direct Sums
  1.3 Orthogonal Complements
  1.4 Invariant Subspaces
  1.5 Reflectors
  1.6 Projectors
  1.7 Normal Matrices
  1.8 Positive Semi-Definite Matrices

2 Matrix Decompositions
  2.1 LU (Triangular) Decomposition
  2.2 QR (Gram-Schmidt) Decomposition
  2.3 Singular Value Decomposition
  2.4 Cholesky Decomposition

3 Canonical Forms
  3.1 Generalized Eigenspaces
  3.2 Nilpotent Linear Transformations
  3.3 Jordan Canonical Form

4 Applications
  4.1 Least Squares
  4.2 Curve Fitting
  4.3 Linear Recurrence Relations

5 Topics
  5.1 Vandermonde Matrices
  5.2 Determinants

A GNU Free Documentation License


Chapter 1

The Fundamentals

1.1 Introduction

This book is about advanced topics in linear algebra. So we presume you have some experience with matrix algebra, vector spaces (possibly abstract ones), eigenvalues, linear transformations, and matrix representations of linear transformations. All of this material can be found in A First Course in Linear Algebra, which we will reference frequently.

Our approach is mathematical, which means we include proofs of our results. However, we are also practical, and will not always be as general as we could be. For example, we will stick to a single inner product throughout (the sesquilinear one that is most useful when employing complex numbers). We will sometimes be careful about our field of scalars, but will not dwell on the distinctions peculiar to the real numbers (versus the algebraically closed complex numbers). This is not a course in numerical linear algebra, but much of what we do provides the mathematical underpinnings of that topic, so this could be a very useful resource for study in that area. We will make mention of algorithmic performance, relying on Trefethen and Bau's excellent Numerical Linear Algebra for details.

Many topics we consider are motivated by trying to find simpler versions of matrices. Here "simpler" can be taken to mean many zero entries. Barring a zero entry, then maybe an entry equal to one is second-best. An overall form that is much like a diagonal matrix is also desirable, since diagonal matrices are simple to work with. (Forward reference to exercise.) A familiar example may help to make these ideas more precise.

Example 1.1.1 (Reduced Row-Echelon Form as a Factorization). Given an $m \times n$ matrix $A$, we know that its reduced row-echelon form is unique (Theorem RREFU). We also know that we can accomplish row operations by multiplying $A$ on the left by a (nonsingular) elementary matrix (Subsection DM.EM). Suppose we perform repeated row operations to transform $A$ into a matrix in reduced row-echelon form, $B$. Then the product of the elementary matrices is a square nonsingular matrix $J$ such that
$$B = JA,$$
or equivalently,
$$A = J^{-1}B.$$
We call the second version a factorization, or matrix decomposition, of $A$ (though some might use the same terms for the first version, perhaps saying it is a factorization of $B$). The pieces of this decomposition have certain properties. The matrix $J^{-1}$ is a nonsingular matrix of size $m$. The matrix $B$ has an abundance of zero entries, and some strategically placed "leading ones" which signify the pivot columns. The exact structure of $B$ is described by Definition RREF, and Theorem RREF tells us that we can accomplish this decomposition given any matrix $A$.

If $A$ is not of full rank, then there are many possibilities for the matrix $J$, even though $B$ is unique. However, results on extended echelon form (Subsection FS.PEEF) suggest a choice for $J$ that is unambiguous. We say that choice is canonical. This example gives the following theorem, where we have changed the notation slightly.
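The factorization of Example 1.1.1 is easy to experiment with computationally. The following Sage sketch (the matrix used here is an arbitrary illustration, not one from the text) row-reduces the augmented matrix $[A \mid I_m]$; the left block of the result is $B$ and the right block is a suitable $J$, exactly in the spirit of extended echelon form.

    # Sage sketch: an illustrative rank-deficient matrix, not one from the text
    A = matrix(QQ, [[1, 2, 1, 3],
                    [2, 4, 0, 2],
                    [1, 2, 2, 4]])
    m, n = A.nrows(), A.ncols()
    # Row-reduce [A | I]; the left block is B = rref(A), the right block records
    # the row operations, i.e. a nonsingular J with B = J*A
    N = A.augment(identity_matrix(QQ, m)).rref()
    B = N.matrix_from_columns(range(n))
    J = N.matrix_from_columns(range(n, n + m))
    print(B == A.rref())       # True
    print(B == J * A)          # True
    print(J.is_invertible())   # True, so A = J^(-1) * B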

Again, many of the topics in this book will have a flavor similar to the previous example and theorem. However, we will often need to limit the possibilities for the original matrix (it may need to be square, or its eigenvalues may need certain properties). We may get more specific information about the components of the factorization, or we may get less. We will also be interested in obtaining canonical forms of matrices. You can view orthonormal diagonalization (Section OD) as a good example of another matrix decomposition, and we will cover it again in some detail in Section ⟨⟨section on orthonormal diagonalization/Schur⟩⟩.

1.2 Direct Sums

1.2.1 Direct Sums

Some of the more advanced ideas in linear algebra are closely related to decomposing (Proof Technique DC) vector spaces into direct sums of subspaces. A direct sum is a short-hand way to describe the relationship between a vector space and two, or more, of its subspaces. As we will use it, it is not a way to construct new vector spaces from others.

Definition 1.2.1 (Direct Sum). Suppose that $V$ is a vector space with two subspaces $U$ and $W$ such that for every $v \in V$,

1. there exist vectors $u \in U$, $w \in W$ such that $v = u + w$;

2. if $v = u_1 + w_1$ and $v = u_2 + w_2$ where $u_1, u_2 \in U$, $w_1, w_2 \in W$, then $u_1 = u_2$ and $w_1 = w_2$.

Then $V$ is the direct sum of $U$ and $W$ and we write $V = U \oplus W$.

Informally, when we say $V$ is the direct sum of the subspaces $U$ and $W$, we are saying that each vector of $V$ can always be expressed as the sum of a vector from $U$ and a vector from $W$, and this expression can only be accomplished in one way (i.e. uniquely). This statement should begin to feel something like our definitions of nonsingular matrices (Definition NM) and linear independence (Definition LI). It should not be hard to imagine the natural extension of this definition to the case of more than two subspaces. Could you provide a careful definition of $V = U_1 \oplus U_2 \oplus U_3 \oplus \cdots \oplus U_m$ (Exercise PD.M50)?

Example 1.2.2 (Simple direct sum). In $\mathbb{C}^3$, define
$$v_1 = \begin{bmatrix} 3 \\ 2 \\ 5 \end{bmatrix} \qquad v_2 = \begin{bmatrix} -1 \\ 2 \\ 1 \end{bmatrix} \qquad v_3 = \begin{bmatrix} 2 \\ 1 \\ -2 \end{bmatrix}.$$
Then $\mathbb{C}^3 = \langle\{v_1, v_2\}\rangle \oplus \langle\{v_3\}\rangle$. This statement derives from the fact that $B = \{v_1, v_2, v_3\}$ is a basis for $\mathbb{C}^3$. The spanning property of $B$ yields the decomposition of any vector into a sum of vectors from the two subspaces, and the linear independence of $B$ yields the uniqueness of the decomposition. We will illustrate these claims with a numerical example. Choose
$$v = \begin{bmatrix} 10 \\ 1 \\ 6 \end{bmatrix}.$$
Then
$$v = 2v_1 + (-2)v_2 + 1v_3 = \bigl(2v_1 + (-2)v_2\bigr) + \bigl(1v_3\bigr),$$
where we have added parentheses for emphasis. Obviously $1v_3 \in \langle\{v_3\}\rangle$, while $2v_1 + (-2)v_2 \in \langle\{v_1, v_2\}\rangle$. Theorem VRRB provides the uniqueness of the scalars in these linear combinations.
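The claims of Example 1.2.2 are easy to confirm with Sage. A sketch (the field $\mathbb{Q}$ replaces $\mathbb{C}$ only because these particular entries are rational, so exact arithmetic suffices):

    # Sage sketch: verify the direct sum claim of Example 1.2.2
    V = QQ^3
    v1 = vector(QQ, [3, 2, 5])
    v2 = vector(QQ, [-1, 2, 1])
    v3 = vector(QQ, [2, 1, -2])
    U = V.span([v1, v2])
    W = V.span([v3])
    # direct sum: the subspaces together fill out V and intersect trivially
    print((U + W).dimension() == V.dimension())   # True
    print(U.intersection(W).dimension() == 0)     # True
    # the unique scalars expressing v relative to the basis {v1, v2, v3}
    v = vector(QQ, [10, 1, 6])
    P = column_matrix([v1, v2, v3])
    print(P.solve_right(v))                       # (2, -2, 1)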

Example 1.2.2 is easy to generalize into a theorem.

Theorem 1.2.3 (Direct Sum From a Basis). Suppose that $V$ is a vector space with a basis $B = \{v_1, v_2, v_3, \ldots, v_n\}$ and $m \le n$. Define
$$U = \langle\{v_1, v_2, v_3, \ldots, v_m\}\rangle \qquad W = \langle\{v_{m+1}, v_{m+2}, v_{m+3}, \ldots, v_n\}\rangle.$$
Then $V = U \oplus W$.

Proof. Choose any vector $v \in V$. Then by Theorem VRRB there are unique scalars $a_1, a_2, a_3, \ldots, a_n$ such that
$$v = a_1v_1 + a_2v_2 + a_3v_3 + \cdots + a_nv_n = (a_1v_1 + a_2v_2 + \cdots + a_mv_m) + (a_{m+1}v_{m+1} + a_{m+2}v_{m+2} + \cdots + a_nv_n) = u + w,$$
where we have implicitly defined $u$ and $w$ in the last line. It should be clear that $u \in U$, and similarly, $w \in W$ (and not simply by the choice of their names).

Suppose we had another decomposition of $v$, say $v = u^* + w^*$. Then we could write $u^*$ as a linear combination of $v_1$ through $v_m$, say using scalars $b_1, b_2, b_3, \ldots, b_m$. And we could write $w^*$ as a linear combination of $v_{m+1}$ through $v_n$, say using scalars $c_1, c_2, c_3, \ldots, c_{n-m}$. These two collections of scalars would then together give a linear combination of $v_1$ through $v_n$ that equals $v$. By the uniqueness of $a_1, a_2, a_3, \ldots, a_n$, we have $a_i = b_i$ for $1 \le i \le m$ and $a_{m+i} = c_i$ for $1 \le i \le n - m$. From the equality of these scalars we conclude that $u = u^*$ and $w = w^*$. So with both conditions of Definition 1.2.1 fulfilled we see that $V = U \oplus W$.

Given one subspace of a vector space, we can always find another subspace that will pair with the first to form a direct sum. The main idea of this theorem, and its proof, is the idea of extending a linearly independent subset into a basis with repeated applications of Theorem ELIS.

Theorem 1.2.4 (Direct Sum From One Subspace). Suppose that $U$ is a subspace of the vector space $V$. Then there exists a subspace $W$ of $V$ such that $V = U \oplus W$.

Proof. If $U = V$, then choose $W = \{0\}$. Otherwise, choose a basis $B = \{v_1, v_2, v_3, \ldots, v_m\}$ for $U$. Then since $B$ is a linearly independent set, Theorem ELIS tells us there is a vector $v_{m+1}$ in $V$, but not in $U$, such that $B \cup \{v_{m+1}\}$ is linearly independent. Define the subspace $U_1 = \langle B \cup \{v_{m+1}\}\rangle$.

We can repeat this procedure, in the case where $U_1 \ne V$, creating a new vector $v_{m+2}$ in $V$, but not in $U_1$, and a new subspace $U_2 = \langle B \cup \{v_{m+1}, v_{m+2}\}\rangle$. If we continue repeating this procedure, eventually, $U_k = V$ for some $k$, and we can no longer apply Theorem ELIS. No matter, in this case $B \cup \{v_{m+1}, v_{m+2}, \ldots, v_{m+k}\}$ is a linearly independent set that spans $V$, i.e. a basis for $V$.

Define $W = \langle\{v_{m+1}, v_{m+2}, \ldots, v_{m+k}\}\rangle$. We are now exactly in position to apply Theorem 1.2.3 and see that $V = U \oplus W$.

There are several different ways to define a direct sum. Our next two theorems give equivalences (Proof Technique E) for direct sums, and therefore could have been employed as definitions. The first should further cement the notion that a direct sum has some connection with linear independence.

Theorem 1.2.5 (Direct Sums and Zero Vectors). Suppose $U$ and $W$ are subspaces of the vector space $V$. Then $V = U \oplus W$ if and only if

1. for every $v \in V$, there exist vectors $u \in U$, $w \in W$ such that $v = u + w$;

2. whenever $0 = u + w$ with $u \in U$, $w \in W$, then $u = w = 0$.

Proof. The first condition is identical in the definition and the theorem, so we only need to establish the equivalence of the second conditions.

(⇒) Assume that $V = U \oplus W$, according to Definition 1.2.1. By Property Z, $0 \in V$ and $0 = 0 + 0$. If we also assume that $0 = u + w$, then the uniqueness of the decomposition gives $u = 0$ and $w = 0$.

(⇐) Suppose that $v \in V$, $v = u_1 + w_1$ and $v = u_2 + w_2$ where $u_1, u_2 \in U$, $w_1, w_2 \in W$. Then
$$0 = v - v = (u_1 + w_1) - (u_2 + w_2) = (u_1 - u_2) + (w_1 - w_2).$$
By Property AC, $u_1 - u_2 \in U$ and $w_1 - w_2 \in W$. We can now apply our hypothesis, the second statement of the theorem, to conclude that
$$u_1 - u_2 = 0 \qquad\qquad w_1 - w_2 = 0$$
$$u_1 = u_2 \qquad\qquad w_1 = w_2,$$
which establishes the uniqueness needed for the second condition of the definition.

Our second equivalence lends further credence to calling a direct sum a decomposition. The two subspaces of a direct sum have no (nontrivial) elements in common.

Theorem 1.2.6 (Direct Sums and Zero Intersection). Suppose $U$ and $W$ are subspaces of the vector space $V$. Then $V = U \oplus W$ if and only if

1. for every $v \in V$, there exist vectors $u \in U$, $w \in W$ such that $v = u + w$;

2. $U \cap W = \{0\}$.

Proof. The first condition is identical in the definition and the theorem, so we only need to establish the equivalence of the second conditions.

(⇒) Assume that $V = U \oplus W$, according to Definition 1.2.1. By Property Z and Definition SI, $\{0\} \subseteq U \cap W$. To establish the opposite inclusion, suppose that $x \in U \cap W$. Then, since $x$ is an element of both $U$ and $W$, we can write two decompositions of $x$ as a vector from $U$ plus a vector from $W$,
$$x = x + 0 \qquad\qquad x = 0 + x.$$
By the uniqueness of the decomposition, we see (twice) that $x = 0$ and $U \cap W \subseteq \{0\}$. Applying Definition SE, we have $U \cap W = \{0\}$.

(⇐) Assume that $U \cap W = \{0\}$. And assume further that $v \in V$ is such that $v = u_1 + w_1$ and $v = u_2 + w_2$ where $u_1, u_2 \in U$, $w_1, w_2 \in W$. Define $x = u_1 - u_2$. Then by Property AC, $x \in U$. Also,
$$x = u_1 - u_2 = (v - w_1) - (v - w_2) = (v - v) + (w_2 - w_1) = w_2 - w_1.$$

So $x \in W$ by Property AC. Thus, $x \in U \cap W = \{0\}$ (Definition SI). So $x = 0$ and
$$u_1 - u_2 = 0 \qquad\qquad w_2 - w_1 = 0$$
$$u_1 = u_2 \qquad\qquad w_2 = w_1,$$
yielding the desired uniqueness of the second condition of the definition.

If the statement of Theorem 1.2.5 did not remind you of linear independence, the next theorem should establish the connection.

Theorem 1.2.7 (Direct Sums and Linear Independence). Suppose $U$ and $W$ are subspaces of the vector space $V$ with $V = U \oplus W$. Suppose that $R$ is a linearly independent subset of $U$ and $S$ is a linearly independent subset of $W$. Then $R \cup S$ is a linearly independent subset of $V$.

Proof. Let $R = \{u_1, u_2, u_3, \ldots, u_k\}$ and $S = \{w_1, w_2, w_3, \ldots, w_\ell\}$. Begin with a relation of linear dependence (Definition RLD) on the set $R \cup S$ using scalars $a_1, a_2, a_3, \ldots, a_k$ and $b_1, b_2, b_3, \ldots, b_\ell$. Then,
$$0 = a_1u_1 + a_2u_2 + a_3u_3 + \cdots + a_ku_k + b_1w_1 + b_2w_2 + b_3w_3 + \cdots + b_\ell w_\ell$$
$$= (a_1u_1 + a_2u_2 + a_3u_3 + \cdots + a_ku_k) + (b_1w_1 + b_2w_2 + b_3w_3 + \cdots + b_\ell w_\ell) = u + w,$$
where we have made an implicit definition of the vectors $u \in U$, $w \in W$.

Applying Theorem 1.2.5 we conclude that
$$u = a_1u_1 + a_2u_2 + a_3u_3 + \cdots + a_ku_k = 0$$
$$w = b_1w_1 + b_2w_2 + b_3w_3 + \cdots + b_\ell w_\ell = 0.$$
Now the linear independence of $R$ and $S$ (individually) yields
$$a_1 = a_2 = a_3 = \cdots = a_k = 0 \qquad\qquad b_1 = b_2 = b_3 = \cdots = b_\ell = 0.$$
Forced to acknowledge that only a trivial linear combination yields the zero vector, Definition LI says the set $R \cup S$ is linearly independent in $V$.

Our last theorem in this collection will go some ways towards explaining the word "sum" in the moniker "direct sum."

Theorem 1.2.8 (Direct Sums and Dimension). Suppose $U$ and $W$ are subspaces of the vector space $V$ with $V = U \oplus W$. Then $\dim(V) = \dim(U) + \dim(W)$.

Proof. We will establish this equality of positive integers with two inequalities. We will need a basis of $U$ (call it $B$) and a basis of $W$ (call it $C$).

First, note that $B$ and $C$ have sizes equal to the dimensions of the respective subspaces. The union of these two linearly independent sets, $B \cup C$, will be linearly independent in $V$ by Theorem 1.2.7. Further, the two bases have no vectors in common by Theorem 1.2.6, since $B \cap C \subseteq \{0\}$ and the zero vector is never an element of a linearly independent set (Exercise LI.T10).

So the size of the union is exactly the sum of the dimensions of $U$ and $W$. By Theorem G the size of $B \cup C$ cannot exceed the dimension of $V$ without being linearly dependent. These observations give us $\dim(U) + \dim(W) \le \dim(V)$.

Grab any vector $v \in V$. Then by Theorem 1.2.6 we can write $v = u + w$ with $u \in U$ and $w \in W$. Individually, we can write $u$ as a linear combination of the basis elements in $B$, and similarly, we can write $w$ as a linear combination of the basis elements in $C$, since the bases are spanning sets for their respective subspaces. These two sets of scalars will provide a linear combination of all of the vectors in $B \cup C$ which will equal $v$. The upshot of this is that $B \cup C$ is a spanning set for $V$. By Theorem G, the size of $B \cup C$ cannot be smaller than the dimension of $V$ without failing to span $V$. These observations give us $\dim(U) + \dim(W) \ge \dim(V)$.

There is a certain appealing symmetry in the previous proof, where both linear independence and spanning properties of the bases are used, both of the first two conclusions of Theorem G are employed, and we have quoted both of the two conditions of Theorem 1.2.6.

One final theorem tells us that we can successively decompose direct sums into sums of smaller and smaller subspaces.

Theorem 1.2.9 (Repeated Direct Sums). Suppose $V$ is a vector space with subspaces $U$ and $W$ with $V = U \oplus W$. Suppose that $X$ and $Y$ are subspaces of $W$ with $W = X \oplus Y$. Then $V = U \oplus X \oplus Y$.

Proof. Suppose that $v \in V$. Then due to $V = U \oplus W$, there exist vectors $u \in U$ and $w \in W$ such that $v = u + w$. Due to $W = X \oplus Y$, there exist vectors $x \in X$ and $y \in Y$ such that $w = x + y$. All together,
$$v = u + w = u + x + y,$$
which would be the first condition of a definition of a 3-way direct sum.

Now consider the uniqueness. Suppose that
$$v = u_1 + x_1 + y_1 \qquad\qquad v = u_2 + x_2 + y_2.$$
Because $x_1 + y_1 \in W$, $x_2 + y_2 \in W$, and $V = U \oplus W$, we conclude that
$$u_1 = u_2 \qquad\qquad x_1 + y_1 = x_2 + y_2.$$
From the second equality, an application of $W = X \oplus Y$ yields the conclusions $x_1 = x_2$ and $y_1 = y_2$. This establishes the uniqueness of the decomposition of $v$ into a sum of vectors from $U$, $X$ and $Y$.

Remember that when we write $V = U \oplus W$ there always needs to be a "superspace," in this case $V$. The statement $U \oplus W$ is meaningless. Writing $V = U \oplus W$ is simply a shorthand for a somewhat complicated relationship between $V$, $U$ and $W$, as described in the two conditions of Definition 1.2.1, or Theorem 1.2.5, or Theorem 1.2.6.

Theorem 1.2.3 and Theorem 1.2.4 give us sure-fire ways to build direct sums, while Theorem 1.2.7, Theorem 1.2.8 and Theorem 1.2.9 tell us interesting properties of direct sums.

This subsection has been long on theorems and short on examples. If we were to use the term "lemma" we might have chosen to label some of these results as such, since they will be important tools in other proofs, but may not have much interest on their own (see Proof Technique LC). We will be referencing these results heavily in later sections, and will remind you then to come back for a second look.

1.3 Orthogonal Complements

Theorem 1.2.9 mentions repeated sums, which are of interest. However, when we begin with a vector space $V$ and a single subspace $U$, we can ask about the existence of another subspace $W$ such that $V = U \oplus W$. The answer is that such a $W$ always exists, and we then refer to it as a complement of $U$.

Definition 1.3.1 (Subspace Complement). Suppose that $V$ is a vector space with a subspace $U$. If $W$ is a subspace such that $V = U \oplus W$, then $W$ is a complement of $U$.

Every subspace has a complement, and generally it is not unique.

Lemma 1.3.2 (Every Subspace has a Complement). Suppose that $V$ is a vector space with a subspace $U$. Then there exists a subspace $W$ such that $V = U \oplus W$, so $W$ is a complement of $U$.

Proof. Suppose that $\dim(V) = n$ and $\dim(U) = k$, and let $B = \{u_1, u_2, u_3, \ldots, u_k\}$ be a basis of $U$. With $n - k$ applications of Theorem ELIS we obtain vectors $v_1, v_2, v_3, \ldots, v_{n-k}$ that successively create bases $B_i = \{u_1, u_2, u_3, \ldots, u_k, v_1, v_2, v_3, \ldots, v_i\}$, $0 \le i \le n - k$, for subspaces $U = U_0, U_1, \ldots, U_{n-k} = V$, where $\dim(U_i) = k + i$.

Define $W = \langle\{v_1, v_2, v_3, \ldots, v_{n-k}\}\rangle$. Since $\{u_1, u_2, u_3, \ldots, u_k, v_1, v_2, v_3, \ldots, v_{n-k}\}$ is a basis for $V$ and $\{u_1, u_2, u_3, \ldots, u_k\}$ is a basis for $U$, we can apply Theorem 1.2.3 (Direct Sum From a Basis) to see that $V = U \oplus W$, so $W$ is a complement of $U$. (Compare with Theorem 1.2.4, Direct Sum From One Subspace, which has identical content, but a different write-up.)

The freedom given when we "extend" a linearly independent set (or basis) to create a basis for the complement means that we can create a complement in many ways, so it is not unique.

Exercise 1.3.3. Consider the subspace $U$ of $V = \mathbb{C}^3$,
$$U = \left\langle\left\{ \begin{bmatrix} 1 \\ 6 \\ 8 \end{bmatrix}, \begin{bmatrix} 1 \\ 5 \\ 7 \end{bmatrix} \right\}\right\rangle.$$
Create two different complements of $U$, being sure to prove that your complements are unequal (and not simply have unequal bases). Before reading ahead, can you think of an ideal (or "canonical") choice for the complement?
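One way to experiment with complements, for this exercise or in general, is to carry out the basis-extension idea behind Theorem 1.2.4 and Lemma 1.3.2 directly. Here is a Sage sketch that greedily extends a basis of $U$ with standard basis vectors; the subspace below reuses the vectors of Exercise 1.3.3, but any spanning set could be substituted.

    # Sage sketch: build a complement W of U by extending a basis of U
    # with standard basis vectors (the idea behind Theorem 1.2.4 / Lemma 1.3.2)
    V = QQ^3
    U = V.span([vector(QQ, [1, 6, 8]), vector(QQ, [1, 5, 7])])
    extension = []
    current = U
    for e in V.basis():
        if e not in current:                        # the Theorem ELIS step
            extension.append(e)
            current = V.span(list(U.basis()) + extension)
    W = V.span(extension)
    print(U.intersection(W).dimension() == 0)       # trivial intersection
    print((U + W).dimension() == V.dimension())     # U + W = V, so V is the direct sum

Choosing standard basis vectors in a different order (or any other vectors outside the running span) produces a different, equally valid complement.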

Exercise 1.3.4. Consider the subspace $U$ of $V = \mathbb{C}^5$,
$$U = \left\langle\left\{ \begin{bmatrix} 1 \\ 4 \\ 2 \\ 6 \\ 5 \end{bmatrix}, \begin{bmatrix} 1 \\ 4 \\ 1 \\ 4 \\ 3 \end{bmatrix} \right\}\right\rangle.$$
Create a complement of $U$. (If you have read ahead, do not create an orthogonal complement for this exercise.)

With an inner product, and a notion of orthogonality, we can define a canonical, and useful, complement for every subspace.

Definition 1.3.5 (Orthogonal Complement). Suppose that $V$ is a vector space with a subspace $U$. Then the orthogonal complement of $U$ (relative to $V$) is
$$U^\perp = \{ v \in V \mid \langle v, u\rangle = 0 \text{ for every } u \in U \}.$$

A matrix formulation of the orthogonal complement will help us establish that the moniker "complement" is deserved.

Theorem 1.3.6 (Orthogonal Complement as a Null Space). Suppose that $V$ is a vector space with a subspace $U$. Let $A$ be a matrix whose columns are a spanning set for $U$. Then $U^\perp = \mathcal{N}(A^*)$.

Proof. Membership in the orthogonal complement requires a vector to be orthogonal to every vector of $U$. However, because of the linearity of the inner product (Theorem IPVA, Theorem IPSM), it is equivalent to require that a vector be orthogonal to each member of a spanning set for $U$. So membership in the orthogonal complement is equivalent to being orthogonal to each column of $A$. We obtain the desired set equality from the equivalences,
$$v \in U^\perp \iff v^*A = 0 \iff A^*v = 0 \iff v \in \mathcal{N}(A^*).$$

Theorem 1.3.7 (Orthogonal Complement Decomposition). Suppose that $V$ is a vector space with a subspace $U$. Then $V = U \oplus U^\perp$.

Proof. We first establish that $U \cap U^\perp = \{0\}$. Suppose $u \in U$ and $u \in U^\perp$. Then $\langle u, u\rangle = 0$ and by Theorem PIP we conclude that $u = 0$.

We now show that an arbitrary vector $v$ can be written as a sum of vectors from $U$ and $U^\perp$. Without loss of generality, we can assume we have an orthonormal basis for $U$, for if not, we can apply the Gram-Schmidt process to any basis of $U$ to create an orthogonal spanning set, whose individual vectors can be scaled to have norm one (Theorem GSP).

Denote this basis as $B = \{u_1, u_2, u_3, \ldots, u_k\}$. Define the vector $v_1$ as a linear combination of the vectors of $B$, so $v_1 \in U$:
$$v_1 = \sum_{i=1}^{k} \langle u_i, v\rangle\, u_i.$$
Define $v_2 = v - v_1$, so trivially by construction, $v = v_1 + v_2$. It remains to show that $v_2 \in U^\perp$. We repeatedly use properties of the inner product. This construction and proof may remind you of the Gram-Schmidt process. For $1 \le j \le k$,
$$\langle v_2, u_j\rangle = \langle v, u_j\rangle - \langle v_1, u_j\rangle$$
$$= \langle v, u_j\rangle - \sum_{i=1}^{k}\bigl\langle \langle u_i, v\rangle u_i,\, u_j\bigr\rangle$$
$$= \langle v, u_j\rangle - \sum_{i=1}^{k}\overline{\langle u_i, v\rangle}\,\langle u_i, u_j\rangle$$
$$= \langle v, u_j\rangle - \overline{\langle u_j, v\rangle}\,\langle u_j, u_j\rangle$$
$$= \langle v, u_j\rangle - \langle v, u_j\rangle = 0.$$
We have fulfilled the hypotheses of Theorem 1.2.5 and so can say $V = U \oplus U^\perp$.

Theorem 1.3.7 gives us a canonical choice of a complementary subspace, which has useful orthogonality properties. It also allows us to decompose any vector (uniquely) into an element of a subspace, plus an orthogonal vector. This might remind you in some ways of "resolving a vector into components" if you have studied physics some.

Given a matrix, we get a natural vector space decomposition.

Corollary 1.3.8 (Matrix Subspace Decomposition). Suppose that $A$ is an $m \times n$ matrix. Then
$$\mathbb{C}^m = \mathcal{C}(A) \oplus \mathcal{C}(A)^\perp = \mathcal{C}(A) \oplus \mathcal{N}(A^*).$$

Proof. Theorem 1.3.7 provides the first equality and Theorem 1.3.6 gives the second.

Exercise 1.3.9. Compute the orthogonal complement of the subspace $U \subseteq \mathbb{C}^3$,
$$U = \left\langle\left\{ \begin{bmatrix} 1 \\ -1 \\ 5 \end{bmatrix}, \begin{bmatrix} 3 \\ 1 \\ 3 \end{bmatrix} \right\}\right\rangle.$$

Solution. Form the matrix $A$ whose columns are the two basis vectors given for $U$, and compute the null space $\mathcal{N}(A^*)$ by row-reducing the matrix (Theorem 1.3.6):
$$A^* = \begin{bmatrix} 1 & -1 & 5 \\ 3 & 1 & 3 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & -3 \end{bmatrix}.$$
So
$$U^\perp = \mathcal{N}(A^*) = \left\langle\left\{ \begin{bmatrix} -2 \\ 3 \\ 1 \end{bmatrix} \right\}\right\rangle.$$
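This computation is easy to reproduce in Sage. A sketch: build $A$ from the spanning vectors, take the conjugate-transpose, and read off $U^\perp$ as a null space (Theorem 1.3.6), then confirm the direct sum of Theorem 1.3.7.

    # Sage sketch: U-perp as the null space of A-star (Theorem 1.3.6)
    u1 = vector(QQ, [1, -1, 5])
    u2 = vector(QQ, [3, 1, 3])
    A = column_matrix([u1, u2])
    Uperp = A.conjugate_transpose().right_kernel()
    print(Uperp.basis())     # a single basis vector, proportional to (-2, 3, 1)
    # Theorem 1.3.7: U and U-perp decompose the ambient space as a direct sum
    V = QQ^3
    U = V.span([u1, u2])
    print(U.intersection(Uperp).dimension() == 0)   # True
    print((U + Uperp).dimension() == 3)             # True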

Exercise 1.3.10. Compute the orthogonal complements of the two subspaces from Exercises 1.3.3 and 1.3.4. For the subspace of $\mathbb{C}^5$ verify that your first complement was not the orthogonal complement (or return to the exercise and find a complement that is not orthogonal).

1.4 Invariant Subspaces

1.4.1 Invariant Subspaces

Definition 1.4.1 (Invariant Subspace). Suppose that $T: V \to V$ is a linear transformation and $W$ is a subspace of $V$. Suppose further that $T(w) \in W$ for every $w \in W$. Then $W$ is an invariant subspace of $V$ relative to $T$.

We do not have any special notation for an invariant subspace, so it is important to recognize that an invariant subspace is always relative to both a superspace ($V$) and a linear transformation ($T$), which will sometimes not be mentioned, yet will be clear from the context. Note also that the linear transformation involved must have an equal domain and codomain — the definition would not make much sense if our outputs were not of the same type as our inputs.

As is our habit, we begin with an example that demonstrates the existence of invariant subspaces, while leaving other questions unanswered for the moment. We will return later to understand how this example was constructed, but for now, just understand how we check the existence of the invariant subspaces.

Example 1.4.2 (Two invariant subspaces). Consider the linear transformation $T: \mathbb{C}^4 \to \mathbb{C}^4$ defined by $T(x) = Ax$ where $A$ is given by
$$A = \begin{bmatrix} -8 & -6 & 15 & -9 \\ 8 & 14 & -10 & 18 \\ -1 & 1 & 3 & 0 \\ -3 & -8 & 2 & -11 \end{bmatrix}.$$

Define (with zero motivation),
$$w_1 = \begin{bmatrix} 7 \\ -2 \\ 3 \\ 0 \end{bmatrix} \qquad w_2 = \begin{bmatrix} 1 \\ -2 \\ 0 \\ 1 \end{bmatrix}$$
and set $W = \langle\{w_1, w_2\}\rangle$. We verify that $W$ is an invariant subspace of $\mathbb{C}^4$ with respect to $T$. By the definition of $W$, any vector chosen from $W$ can be written as a linear combination of $w_1$ and $w_2$. Suppose that $w \in W$, and then check the details of the following verification,
$$T(w) = T(a_1w_1 + a_2w_2) = a_1T(w_1) + a_2T(w_2)
= a_1\begin{bmatrix} 1 \\ -2 \\ 0 \\ 1 \end{bmatrix} + a_2\begin{bmatrix} -5 \\ -2 \\ -3 \\ 2 \end{bmatrix}$$
$$= a_1w_2 + a_2\bigl((-1)w_1 + 2w_2\bigr) = (-a_2)w_1 + (a_1 + 2a_2)w_2 \in W.$$
So, by Definition 1.4.1, $W$ is an invariant subspace of $\mathbb{C}^4$ relative to $T$.

In an entirely similar manner we construct another invariant subspace of $T$. With zero motivation, define
$$x_1 = \begin{bmatrix} 3 \\ -1 \\ 1 \\ 0 \end{bmatrix} \qquad x_2 = \begin{bmatrix} 0 \\ -1 \\ 0 \\ 1 \end{bmatrix}$$
and set $X = \langle\{x_1, x_2\}\rangle$. We verify that $X$ is an invariant subspace of $\mathbb{C}^4$ with respect to $T$. By the definition of $X$, any vector chosen from $X$ can be written as a linear combination of $x_1$ and $x_2$. Suppose that $x \in X$, and then check the details of the following verification,
$$T(x) = T(b_1x_1 + b_2x_2) = b_1T(x_1) + b_2T(x_2)
= b_1\begin{bmatrix} -3 \\ 0 \\ -1 \\ 1 \end{bmatrix} + b_2\begin{bmatrix} -3 \\ 4 \\ -1 \\ -3 \end{bmatrix}$$
$$= b_1\bigl((-1)x_1 + x_2\bigr) + b_2\bigl((-1)x_1 + (-3)x_2\bigr) = (-b_1 - b_2)x_1 + (b_1 - 3b_2)x_2 \in X.$$
So, by Definition 1.4.1, $X$ is an invariant subspace of $\mathbb{C}^4$ relative to $T$.
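These verifications are mechanical enough to hand to Sage. A sketch, using the matrix and vectors of Example 1.4.2: checking the images of a spanning set is exactly what Definition 1.4.1 requires.

    # Sage sketch: check the invariance claims of Example 1.4.2
    A = matrix(QQ, [[-8, -6, 15, -9],
                    [ 8, 14, -10, 18],
                    [-1,  1,  3,  0],
                    [-3, -8,  2, -11]])
    w1 = vector(QQ, [7, -2, 3, 0]); w2 = vector(QQ, [1, -2, 0, 1])
    x1 = vector(QQ, [3, -1, 1, 0]); x2 = vector(QQ, [0, -1, 0, 1])
    V = QQ^4
    W = V.span([w1, w2]); X = V.span([x1, x2])
    # it suffices to check the images of a spanning set
    print(all(A*w in W for w in [w1, w2]))   # True: W is invariant
    print(all(A*x in X for x in [x1, x2]))   # True: X is invariant
    print(A*w1 == w2, A*w2 == -w1 + 2*w2)    # the specific relations used in the text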

There is a bit of magic in each of these verifications where the two outputs of $T$ happen to equal linear combinations of the two inputs. But this is the essential nature of an invariant subspace. We'll have a peek under the hood later in Example 3.1.8, and it will not look so magical after all.

Verify that $B = \{w_1, w_2, x_1, x_2\}$ is linearly independent, and hence a basis of $\mathbb{C}^4$. Splitting this basis in half, Theorem 1.2.3 tells us that $\mathbb{C}^4 = W \oplus X$. To see exactly why a decomposition of a vector space into a direct sum of invariant subspaces might be interesting, work Exercise 1.4.3 now.

Exercise 1.4.3. Construct a matrix representation of the linear transformation $T$ of Example 1.4.2 relative to the basis formed as the union of the bases of the two invariant subspaces, $M^T_{B,B}$. Comment on your observations, perhaps after computing a few powers of the matrix representation (which represent repeated compositions of $T$ with itself). Hmmmmmm.

Solution. Our basis is
$$B = \{w_1, w_2, x_1, x_2\} = \left\{ \begin{bmatrix} 7 \\ -2 \\ 3 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ -2 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 3 \\ -1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ -1 \\ 0 \\ 1 \end{bmatrix} \right\}.$$
Now we perform the necessary computations for the matrix representation of $T$ relative to $B$:
$$\rho_B(T(w_1)) = \rho_B\!\left(\begin{bmatrix} 1 \\ -2 \\ 0 \\ 1 \end{bmatrix}\right) = \rho_B\bigl((0)w_1 + (1)w_2\bigr) = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}$$
$$\rho_B(T(w_2)) = \rho_B\!\left(\begin{bmatrix} -5 \\ -2 \\ -3 \\ 2 \end{bmatrix}\right) = \rho_B\bigl((-1)w_1 + (2)w_2\bigr) = \begin{bmatrix} -1 \\ 2 \\ 0 \\ 0 \end{bmatrix}$$
$$\rho_B(T(x_1)) = \rho_B\!\left(\begin{bmatrix} -3 \\ 0 \\ -1 \\ 1 \end{bmatrix}\right) = \rho_B\bigl((-1)x_1 + (1)x_2\bigr) = \begin{bmatrix} 0 \\ 0 \\ -1 \\ 1 \end{bmatrix}$$
$$\rho_B(T(x_2)) = \rho_B\!\left(\begin{bmatrix} -3 \\ 4 \\ -1 \\ -3 \end{bmatrix}\right) = \rho_B\bigl((-1)x_1 + (-3)x_2\bigr) = \begin{bmatrix} 0 \\ 0 \\ -1 \\ -3 \end{bmatrix}.$$
Applying Definition MR, we have
$$M^T_{B,B} = \begin{bmatrix} 0 & -1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 0 & -1 & -1 \\ 0 & 0 & 1 & -3 \end{bmatrix}.$$
The interesting feature of this representation is the two $2 \times 2$ blocks on the diagonal that arise from the decomposition of $\mathbb{C}^4$ into a direct sum of invariant subspaces. Or maybe the interesting feature of this matrix is the two $2 \times 2$ submatrices in the "other" corners that are all zero. You can decide.
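In Sage the same block-diagonal representation can be obtained from a change of basis. A sketch: with the basis vectors as the columns of $P$, the representation is the similarity transform $P^{-1}AP$, and powers of it (as the exercise suggests) stay block-diagonal.

    # Sage sketch: the representation of T relative to B = {w1, w2, x1, x2}
    A = matrix(QQ, [[-8, -6, 15, -9],
                    [ 8, 14, -10, 18],
                    [-1,  1,  3,  0],
                    [-3, -8,  2, -11]])
    P = column_matrix([vector(QQ, [7, -2, 3, 0]),
                       vector(QQ, [1, -2, 0, 1]),
                       vector(QQ, [3, -1, 1, 0]),
                       vector(QQ, [0, -1, 0, 1])])
    M = P.inverse() * A * P
    print(M)      # two 2x2 blocks on the diagonal, zero blocks elsewhere
    print(M^3)    # powers of a block-diagonal matrix remain block-diagonal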

Exercise 1.4.4. Prove that the subspaces $U, V \subseteq \mathbb{C}^5$ are invariant with respect to the linear transformation $R: \mathbb{C}^5 \to \mathbb{C}^5$ defined by $R(x) = Bx$, where
$$B = \begin{bmatrix} 4 & 47 & 3 & -46 & 20 \\ 10 & 61 & 8 & -56 & 10 \\ -10 & -69 & -7 & 67 & -20 \\ 11 & 70 & 9 & -64 & 12 \\ 3 & 19 & 3 & -16 & 1 \end{bmatrix}.$$
In Sage, $B$ can be entered as

    B = matrix(QQ, [[  4,  47,  3, -46,  20],
                    [ 10,  61,  8, -56,  10],
                    [-10, -69, -7,  67, -20],
                    [ 11,  70,  9, -64,  12],
                    [  3,  19,  3, -16,   1]])

Prove that the union of the spanning sets of $U$ and $V$ is a basis of $\mathbb{C}^5$, and then provide a matrix representation of $R$ relative to this basis.
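A small helper along the lines of the earlier checks may be useful for this exercise. This is a sketch only — the function name and structure are not from the text — and it simply tests whether the span of a list of vectors is carried back into itself by a matrix.

    # Sage sketch: generic invariance test, usable once the spanning
    # vectors for U and V from the exercise have been entered
    def is_invariant(M, spanning_vectors):
        """Return True if the span of spanning_vectors is invariant under x |-> M*x."""
        S = (QQ^M.ncols()).span(spanning_vectors)
        return all(M*v in S for v in spanning_vectors)

    # intended use, with u1, u2, v1, v2, ... the vectors given in the exercise:
    # print(is_invariant(B, [u1, u2]))
    # print(is_invariant(B, [v1, v2, ...]))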

Example 1.4.2 and Exercise 1.4.4 are a bit mysterious at this stage. Do we know any other examples of invariant subspaces? Yes, as it turns out, we have already seen quite a few. We will give some specific examples, and for more general situations, describe broad classes of invariant subspaces by theorems. First up is eigenspaces.

Theorem 1.4.5 (Eigenspaces are Invariant Subspaces). Suppose that $T: V \to V$ is a linear transformation with eigenvalue $\lambda$ and associated eigenspace $\mathcal{E}_T(\lambda)$. Let $W$ be any subspace of $\mathcal{E}_T(\lambda)$. Then $W$ is an invariant subspace of $V$ relative to $T$.

Proof. Choose $w \in W$. Then
$$T(w) = \lambda w \in W.$$
So by Definition 1.4.1, $W$ is an invariant subspace of $V$ relative to $T$.

Theorem 1.4.5 is general enough to determine that an entire eigenspace is
