Additions to Linear Algebra


Peter Petersen

September 26, 2012

Abstract

In this document we've added corrections as well as included several sections that expand upon the material in the text.

1 Corrections

This is where typos will be listed.

p. 59, l. 10. Should read $M = \{(\alpha_1, \ldots, \alpha_n) \in \mathbb{F}^n : \alpha_{j_1} = \cdots = \alpha_{j_{n-k}} = 0\}$.

p. 103, l. 2. Should read $\ker(L) = \operatorname{im}(L_0)$.

Hint for Exercise 2.6.12.b. Many people seem to think that this problem can only be done using quotient spaces. Here are a few hints towards a solution that does not use quotient spaces. First observe that $\mu_{L|_M} \mid \mu_L$; see also Exercise 2.6.7. Let $M \subset V$ be an $L$-invariant subspace. Let $p = \mu_{L|_M}$ and factor $\mu_L = p \cdot q$. Show that $M \subset \ker(p(L))$. If $M \neq \ker(p(L))$, select a complement $V = \ker(p(L)) \oplus N$ and consider the corresponding block decomposition
$$L = \begin{bmatrix} A & B \\ 0 & C \end{bmatrix},$$
where $A$ corresponds to the restriction of $L$ to $\ker(p(L))$. Let $r$ be the characteristic polynomial for $C$. Show that $L$ is a root of $p \cdot r$ by showing that $r(L)(V) \subset \ker(p(L))$. Show that $\mu_L \mid p \cdot r$ and reach a contradiction.

Ignore Exercise 3.3.14.

2 Additional Exercises

Exercise 23 gives a beautiful effective algorithm for the Jordan-Chevalley decomposition for linear operators over any field of characteristic 0.

1. Show directly that an upper triangular matrix
$$A = \begin{bmatrix} \alpha_{11} & * & \cdots & * \\ 0 & \alpha_{22} & \cdots & * \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_{nn} \end{bmatrix}$$
is a root of its characteristic polynomial.
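To experiment with Exercise 1 on a concrete matrix, the following sketch can be used (Python with sympy is assumed; the matrix $A$ and its entries are an arbitrary illustrative choice, not taken from the text):

```python
import sympy as sp

# An arbitrary upper triangular matrix with diagonal entries 1, 4, 6.
A = sp.Matrix([[1, 2, 3],
               [0, 4, 5],
               [0, 0, 6]])

t = sp.symbols('t')
chi = sp.factor(A.charpoly(t).as_expr())
print(chi)  # (t - 1)*(t - 4)*(t - 6): the diagonal entries are the roots

# Substituting A for t in the factored characteristic polynomial
# gives the zero matrix, as the exercise predicts.
I = sp.eye(3)
print((A - 1*I) * (A - 4*I) * (A - 6*I))
```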

2. Show that a linear operator on a finite dimensional complex vector space admits a basis so that its matrix representation is upper triangular. Hint: Decompose the vector space into a direct sum of an eigenspace and a complement and use induction on dimension.

3. Let $L : V \to V$ be a linear operator, where $V$ is not necessarily finite dimensional. If $p \in \mathbb{F}[t]$ has a factorization $p = p_1 \cdots p_k$ where the factors $p_i$ are pairwise relatively prime, then
$$\ker(p(L)) = \ker(p_1(L)) \oplus \cdots \oplus \ker(p_k(L)).$$

4. Hint for the previous exercise: Start with $k = 2$. Then use induction on $k$ and the fact that $p_k$ is relatively prime to $p_1 \cdots p_{k-1}$.

5. Show that if a linear operator on a finite dimensional vector space is irreducible, i.e., it has no nontrivial invariant subspaces, then its minimal polynomial is irreducible.

6. Show that if a linear operator on a finite dimensional vector space is indecomposable, i.e., the vector space cannot be written as a direct sum of nontrivial invariant subspaces, then the minimal polynomial is a power of an irreducible polynomial.

7. Assume that $L : V \to V$ has minimal polynomial $m_L(t) = (t-1)(t-2)^2$ and $\chi_L(t) = (t-1)^3 (t-2)^3$. Find the Jordan canonical form for $L$.

8. Assume that $L : V \to V$ has minimal polynomial $m_L(t) = (t-1)(t-2)^2$ and $\chi_L(t) = (t-1)^4 (t-2)^3$. Find the Jordan canonical form for $L$.

9. Find the Jordan canonical form for the following matrices
$$\text{(a)} \begin{bmatrix} 0 & 1 & 0 & 0 \\ 8 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 16 & 0 & 0 & 0 \end{bmatrix}, \quad \text{(b)} \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 2 & 0 & 2 \end{bmatrix}, \quad \text{(c)} \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 0 \end{bmatrix}.$$
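Exercises 7 and 8 can also be explored by machine. The sketch below (sympy again; the block diagonal matrix is our own construction realizing the data of Exercise 7) confirms the characteristic polynomial, checks the minimal polynomial by hand, and lets jordan_form recover the block structure of a similar matrix:

```python
import sympy as sp

t = sp.symbols('t')

# Blocks 1, 1, 1 for eigenvalue 1 and blocks of sizes 2 and 1 for
# eigenvalue 2 realize chi = (t-1)^3 (t-2)^3 with m = (t-1)(t-2)^2.
A = sp.diag(1, 1, 1, sp.Matrix([[2, 1], [0, 2]]), 2)
I = sp.eye(6)

print(sp.factor(A.charpoly(t).as_expr()))      # (t - 1)**3*(t - 2)**3
print((A - I) * (A - 2*I)**2 == sp.zeros(6))   # True: (t-1)(t-2)^2 kills A
print((A - I) * (A - 2*I) == sp.zeros(6))      # False: the exponent 2 is needed

# Conjugating hides the block structure; jordan_form recovers it.
P = sp.eye(6) + sp.ones(6, 6)                  # some invertible matrix
B = P * A * P.inv()
_, J = B.jordan_form()
print(J)
```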

10. Find the Jordan canonical form for the following matrices:
(a)-(c) [three $4 \times 4$ matrices; their entries are not recoverable from this transcription]

11. Find the Jordan canonical form and also a Jordan basis for $D = \frac{d}{dt}$ on each of the following subspaces defined as kernels.
(a) $\ker\big((D-1)^2 (D+1)^2\big)$.
(b) $\ker\big((D-1)^3 (D+1)\big)$.
(c) $\ker\big(D^2 - 2D + 1\big)$.

12. Find the Jordan canonical form on $P_3$ for each of the following operators.
(a) $L = TD$, where $T(f)(t) = t f(t)$.
(b) $L = DT$.
(c) $L = TD^2 + 3D + 1$.

13. For $\lambda_1, \lambda_2, \lambda_3 \in \mathbb{C}$ decide which of the matrices are similar (the answer depends on how the $\lambda$s are related to each other)
$$\begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_2 & 1 \\ 0 & 0 & \lambda_3 \end{bmatrix}, \quad \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}, \quad \begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix},$$
$$\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 1 \\ 0 & 0 & \lambda_3 \end{bmatrix}, \quad \begin{bmatrix} \lambda_1 & 0 & 1 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}.$$

14. For each $n$ give examples of $n \times n$ matrices that are similar but not unitarily equivalent.

15. Let $L : V \to V$ be a linear operator with
$$\chi_L(t) = (t - \lambda_1)^{n_1} \cdots (t - \lambda_k)^{n_k},$$
$$m_L(t) = (t - \lambda_1)^{m_1} \cdots (t - \lambda_k)^{m_k}.$$

If $m_i = 1$ or $n_i \le m_i + 1$ for each $i = 1, \ldots, k$, then the Jordan canonical form is completely determined by $\chi_L$ and $m_L$. (Note that for some $i$ we might have $m_i = 1$, while for other $j$ the second condition $n_j \le m_j + 1$ will hold.)

16. Let $L : \mathbb{R}^2 \to \mathbb{R}^2$ be given by [a $2 \times 2$ matrix not recoverable from this transcription] with respect to the standard basis. Find the rational canonical form and the basis that yields that form.

17. Let $A \in \operatorname{Mat}_{n \times n}(\mathbb{R})$ satisfy $A^2 = -1_{\mathbb{R}^n}$. Find the rational canonical form for $A$.

18. Find the real rational canonical forms for the differentiation operator
$$D : C^\infty(\mathbb{R}, \mathbb{R}) \to C^\infty(\mathbb{R}, \mathbb{R})$$
on each of the following kernels of real functions.
(a) $\ker\big((D^2 + 1)^2\big)$.
(b) $\ker\big((D^2 + D + 1)^2\big)$.

19. Let $L : V \to V$ be a linear operator.
(a) If $m_L(t) = p(t)$ and $p$ is irreducible, then $L$ is semi-simple, i.e., completely reducible, i.e., every invariant subspace has an invariant complement. Hint: Use that
$$V = C_{x_1} \oplus \cdots \oplus C_{x_k}, \quad m_{L|_{C_{x_i}}}(t) = p(t),$$
where $C_{x_i}$ has no nontrivial invariant subspaces.
(b) If $m_L(t) = p_1(t) \cdots p_k(t)$, where $p_1, \ldots, p_k$ are distinct irreducible polynomials, then $L$ is semi-simple. Hint: Show that if $M \subset V$ is $L$-invariant then
$$M = (M \cap \ker(p_1(L))) \oplus \cdots \oplus (M \cap \ker(p_k(L))).$$

20. Assume that $\mathbb{F} \subset \mathbb{L}$, e.g., $\mathbb{R} \subset \mathbb{C}$. Let $A \in \operatorname{Mat}_{n \times n}(\mathbb{F})$. Show that $A : \mathbb{F}^n \to \mathbb{F}^n$ is semi-simple if and only if $A : \mathbb{L}^n \to \mathbb{L}^n$ is semi-simple.

21. (The generalized Jordan Canonical Form) Let $L : V \to V$ be a linear operator on a finite dimensional vector space $V$.
(a) Assume that
$$m_L(t) = (p(t))^m = \chi_L(t),$$

where $p(t)$ is irreducible in $\mathbb{F}[t]$. Show that if $V = C_x$, then
$$e_{ij} = (p(L))^{i-1} L^{j-1}(x),$$
where $i = 1, \ldots, m$ and $j = 1, \ldots, \deg(p)$, form a basis for $V$. Hint: It suffices to show that they span $V$.
(b) With the assumptions as in a. and $k = \deg(p)$ show that if we order the basis as follows
$$e_{m1}, \ldots, e_{mk}, e_{m-1,1}, \ldots, e_{m-1,k}, \ldots, e_{11}, \ldots, e_{1k},$$
then the matrix representation looks like
$$\begin{bmatrix} C_p & E & & 0 \\ 0 & C_p & \ddots & \\ \vdots & & \ddots & E \\ 0 & \cdots & 0 & C_p \end{bmatrix},$$
where the companion matrix $C_p$ appears on the diagonal, the $E$ matrices right above the diagonal, and all other entries are zero.
(c) Explain how a. and b. lead to a generalized Jordan canonical form for any $L : V \to V$.
(d) (The Jordan-Chevalley decomposition) Let
$$m_L(t) = (p_1(t))^{m_1} \cdots (p_k(t))^{m_k}$$
be the factorization of the minimal polynomial into distinct irreducible factors. Using the previous exercises show that $L = S + N$, where $S$ is semi-simple with $m_S(t) = p_1(t) \cdots p_k(t)$, $N$ nilpotent, $S = p(L)$, and $N = q(L)$ for suitable polynomials $p$ and $q$. For a different proof that creates an effective algorithm see the next couple of exercises.

22. Let $p \in \mathbb{F}[t]$. We show how to construct a separable polynomial that has the same roots as $p$ in the algebraic closure, i.e., a polynomial without repeated roots in the algebraic closure.
(a) Show that $\{q \in \mathbb{F}[t] : p \mid q^k \text{ for some } k \ge 1\}$ is an ideal and therefore generated by a unique monic polynomial $s_p$.
(b) Show that $s_p \mid p$.

(c) Show that if $q^2 \mid s_p$ then $q$ is a constant.
(d) Show that if $\mathbb{F}$ has characteristic 0, then
$$s_p = \frac{p}{\gcd\{p, Dp\}}.$$

23. Let $L : V \to V$ be a linear operator on a finite dimensional vector space. Let $\mu$ be its minimal polynomial, $s = s_\mu$ the corresponding separable polynomial, and $s'$ its derivative. The goal is to show that the Jordan-Chevalley decomposition $L = S + N$ can be computed via an effective algorithm. We know that $S$ has to be semi-simple so it is natural to look for solutions to $s(S) = 0$. This suggests that we seek $S$ via Newton's method
$$L_0 = L, \quad L_{k+1} = L_k - (s'(L_k))^{-1} s(L_k),$$
where $(s')^{-1}(t) = q(t)$ is interpreted as the polynomial we get from $q s' + p s = 1$, i.e., the inverse modulo $s$.
(a) Show that such a $q$ exists and can be computed. Hint: use the previous exercise.
(b) Show that
$$L - L_{k+1} = \sum_{i=0}^{k} q(L_i)\, s(L_i).$$
(c) Show that $L - L_k$ is nilpotent for all $k$.
(d) Use Taylor's formula for polynomials
$$f(t + h) = f(t) + f'(t)\, h + h^2 g(t, h)$$
to conclude that there is a polynomial $g$ such that
$$s(L_{k+1}) = (s(L_k))^2\, g(L_k).$$
(e) Finally let $m$ be the smallest integer so that $\mu \mid s^m$ and show that $L_k$ is semi-simple provided $2^k \ge m$.
(f) Conclude that with these choices we obtain a Jordan-Chevalley decomposition
$$L = L_k + (L - L_k) = S + N,$$
where there are suitable polynomials $p, r \in \mathbb{F}[t]$ such that $S = p(L)$ and $N = r(L)$.

24. Use the previous exercise to show that any invertible $L : V \to V$, where $V$ is finite dimensional, can be written as
$$L = S U,$$
where $S$ is the same semi-simple operator as in the Jordan-Chevalley decomposition, and $U$ is unipotent, i.e., $U - 1_V$ is nilpotent. Show that $U = q(L)$ for some polynomial $q$.
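Here is a minimal computational sketch of the algorithm of Exercises 22 and 23, assuming Python with sympy and exact rational arithmetic; the helper names pm and jordan_chevalley are ours. It forms the separable part $s = p/\gcd\{p, Dp\}$, computes $q = (s')^{-1}$ modulo $s$ from the Bezout identity, and runs the Newton iteration until $s(S) = 0$:

```python
import sympy as sp

t = sp.symbols('t')

def pm(p, A):
    """Evaluate the polynomial p(t) at the square matrix A (Horner's rule)."""
    out = sp.zeros(*A.shape)
    for c in sp.Poly(p, t).all_coeffs():
        out = out * A + c * sp.eye(A.shape[0])
    return out

def jordan_chevalley(A):
    """Return (S, N) with A = S + N, S semi-simple, N nilpotent.
    Sketch of Exercises 22-23; uses chi_A, which has the same
    irreducible factors as the minimal polynomial."""
    chi = A.charpoly(t).as_expr()
    s = sp.quo(chi, sp.gcd(chi, sp.diff(chi, t)), t)   # separable part
    q, _, g = sp.gcdex(sp.diff(s, t), s, t)            # q s' + p s = gcd
    q = sp.expand(q / g)                               # normalize: q s' = 1 mod s
    S = A
    while pm(s, S) != sp.zeros(*A.shape):              # Newton: S <- S - q(S) s(S)
        S = S - pm(q, S) * pm(s, S)
    return S, A - S

# Example: one 3x3 Jordan block with eigenvalue 2.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 1],
               [0, 0, 2]])
S, N = jordan_chevalley(A)
print(S)          # diag(2, 2, 2)
print(N, N**3)    # N strictly upper triangular, N**3 = 0
```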

3 Linear Algebra in Multivariable Calculus

Linear maps play a big role in multivariable calculus and are used in a number of ways to clarify and understand certain constructions. The fact that linear algebra is the basis for multivariable calculus should not be surprising, as linear algebra is merely a generalization of vector algebra.

Let $F : \Omega \to \mathbb{R}^n$ be a differentiable function defined on some open domain $\Omega \subset \mathbb{R}^m$. The differential of $F$ at $x_0 \in \Omega$ is a linear map $DF_{x_0} : \mathbb{R}^m \to \mathbb{R}^n$ that can be defined via the limiting process
$$DF_{x_0}(h) = \lim_{t \to 0} \frac{F(x_0 + t h) - F(x_0)}{t}.$$
Note that $x_0 + t h$ describes a line parametrized by $t$ passing through $x_0$ and pointing in the direction of $h$. This definition tells us that $DF_{x_0}$ preserves scalar multiplication as
$$\begin{aligned} DF_{x_0}(\alpha h) &= \lim_{t \to 0} \frac{F(x_0 + t \alpha h) - F(x_0)}{t} \\ &= \alpha \lim_{t \to 0} \frac{F(x_0 + t \alpha h) - F(x_0)}{t \alpha} \\ &= \alpha \lim_{s \to 0} \frac{F(x_0 + s h) - F(x_0)}{s} \\ &= \alpha\, DF_{x_0}(h). \end{aligned}$$
Additivity is another matter, however. Thus one usually defines $F$ to be differentiable at $x_0$ provided we can find a linear map $L : \mathbb{R}^m \to \mathbb{R}^n$ satisfying
$$\lim_{\|h\| \to 0} \frac{\|F(x_0 + h) - F(x_0) - L(h)\|}{\|h\|} = 0.$$
One then proves that such a linear map must be unique and then renames it $L = DF_{x_0}$. If $F$ is continuously differentiable, i.e., all of its partial derivatives exist and are continuous, then $DF_{x_0}$ is also given by the $n \times m$ matrix of partial derivatives
$$DF_{x_0}(h) = DF_{x_0}\left( \begin{bmatrix} h_1 \\ \vdots \\ h_m \end{bmatrix} \right) = \begin{bmatrix} \frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_m} \\ \vdots & & \vdots \\ \frac{\partial F_n}{\partial x_1} & \cdots & \frac{\partial F_n}{\partial x_m} \end{bmatrix} \begin{bmatrix} h_1 \\ \vdots \\ h_m \end{bmatrix} = \begin{bmatrix} \frac{\partial F_1}{\partial x_1} h_1 + \cdots + \frac{\partial F_1}{\partial x_m} h_m \\ \vdots \\ \frac{\partial F_n}{\partial x_1} h_1 + \cdots + \frac{\partial F_n}{\partial x_m} h_m \end{bmatrix}.$$

One of the main ideas in differential calculus (of several variables) is that linear maps are simpler to work with and that they give good local approximations to differentiable maps. This can be made more precise by observing that we have the first order approximation
$$F(x_0 + h) = F(x_0) + DF_{x_0}(h) + o(h), \quad \lim_{\|h\| \to 0} \frac{\|o(h)\|}{\|h\|} = 0.$$
One of the goals of differential calculus is to exploit knowledge of the linear map $DF_{x_0}$ and then use this first order approximation to get a better understanding of the map $F$ itself.

In case $f : \Omega \to \mathbb{R}$ is a function one often sees the differential of $f$ defined as the expression
$$df = \frac{\partial f}{\partial x_1} dx_1 + \cdots + \frac{\partial f}{\partial x_m} dx_m.$$
Having now interpreted $dx_i$ as a linear function we then observe that $df$ itself is a linear function whose matrix description is given by
$$\begin{aligned} df(h) &= \frac{\partial f}{\partial x_1} dx_1(h) + \cdots + \frac{\partial f}{\partial x_m} dx_m(h) \\ &= \frac{\partial f}{\partial x_1} h_1 + \cdots + \frac{\partial f}{\partial x_m} h_m \\ &= \begin{bmatrix} \frac{\partial f}{\partial x_1} & \cdots & \frac{\partial f}{\partial x_m} \end{bmatrix} \begin{bmatrix} h_1 \\ \vdots \\ h_m \end{bmatrix}. \end{aligned}$$
More generally, if we write
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_n \end{bmatrix},$$
then
$$DF_{x_0} = \begin{bmatrix} dF_1 \\ \vdots \\ dF_n \end{bmatrix}$$
with the understanding that
$$DF_{x_0}(h) = \begin{bmatrix} dF_1(h) \\ \vdots \\ dF_n(h) \end{bmatrix}.$$
Note how this conforms nicely with the above matrix representation of the differential.
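To see the matrix of partial derivatives and the first order approximation in action numerically, here is a short sketch (Python with numpy assumed; the map $F$ and the helper jacobian are our own illustrative choices):

```python
import numpy as np

def jacobian(F, x0, eps=1e-6):
    """Forward-difference approximation of the n x m matrix DF_{x0}."""
    x0 = np.asarray(x0, dtype=float)
    F0 = np.asarray(F(x0), dtype=float)
    J = np.empty((F0.size, x0.size))
    for j in range(x0.size):
        e = np.zeros_like(x0)
        e[j] = eps
        J[:, j] = (np.asarray(F(x0 + e)) - F0) / eps   # column j: dF/dx_j
    return J

# An arbitrary F : R^2 -> R^3.
F = lambda x: np.array([x[0]**2, x[0]*x[1], np.sin(x[1])])
x0 = np.array([1.0, 2.0])
J = jacobian(F, x0)

# First order approximation: F(x0 + h) = F(x0) + DF_{x0}(h) + o(h),
# so the error divided by |h| should tend to 0 as h shrinks.
for s in [1e-1, 1e-2, 1e-3]:
    h = s * np.array([0.3, -0.7])
    err = np.linalg.norm(F(x0 + h) - (F(x0) + J @ h))
    print(s, err / np.linalg.norm(h))
```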

As we shall see in this section many of the things we have learned about linear algebra can be used to great effect in multivariable calculus. We are going to study the behavior of smooth vector functions $F : \Omega \to \mathbb{R}^n$, where $\Omega \subset \mathbb{R}^m$ is an open domain. The word smooth is somewhat vague but means that functions will always be at least continuously differentiable, i.e., $(x_0, h) \mapsto DF_{x_0}(h)$ is continuous. The main idea is simply that a smooth function $F$ is approximated via the differential near any point $x_0$ in the following way:
$$F(x_0 + h) \simeq F(x_0) + DF_{x_0}(h).$$
Since the problem of understanding the linear map $h \mapsto DF_{x_0}(h)$ is much simpler, and this map also approximates $F$ for small $h$, the hope is that we can get some information about $F$ in a neighborhood of $x_0$ through such an investigation.

The graph of $G : \Omega \to \mathbb{R}^n$ is defined as the set
$$\operatorname{Graph}(G) = \{(x, G(x)) \in \mathbb{R}^m \times \mathbb{R}^n : x \in \Omega\}.$$
We picture it as an $m$-dimensional curved object. Note that the projection $P : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^m$ when restricted to $\operatorname{Graph}(G)$ is one-to-one. This is the key to the fact that the subset $\operatorname{Graph}(G) \subset \mathbb{R}^m \times \mathbb{R}^n$ is the graph of a function from some subset of $\mathbb{R}^m$.

More generally suppose we have some curved set $S \subset \mathbb{R}^{m+n}$ ($S$ stands for surface). Loosely speaking, such a set has dimension $m$ if near every point $z \in S$ we can decompose the ambient space $\mathbb{R}^{m+n} = \mathbb{R}^m \times \mathbb{R}^n$ in such a way that the projection $P : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^m$ when restricted to $S$, i.e., $P|_S : S \to \mathbb{R}^m$, is one-to-one near $z$. Thus $S$ can near $z$ be viewed as a graph by considering the function $G : U \to \mathbb{R}^n$ defined via $P(x, G(x)) = x$. The set $U \subset \mathbb{R}^m$ is some small open set where the inverse to $P|_S$ exists. Note that, unlike the case of a graph, the $\mathbb{R}^m$ factor of $\mathbb{R}^{m+n}$ does not have to consist of the first $m$ coordinates in $\mathbb{R}^{m+n}$, nor does it always have to be the same coordinates for all $z$. We say that $S$ is a smooth $m$-dimensional surface if near every $z$ we can choose the decomposition $\mathbb{R}^{m+n} = \mathbb{R}^m \times \mathbb{R}^n$ so that the graph functions $G$ are smooth.

Example 3.1. Let $S = \{z \in \mathbb{R}^{m+1} : \|z\| = 1\}$ be the unit sphere. This is an $m$-dimensional smooth surface. To see this fix $z_0 \in S$. Since $z_0 = (\alpha_1, \ldots, \alpha_{m+1}) \neq 0$, there will be some $i$ so that $\alpha_i \neq 0$ for all $z$ near $z_0$. Then we decompose $\mathbb{R}^{m+1} = \mathbb{R}^m \times \mathbb{R}$ so that $\mathbb{R}$ records the $i$th coordinate and $\mathbb{R}^m$ the rest. Now consider the equation for $S$ written out in coordinates $z = (\alpha_1, \ldots, \alpha_{m+1})$,
$$\alpha_1^2 + \cdots + \alpha_i^2 + \cdots + \alpha_{m+1}^2 = 1,$$
and solve it for $\alpha_i$ in terms of the rest of the coordinates:
$$\alpha_i = \pm \sqrt{1 - \alpha_1^2 - \cdots - \widehat{\alpha_i^2} - \cdots - \alpha_{m+1}^2}.$$
Depending on the sign of $\alpha_i$ we can choose the sign in the formula to write $S$ near $z_0$ as a graph over some small subset in $\mathbb{R}^m$. What is more, since $\alpha_i \neq 0$

we have that $\alpha_1^2 + \cdots + \widehat{\alpha_i^2} + \cdots + \alpha_{m+1}^2 < 1$ for all $z = (\alpha_1, \ldots, \alpha_{m+1})$ near $z_0$. Thus the function is smooth near $(\alpha_1, \ldots, \widehat{\alpha_i}, \ldots, \alpha_{m+1})$.

The Implicit Function Theorem gives us a more general approach to decide when surfaces defined using equations are smooth.

Theorem 3.2. (The Implicit Function Theorem) Let $F : \mathbb{R}^{m+n} \to \mathbb{R}^n$ be smooth. If $F(z_0) = c \in \mathbb{R}^n$ and $\operatorname{rank}(DF_{z_0}) = n$, then we can find a coordinate decomposition $\mathbb{R}^{m+n} = \mathbb{R}^m \times \mathbb{R}^n$ near $z_0$ such that the set $S = \{z \in \mathbb{R}^{m+n} : F(z) = c\}$ is a smooth graph over some open set $U \subset \mathbb{R}^m$.

Proof. We are not going to give a complete proof of this theorem here, but we can say a few things that might elucidate matters a little. It is convenient to assume $c = 0$; this can always be achieved by changing $F$ to $F - c$ if necessary. Note that this doesn't change the differential.

First let us consider the simple situation where $F$ is linear. Then $DF = F$ and so we are simply stating that $F$ has rank $n$. This means that $\ker(F)$ is $m$-dimensional. Thus we can find a coordinate decomposition $\mathbb{R}^{m+n} = \mathbb{R}^m \times \mathbb{R}^n$ such that the projection $P : \mathbb{R}^{m+n} = \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^m$ is an isomorphism when restricted to $\ker(F)$. Therefore, we have an inverse $L$ to $P|_{\ker(F)}$ that maps $L : \mathbb{R}^m \to \ker(F) \subset \mathbb{R}^{m+n}$. In this way we have exhibited $\ker(F)$ as a graph over $\mathbb{R}^m$. Since $\ker(F)$ is precisely the set where $F = 0$ we have therefore solved our problem.

In the general situation we use that $F(z_0 + h) \simeq DF_{z_0}(h)$ for small $h$. This indicates that it is natural to suppose that near $z_0$ the sets $S$ and $\{z_0 + h : h \in \ker(DF_{z_0})\}$ are very good approximations to each other. In fact the picture we have in mind is that $\{z_0 + h : h \in \ker(DF_{z_0})\}$ is the tangent space to $S$ at $z_0$. The linear map $DF_{z_0} : \mathbb{R}^{m+n} \to \mathbb{R}^n$ evidently is assumed to have rank $n$ and hence nullity $m$. We can therefore find a decomposition $\mathbb{R}^{m+n} = \mathbb{R}^m \times \mathbb{R}^n$ such that the projection $P : \mathbb{R}^{m+n} \to \mathbb{R}^m$ is an isomorphism when restricted to $\ker(DF_{z_0})$. This means that the tangent space to $S$ at $z_0$ is $m$-dimensional and a graph.

It is not hard to believe that a similar result should be true for $S$ itself near $z_0$. The actual proof can be given using a Newton iteration. In fact if $z_0 = (x_0, y_0) \in \mathbb{R}^m \times \mathbb{R}^n$ and $x \in \mathbb{R}^m$ is near $x_0$, then we find $y = y(x) \in \mathbb{R}^n$ as a solution to $F(x, y) = 0$. This is done iteratively by successively solving infinitely many linear systems. We start by using the approximate guess that $y$ is $y_0$. In order to correct this guess we find the vector $y_1 \in \mathbb{R}^n$ that solves the linear equation that best approximates the equation $F(x, y_1) = 0$ near $(x, y_0)$, i.e.,
$$F(x, y_1) \simeq F(x, y_0) + DF_{(x, y_0)}(y_1 - y_0) = 0.$$
The assumption guarantees that $DF_{(x_0, y_0)}|_{\mathbb{R}^n} : \mathbb{R}^n \to \mathbb{R}^n$ is invertible. Since we also assumed that $(x, y) \mapsto DF_{(x, y)}$ is continuous this means that $DF_{(x, y_0)}|_{\mathbb{R}^n}$ will also be invertible as long as $x$ is close to $x_0$. With this we get the formula:
$$y_1 = y_0 - \left( DF_{(x, y_0)}|_{\mathbb{R}^n} \right)^{-1} (F(x, y_0)).$$

Repeating this procedure gives us an iteration
$$y_{n+1} = y_n - \left( DF_{(x, y_n)}|_{\mathbb{R}^n} \right)^{-1} (F(x, y_n)),$$
that starts at $y_0$.

It is slightly nasty that we have to keep inverting the map $DF_{(x, y_n)}|_{\mathbb{R}^n}$ as $y_n$ changes. It turns out that one is allowed to always use the approximate differential $DF_{(x_0, y_0)}|_{\mathbb{R}^n}$. This gives us the much simpler iteration
$$y_{n+1} = y_n - \left( DF_{(x_0, y_0)}|_{\mathbb{R}^n} \right)^{-1} (F(x, y_n)).$$
It remains to show that the sequence $(y_n)_{n \in \mathbb{N}_0}$ converges and that the correspondence $x \mapsto y(x)$ thus defined gives a smooth function that solves $F(x, y(x)) = 0$. Note, however, that if $y_n \to y(x)$, then we have
$$\begin{aligned} y(x) = \lim_{n \to \infty} y_{n+1} &= \lim_{n \to \infty} \left( y_n - \left( DF_{(x_0, y_0)}|_{\mathbb{R}^n} \right)^{-1} (F(x, y_n)) \right) \\ &= \lim_{n \to \infty} y_n - \left( DF_{(x_0, y_0)}|_{\mathbb{R}^n} \right)^{-1} \left( F\left( x, \lim_{n \to \infty} y_n \right) \right) \\ &= y(x) - \left( DF_{(x_0, y_0)}|_{\mathbb{R}^n} \right)^{-1} (F(x, y(x))). \end{aligned}$$
Thus $\left( DF_{(x_0, y_0)}|_{\mathbb{R}^n} \right)^{-1} (F(x, y(x))) = 0$ and hence $F(x, y(x)) = 0$ as desired. The convergence of $(y_n)_{n \in \mathbb{N}_0}$ hinges on the completeness of the real numbers but can otherwise be handled when we have introduced norms. Continuity requires some knowledge of uniform convergence of functions. Smoothness can be checked using continuity of $x \mapsto y(x)$ and smoothness of $F$.

The Implicit Function Theorem gives us the perfect criterion for deciding when solutions to equations give us nice surfaces.

Corollary 3.3. Let $F : \mathbb{R}^{m+n} \to \mathbb{R}^n$ be smooth and define
$$S_c = \{z \in \mathbb{R}^{m+n} : F(z) = c\}.$$
If $\operatorname{rank}(DF_z) = n$ for all $z \in S_c$, then $S_c$ is a smooth $m$-dimensional surface.

Note that $F : \mathbb{R}^{m+n} \to \mathbb{R}^n$ is a collection of $n$ functions $F_1, \ldots, F_n$. If we write $c = (c_1, \ldots, c_n)$ we see that the set $S_c$ is the intersection of the sets $S_{c_i} = \{z \in \mathbb{R}^{m+n} : F_i(z) = c_i\}$. We can apply the above corollary to each of these sets and see that they form $(m + n - 1)$-dimensional surfaces provided $DF_i = dF_i$ always has rank 1 on $S_{c_i}$. This is quite easy to check since this simply means that $dF_i$ is never zero. Each of the linear functions $dF_i$ at some specified point $z \in \mathbb{R}^{m+n}$ can be represented as a $1 \times (m+n)$ row matrix via the partial derivatives for $F_i$. Thus they lie in a natural vector space and when stacked on top of each other yield the matrix for $DF$. The rank condition on $DF$ for ensuring that $S_c$ is a smooth $m$-dimensional surface is, on the other hand, a condition on the columns of $DF$.
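Returning to the simplified Newton iteration above, it is easy to run numerically. The following sketch (numpy assumed; the circle $F(x, y) = x^2 + y^2 - 1$ near $(x_0, y_0) = (0.6, 0.8)$ is our own example with $n = 1$) produces the graph function $y(x)$ using only the fixed differential at $(x_0, y_0)$:

```python
import numpy as np

# F(x, y) = x^2 + y^2 - 1 defines the unit circle; near (0.6, 0.8) it is
# a graph y = y(x), and dF/dy at (x0, y0) = 1.6 is invertible.
F = lambda x, y: x**2 + y**2 - 1.0
x0, y0 = 0.6, 0.8
dFdy = 2 * y0          # DF_{(x0, y0)} restricted to the R^n factor (n = 1)

def y_of_x(x, tol=1e-12, itmax=100):
    y = y0             # start from the approximate guess y0
    for _ in range(itmax):
        step = F(x, y) / dFdy          # y_{n+1} = y_n - (dF/dy)^{-1} F(x, y_n)
        y -= step
        if abs(step) < tol:
            break
    return y

for x in [0.55, 0.60, 0.65]:
    print(x, y_of_x(x), np.sqrt(1 - x**2))   # matches the explicit graph
```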

Now matrices do satisfy the magical condition of having equal row and column rank. Thus $DF$ has rank $n$ if and only if it has row rank $n$. The latter statement is in turn equivalent to saying that $dF_1, \ldots, dF_n$ are linearly independent, or equivalently span an $n$-dimensional subspace of $\operatorname{Mat}_{1 \times (m+n)}$.

Recall that we say that a function $f : \mathbb{R}^m \to \mathbb{R}$ has a critical point at $x_0 \in \mathbb{R}^m$ if $df_{x_0} = 0$. One reason why these points are important lies in the fact that extrema, i.e., local maxima and minima, are critical points. To see this note that if $x_0$ is a local maximum for $f$, then
$$f(x_0 + h) \le f(x_0)$$
for small $h$. Since
$$df_{x_0}(h) = \lim_{t \to 0} \frac{f(x_0 + t h) - f(x_0)}{t},$$
we have that
$$df_{x_0}(h) \le 0$$
for all $h$! This is not possible unless $df_{x_0} = 0$. Note that the level sets $S_c = \{x : f(x) = c\}$ must have the property that either they contain a critical point or they are $(m-1)$-dimensional smooth surfaces.

To make things more interesting let us see what happens when we restrict or constrain a function $f : \mathbb{R}^{m+n} \to \mathbb{R}$ to a smooth surface $S_c = \{z : F(z) = c\}$. Having extrema certainly makes sense, so let us see what happens if we assume that $f(z) \le f(z_0)$ for all $z \in S_c$ near $z_0$. Note that this is not as simple as the unconstrained situation. To simplify the situation let us assume that we have decomposed $\mathbb{R}^{m+n} = \mathbb{R}^m \times \mathbb{R}^n$ (and coordinates are written $z = (x, y) \in \mathbb{R}^m \times \mathbb{R}^n$) near $z_0$ and written $S_c$ as a graph of $G : U \to \mathbb{R}^n$, where $U \subset \mathbb{R}^m$. Then $f : S_c \to \mathbb{R}$ can near $z_0$ be thought of as simply $g(x) = f(x, G(x)) : U \to \mathbb{R}$. So if $f|_{S_c}$ has a local maximum at $z_0$, then $g$ will have a local maximum at $x_0$. Since the maximum for $g$ is unconstrained we then conclude $dg_{x_0} = 0$. Using the chain rule on $g(x) = f(x, G(x))$, this leads us to
$$0 = dg_{x_0}(h) = df_{z_0}(h, DG_{x_0}(h)).$$
Note that the vectors $(h, DG_{x_0}(h))$ are precisely the tangent vectors to the graph of $G$ at $(x_0, y_0) = z_0$. We see that the relationship $F(x, G(x)) = c$ when differentiated gives $DF_{z_0}(h, DG_{x_0}(h)) = 0$. Thus
$$\ker(DF_{z_0}) = \{(h, DG_{x_0}(h)) : h \in \mathbb{R}^m\}.$$
This means that if we define $z_0 \in S_c$ to be critical for $f|_{S_c}$ when $df_{z_0}$ vanishes on $\ker(DF_{z_0})$, then we have a definition which again guarantees that local extrema are critical. Since it can be nasty to calculate $\ker(DF_{z_0})$ and check that $df_{z_0}$ vanishes on the kernel, we seek a different condition for when this happens. Recall that each of $dF_1, \ldots, dF_n$ vanishes on $\ker(DF_{z_0})$; moreover, as we saw, these linear maps are linearly independent. We also know that the space of linear maps $\mathbb{R}^{m+n} \to \mathbb{R}$ that vanish on the $m$-dimensional space $\ker(DF_{z_0})$ must have dimension $n$. Thus $dF_1, \ldots, dF_n$ form a basis for this space. This means that $df_{z_0}$ vanishes on $\ker(DF_{z_0})$ if and only if we can find $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$ such that
$$df_{z_0} = \lambda_1\, dF_1|_{z_0} + \cdots + \lambda_n\, dF_n|_{z_0}.$$
Using $\lambda$s for the numbers $\lambda_1, \ldots, \lambda_n$ is traditional; they are called Lagrange multipliers.
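As a worked instance of the multiplier equation, the following sympy sketch (our own example: $f = x + y$ constrained to the circle $x^2 + y^2 = 1$, so $n = 1$ and there is a single multiplier $\lambda$) solves $df = \lambda\, dF$ together with the constraint:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

f = x + y               # function to extremize
F = x**2 + y**2         # constraint F = 1 defines the unit circle

# df_{z0} = lambda dF_{z0} together with the constraint:
eqs = [sp.diff(f, x) - lam * sp.diff(F, x),
       sp.diff(f, y) - lam * sp.diff(F, y),
       F - 1]
for sol in sp.solve(eqs, [x, y, lam], dict=True):
    print(sol, f.subs(sol))   # extrema at (+-1/sqrt 2, +-1/sqrt 2), values +-sqrt 2
```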

Note that we have completely ignored the boundary of the domain and also boundaries of the smooth surfaces. This is mostly so as not to complicate matters more than necessary. While it is not possible to ignore the boundary of domains when discussing optimization, it is possible to do so when dealing with smooth surfaces. Look, e.g., at the sphere as a smooth surface. The crucial fact that the sphere shares with other "closed" smooth surfaces is that it is compact without having boundary. What we are interested in gaining in the use of such surfaces is the guarantee that continuous functions must have a maximum and a minimum.

Another important question in multivariable calculus is when a smooth function can be inverted and still remain smooth. An obvious condition is that it be bijective, but a quick look at $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = x^3$ shows that this isn't enough. Assume for a minute that $F : \Omega \to \mathbb{R}^n$ has an inverse $G : F(\Omega) \to \mathbb{R}^m$ that is also smooth. Then we have $G \circ F(x) = x$ and $F \circ G(y) = y$. Taking derivatives and using the chain rule tells us
$$DG_{F(x)} \circ DF_x = 1_{\mathbb{R}^m}, \quad DF_{G(y)} \circ DG_y = 1_{\mathbb{R}^n}.$$
This means that the differentials themselves are isomorphisms and that $n = m$. It turns out that this is precisely the correct condition for ensuring smoothness of the inverse.

Theorem 3.4. (The Inverse Function Theorem) Let $F : \Omega \to \mathbb{R}^m$ be smooth and assume that we have $x_0 \in \Omega$ where $DF_{x_0}$ is an isomorphism. Then we can find neighborhoods $U$ of $x_0$ and $V$ of $F(x_0)$ such that $F : U \to V$ is a bijection that has a smooth inverse $G : V \to U$.

Corollary 3.5. Let $F : \Omega \to \mathbb{R}^m$ be smooth and assume that $F$ is one-to-one and that $DF_x$ is an isomorphism for all $x \in \Omega$; then $F(\Omega) \subset \mathbb{R}^m$ is an open domain and there is a smooth inverse $G : F(\Omega) \to \Omega$.

It is not hard to see that the Inverse Function Theorem follows from the Implicit Function Theorem and vice versa. Note that, when $m = 1$, having nonzero derivative is enough to ensure that the function is bijective as it must be strictly monotone. When $m \ge 2$, this is no longer true as can be seen from $F : \mathbb{C} \to \mathbb{C} \setminus \{0\}$ defined by $F(z) = e^z$. As a two variable function it can also be represented by $F(\rho, \theta) = e^\rho (\cos\theta, \sin\theta)$. This function maps onto the punctured plane, but all choices $\theta + 2\pi n$, $n \in \mathbb{N}_0$, yield the same values for $F$. The differential is represented by the matrix
$$DF = e^\rho \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},$$

which has an inverse given by
$$e^{-\rho} \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}.$$
So the map is locally, but not globally, invertible.

Linearization procedures can be invoked in trying to understand several other nonlinear problems. As an example one can analyze the behavior of a fixed point $x_0$ for $F : \mathbb{R}^n \to \mathbb{R}^n$, i.e., $F(x_0) = x_0$, using the differential $DF_{x_0}$, since we know that $F(x_0 + h) \simeq x_0 + DF_{x_0}(h)$.

3.1 Exercises

1. We say that $F : \Omega \to \mathbb{R}$ depends functionally on a collection of functions $F_1, \ldots, F_m : \Omega \to \mathbb{R}$ near $x_0 \in \Omega$ if $F = \Phi(F_1, \ldots, F_m)$ near $x_0$ for some function $\Phi$. We say that $F_1, \ldots, F_m : \Omega \to \mathbb{R}$ are functionally independent near $x_0 \in \Omega$ if none of the functions are functionally dependent on the rest near $x_0$.
(a) Show that if $dF_1|_{x_0}, \ldots, dF_m|_{x_0}$ are linearly independent as linear functionals, then $F_1, \ldots, F_m$ are also functionally independent near $x_0$.
(b) Assume that $\Omega \subset \mathbb{R}^n$ and $m > n$. Show that, if $\operatorname{span}\{dF_1|_{x_0}, \ldots, dF_m|_{x_0}\}$ has dimension $n$, then we can find $F_{i_1}, \ldots, F_{i_n}$ such that all the other functions $F_{j_1}, \ldots, F_{j_{m-n}}$ depend functionally on $F_{i_1}, \ldots, F_{i_n}$ near $x_0$.

4 Norms

Before embarking on the richer theory of inner products we wish to cover the more general notion of a norm. A norm on a vector space is simply a way of assigning a length or size to each vector. We are going to confine ourselves to the study of vector spaces where the scalars are either real or complex. If $V$ is a vector space, then a norm is a function $\|\cdot\| : V \to [0, \infty)$ that satisfies

1. If $\|x\| = 0$, then $x = 0$.
2. The scaling condition: $\|\alpha x\| = |\alpha| \|x\|$, where $\alpha$ is either a real or complex scalar.
3. The Triangle Inequality: $\|x + y\| \le \|x\| + \|y\|$.

The first condition just says that the only vector of norm zero is the zero vector. The second condition on scaling conforms to our picture of how the length of a vector changes as we scale it. When we allow complex scalars we note that multiplication by $i$ does not change the size of the vector. Finally the third and truly crucial condition states the fact that in any triangle the sum of two sides is always longer than the third. We can see this by letting three vectors $x, y, z$ be the vertices of the triangle and agreeing that the three numbers

$\|x - z\|$, $\|x - y\|$, $\|y - z\|$ measure the distances between the vertices, i.e., the side lengths. The triangle inequality now says
$$\|x - z\| \le \|x - y\| + \|y - z\|.$$
An important alternative version of the triangle inequality is the inequality
$$\big| \|x\| - \|y\| \big| \le \|x - y\|.$$
This is obtained by noting that
$$\|x\| \le \|x - y\| + \|y\| \quad \text{and} \quad \|y\| \le \|y - x\| + \|x\|.$$
There are a plethora of interesting norms on the vector spaces we have considered so far. We shall not establish the three axioms for the norms defined. It is, however, worth pointing out that while the first two properties are usually easy to establish, the triangle inequality can be very tricky to prove.

Example 4.1. The most basic example is $\mathbb{R}^n$ or $\mathbb{C}^n$ with the euclidean norm
$$\|x\|_2 = \sqrt{|x_1|^2 + \cdots + |x_n|^2}.$$
This norm evidently comes from the inner product via $\|x\|_2 = \sqrt{(x|x)}$. The subscript will be explained in the next example.

We stick to $\mathbb{R}^n$ or $\mathbb{C}^n$ and define two new norms
$$\|x\|_1 = |x_1| + \cdots + |x_n|, \quad \|x\|_\infty = \max\{|x_1|, \ldots, |x_n|\}.$$
Note that
$$\|x\|_\infty \le \|x\|_2 \le \|x\|_1 \le n \|x\|_\infty.$$
More generally for $p \ge 1$ we have the $p$-norm
$$\|x\|_p = \sqrt[p]{|x_1|^p + \cdots + |x_n|^p}.$$
If $p \le q$ we have
$$\|x\|_\infty \le \|x\|_q \le \|x\|_p \le \sqrt[p]{n}\, \|x\|_\infty.$$
The trick that allows us to conclude that $\|x\|_q \le \|x\|_p$ is by first noting that both norms have the scaling property. Thus it suffices to show the inequality when $\|x\|_p = 1$. This means that we need to show that
$$|x_1|^q + \cdots + |x_n|^q \le 1 \quad \text{when} \quad |x_1|^p + \cdots + |x_n|^p = 1.$$

In this case we know that $|x_i| \le 1$. Thus
$$|x_i|^q \le |x_i|^p$$
as $q \ge p$. This implies the inequality.

In addition,
$$\|x\|_p \le \sqrt[p]{n}\, \|x\|_\infty,$$
so
$$\lim_{p \to \infty} \|x\|_p = \|x\|_\infty.$$
This explains all of the subscripts for these norms and also how they relate to each other. Of all these norms only the 2-norm comes from an inner product. The other norms can be quite convenient at times when one is studying analysis. The 2-norm and the 1-norm will be used below to justify certain claims we made in the first and second chapter regarding differential equations and multivariable calculus. We shall also see that for linear operators there are two equally natural norm concepts, where only one comes from an inner product.

Example 4.2. The $p$-norm can be generalized to functions using integration rather than summation. We let $V = C^0([a, b], \mathbb{C})$ and define
$$\|f\|_p = \left( \int_a^b |f(t)|^p\, dt \right)^{\frac{1}{p}}.$$
This time the relation between the norms is quite different. If $p \le q$, then
$$\|f\|_p \le (b - a)^{\frac{1}{p} - \frac{1}{q}} \|f\|_q,$$
or in a more memorable form using normalized integrals:
$$\left( \frac{1}{b - a} \int_a^b |f(t)|^p\, dt \right)^{\frac{1}{p}} \le \left( \frac{1}{b - a} \int_a^b |f(t)|^q\, dt \right)^{\frac{1}{q}}.$$
Moreover,
$$\lim_{p \to \infty} \left( \frac{1}{b - a} \int_a^b |f(t)|^p\, dt \right)^{\frac{1}{p}} = \|f\|_\infty.$$
Here the $\infty$-norm is defined as
$$\|f\|_\infty = \sup_{t \in [a, b]} |f(t)|.$$
Assuming that $f$ is continuous this supremum is a maximum, i.e., $|f(t)|$ has a maximum value that we define to be $\|f\|_\infty$. See also the next section for more on this $\infty$-norm.
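Both chains of inequalities can be observed numerically. A small numpy sketch (the vector, the grid, and the Riemann-mean approximation of the normalized integral are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)
n = x.size

# ||x||_inf <= ||x||_q <= ||x||_p <= n^(1/p) ||x||_inf for p <= q.
sup = np.linalg.norm(x, np.inf)
for p, q in [(1, 2), (1, 4), (2, 4)]:
    norm_p, norm_q = np.linalg.norm(x, p), np.linalg.norm(x, q)
    assert sup <= norm_q <= norm_p <= n**(1.0/p) * sup
print("vector norm chain verified")

# For functions the normalized p-norms increase with p and approach the
# sup-norm; test with f(t) = t on [0, 1], where the exact value of the
# normalized p-norm is (1/(p + 1))^(1/p).
t = np.linspace(0.0, 1.0, 100001)
f = t
for p in [1, 2, 4, 16, 64]:
    print(p, np.mean(np.abs(f)**p)**(1.0/p))   # increases toward max|f| = 1
```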

Aside from measuring the size of vectors, the norm is used to define convergence on vector spaces. We say that a sequence $x_n \in V$ converges to $x \in V$ with respect to the norm $\|\cdot\|$ if $\|x_n - x\| \to 0$ as $n \to \infty$. Clearly this concept depends on having a norm and might even take on different meanings depending on what norm we use. Note, however, that the norms we defined on $\mathbb{R}^n$ and $\mathbb{C}^n$ are related to each other via
$$\|\cdot\|_\infty \le \|\cdot\|_p \le \sqrt[p]{n}\, \|\cdot\|_\infty.$$
Thus convergence in the $p$-norm and convergence in the $\infty$-norm mean the same thing. Hence all of these norms yield the same convergence concept.

For the norms on $C^0([a, b], \mathbb{C})$ a very different picture emerges. We know that
$$(b - a)^{-\frac{1}{p}} \|f\|_p \le (b - a)^{-\frac{1}{q}} \|f\|_q \le \|f\|_\infty.$$
Thus convergence in the $\infty$-norm or in the $q$-norm implies convergence in the $p$-norm for $p \le q$. The converse is, however, not at all true.

Example 4.3. Let $[a, b] = [0, 1]$ and define $f_n(t) = t^n$. We note that
$$\|f_n\|_p = \sqrt[p]{\frac{1}{np + 1}} \to 0 \text{ as } n \to \infty.$$
Thus $f_n$ converges to the zero function in all of the $p$-norms with $p \ge 1$. On the other hand
$$\|f_n\|_\infty = 1,$$
so $f_n$ does not converge to the zero function, or indeed any continuous function, in the $\infty$-norm.

If $V$ and $W$ both have norms then we can also define a norm on $\operatorname{Hom}(V, W)$. This norm, known as the operator norm, is defined so that for $L : V \to W$ we have
$$\|L(x)\| \le \|L\| \|x\|.$$
Using the scaling properties of the norm and linearity of $L$ this is the same as saying
$$\left\| L\left( \frac{x}{\|x\|} \right) \right\| \le \|L\|, \quad \text{for } x \neq 0.$$
Since $\left\| \frac{x}{\|x\|} \right\| = 1$, we can then define the operator norm as
$$\|L\| = \sup_{\|x\| = 1} \|L(x)\|.$$
It might happen that this norm is infinite. We say that $L$ is bounded if $\|L\| < \infty$ and unbounded if $\|L\| = \infty$.
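For matrices with the euclidean norm on both sides the operator norm is the largest singular value, which can be compared against a crude random search over unit vectors. A short numpy sketch (the matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 5))          # a bounded operator R^5 -> R^3

exact = np.linalg.norm(A, 2)         # largest singular value of A

# Crude estimate of sup_{||x|| = 1} ||A x|| by sampling unit vectors.
xs = rng.normal(size=(5, 100000))
xs /= np.linalg.norm(xs, axis=0)     # normalize each column
estimate = np.linalg.norm(A @ xs, axis=0).max()

print(exact, estimate)               # estimate <= exact and nearly equal
```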
