Vector Spaces - MIT ESP


Algebra Math Notes Study Guide: Linear Algebra

1 Vector Spaces

1-1 Vector Spaces

A vector space (or linear space) V over a field F is a set on which the operations addition (+) and scalar multiplication are defined so that for all x, y, z ∈ V and all a, b ∈ F:

0. x + y and ax are unique elements in V.                      Closure
1. x + y = y + x                                               Commutativity of Addition
2. (x + y) + z = x + (y + z)                                   Associativity of Addition
3. There exists 0 ∈ V such that for every x ∈ V, x + 0 = x.    Existence of Additive Identity (Zero Vector)
4. There exists an element −x such that x + (−x) = 0.          Existence of Additive Inverse
5. 1x = x                                                      Multiplicative Identity
6. (ab)x = a(bx)                                               Associativity of Scalar Multiplication
7. a(x + y) = ax + ay                                          Left Distributive Property
8. (a + b)x = ax + bx                                          Right Distributive Property

Elements of F and V are called scalars and vectors, respectively. F can be ℝ, ℂ, ℤ/p, etc.

Examples:
F^n                     n-tuples with entries from F
F^∞                     sequences with entries from F
M_{m×n}(F) or F^{m×n}   m×n matrices with entries from F
ℱ(S, F)                 functions from a set S to F
P(F) or F[x]            polynomials with coefficients from F
C[a, b], C(ℝ)           continuous functions on [a, b] or on (−∞, ∞)

Cancellation Law for Vector Addition: If x, y, z ∈ V and x + z = y + z, then x = y.
Corollary: 0 and −x are unique.
For all x ∈ V, a ∈ F: 0x = 0, a0 = 0, and (−a)x = −(ax) = a(−x).

1-2 Subspaces

A subset W of V over F is a subspace of V if W is a vector space over F with the operations of addition and scalar multiplication defined on V.
W ⊆ V is a subspace of V if and only if
1. x + y ∈ W whenever x ∈ W, y ∈ W.
2. cx ∈ W whenever c ∈ F, x ∈ W.
A subspace must contain 0.

Any intersection of subspaces of V is a subspace of V.

If S1, S2 are nonempty subsets of V, their sum is S1 + S2 = {x + y : x ∈ S1, y ∈ S2}.
V is the direct sum of W1 and W2 (V = W1 ⊕ W2) if W1 and W2 are subspaces of V such that W1 ∩ W2 = {0} and W1 + W2 = V. Then each element in V can be written uniquely as w1 + w2 where w1 ∈ W1, w2 ∈ W2. W1, W2 are complementary.
W1 + W2 (or W1 ⊕ W2) is the smallest subspace of V containing W1 and W2, i.e. any subspace containing W1 and W2 contains W1 + W2.

For a subspace W of V, v + W = {v + w : w ∈ W} is the coset of W containing v. v1 + W = v2 + W iff v1 − v2 ∈ W. The collection of cosets V/W = {v + W : v ∈ V} is called the quotient (factor) space of V modulo W. It is a vector space with the operations
o (v1 + W) + (v2 + W) = (v1 + v2) + W
o a(v + W) = av + W

1-3 Linear Combinations and Dependence

A vector v ∈ V is a linear combination of vectors of S ⊆ V if there exist a finite number of vectors u1, u2, …, un ∈ S and scalars a1, a2, …, an ∈ F such that
v = a1·u1 + ⋯ + an·un.
v is a linear combination of u1, u2, …, un.
The span of S, span(S), is the set consisting of all linear combinations of the vectors in S. By definition, span(∅) = {0}. S generates (spans) V if span(S) = V.
The span of S is the smallest subspace containing S, i.e. any subspace of V containing S contains span(S).
A subset S ⊆ V is linearly (in)dependent if there (do not) exist a finite number of distinct vectors u1, u2, …, un ∈ S and scalars a1, a2, …, an, not all 0, such that
a1·u1 + ⋯ + an·un = 0.
Let S be a linearly independent subset of V. For v ∈ V \ S, S ∪ {v} is linearly dependent iff v ∈ span(S).

1-4 Bases and Dimension

An (ordered) basis β for V is an (ordered) linearly independent subset of V that generates V.
Ex. e1 = (1, 0, …, 0), e2 = (0, 1, …, 0), …, en = (0, 0, …, 1) is the standard ordered basis for F^n.
A subset β of V is a basis for V iff each v ∈ V can be uniquely expressed as a linear combination of vectors of β.
Any finite spanning set S for V can be reduced to a basis for V (i.e. some subset of S is a basis).
Replacement Theorem (Steinitz): Suppose V is generated by a set G with n vectors, and let L be a linearly independent subset of V with m vectors. Then m ≤ n and there exists a subset H of G containing n − m vectors such that L ∪ H generates V.

Pf. Induct on m. Use the induction hypothesis for {v1, …, vm}; remove some ui and replace it by v(m+1).
Corollaries:
- If V has a finite basis, every basis for V contains the same number of vectors. The unique number of vectors in each basis is the dimension of V, dim(V).
- Suppose dim(V) = n. Any finite generating set contains ≥ n elements and can be reduced to a basis; any linearly independent subset contains ≤ n elements and can be extended to a basis; and if such a set contains exactly n elements, it is a basis.

[Diagram: among the subsets of V, dim(V) = n, the bases (exactly n elements) form the intersection of the linearly independent sets (≤ n elements) and the generating sets (≥ n elements).]

Let W be a subspace of a finite-dimensional vector space V. Then dim(W) ≤ dim(V). If dim(W) = dim(V), then W = V.
dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2)
dim(V) = dim(W) + dim(V/W)
The dimension of V/W is called the codimension of W in V.

1-5 Infinite-Dimensional Vector Spaces

Let ℱ be a family of sets. A member M of ℱ is maximal with respect to set inclusion if M is contained in no member of ℱ other than M. (ℱ is partially ordered by ⊆.)
A collection of sets 𝒞 is a chain (nest, tower) if for each A, B in 𝒞, either A ⊆ B or B ⊆ A. (𝒞 is totally ordered by ⊆.)
Maximal Principle: [equivalent to the Axiom of Choice] If for each chain 𝒞 ⊆ ℱ there exists a member of ℱ containing each member of 𝒞, then ℱ contains a maximal member.
A maximal linearly independent subset of S ⊆ V is a subset B of S satisfying
(a) B is linearly independent.
(b) The only linearly independent subset of S containing B is B.
Any basis is a maximal linearly independent subset, and a maximal linearly independent subset of a generating set is a basis for V.

Let S be a linearly independent subset of V. There exists a maximal linearly independent subset (basis) of V that contains S. Hence, every vector space has a basis.
Pf. Let ℱ be the family of linearly independent subsets of V. For a chain 𝒞, take the union of the sets in 𝒞, and apply the Maximal Principle.
Every basis for a vector space has the same cardinality.
Suppose S1 ⊆ S2 ⊆ V, S1 is linearly independent, and S2 generates V. Then there exists a basis β such that S1 ⊆ β ⊆ S2.
Let β be a basis for V, and S a linearly independent subset of V. There exists S1 ⊆ β so that S ∪ S1 is a basis for V.

1-6 Modules

A left (right) R-module M over the ring R is an abelian group (M, +) with scalar multiplication R × M → M (respectively M × R → M) defined so that for all r, s ∈ R and x, y ∈ M:

                  Left                  Right
1. Distributive   r(x + y) = rx + ry    (x + y)r = xr + yr
2. Distributive   (r + s)x = rx + sx    x(r + s) = xr + xs
3. Associative    (rs)x = r(sx)         x(rs) = (xr)s
4. Identity       1x = x                x1 = x

Modules are generalizations of vector spaces. All results for vector spaces hold except ones depending on division (existence of inverses in R). Again, a basis is a linearly independent set that generates the module. Note that if elements are linearly dependent, it is not necessary that one element is a linear combination of the others, and bases do not always exist.
A free module with n generators has a basis with n elements. M is finitely generated if it contains a finite subset spanning M. The rank is the size of the smallest generating set. Every basis for M (if it exists) contains the same number of elements.

1-7 Algebras

A linear algebra over a field F is a vector space 𝒜 over F with multiplication of vectors defined so that for all x, y, z ∈ 𝒜 and c ∈ F:
1. Associative    x(yz) = (xy)z
2. Distributive   x(y + z) = xy + xz, (x + y)z = xz + yz
3.                c(xy) = (cx)y = x(cy)
If there is an element 1 ∈ 𝒜 so that 1x = x1 = x, then 1 is the identity element. 𝒜 is commutative if xy = yx.
Polynomials made from vectors (with multiplication defined as above), linear transformations, and n×n matrices (see Chapters 2-3) all form linear algebras.
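The span and linear-dependence tests of 1-3 and 1-4 can be spot-checked numerically when F = ℝ. A minimal sketch, assuming NumPy is available (in_span is a hypothetical helper, not part of the notes): v ∈ span(S) exactly when adjoining v as an extra column does not raise the rank.

```python
import numpy as np

def in_span(S, v):
    """Return True if v is a linear combination of the vectors in S."""
    A = np.column_stack(S)
    # Adjoining v leaves the rank unchanged iff v is already in span(S).
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

S = [np.array([1.0, 0, 1]), np.array([0.0, 1, 1])]
print(in_span(S, np.array([2.0, 3, 5])))   # True:  2*(1,0,1) + 3*(0,1,1)
print(in_span(S, np.array([0.0, 0, 1])))   # False: not in the plane z = x + y
```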

2 Matrices

2-1 Matrices

An m×n matrix has m rows and n columns filled with entries from a field F (or ring R). A_ij = A(i, j) denotes the entry in the ith row and jth column of A. Addition and scalar multiplication are defined component-wise:
(A + B)_ij = A_ij + B_ij
(cA)_ij = c·A_ij
The n×n matrix of all zeros is denoted O_n or just O.

2-2 Matrix Multiplication and Inverses

Matrix product: Let A be an m×n and B an n×p matrix. The product AB is the m×p matrix with entries
(AB)_ij = Σ_{k=1}^{n} A_ik·B_kj,  1 ≤ i ≤ m, 1 ≤ j ≤ p.
Interpretations of the product AB:
1. Row picture: each row of A multiplies the whole matrix B.
2. Column picture: A is multiplied by each column of B. Each column of AB is a linear combination of the columns of A, with the coefficients of the linear combination being the entries in the corresponding column of B.
3. Row-column picture: (AB)_ij is the dot product of row i of A and column j of B.
4. Column-row picture: corresponding columns of A multiply corresponding rows of B and add to AB.

Block multiplication: matrices can be divided into a rectangular grid of smaller matrices, or blocks. If the cuts between the columns of A match the cuts between the rows of B, then you can multiply the matrices by replacing the entries in the product formula with blocks (entry i,j is replaced with block i,j, blocks being labeled the same way as entries).

The identity matrix I_n is an n×n square matrix with ones down the diagonal, i.e.
(I_n)_ij = δ_ij = 1 if i = j, 0 if i ≠ j.
A is invertible if there exists a matrix A^{-1} such that A·A^{-1} = A^{-1}·A = I. The inverse is unique, and for square matrices, any inverse on one side is also an inverse on the other side.

Properties of Matrix Multiplication (A is m×n):
1. A(B + C) = AB + AC              Left distributive
2. (A + B)C = AC + BC              Right distributive
3. I_m·A = A = A·I_n               Left/right identity
4. A(BC) = (AB)C                   Associative
5. a(AB) = (aA)B = A(aB)
6. (AB)^{-1} = B^{-1}·A^{-1}       (A, B invertible)
AB ≠ BA in general: multiplication is not commutative. Note that any 2 polynomials of the same matrix commute.
An n×n matrix A is either a zero divisor (there exist nonzero matrices B, C such that AB = CA = O) or it is invertible.
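The column and column-row pictures of the product can be verified directly. A small sketch, assuming NumPy (the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.], [5., 6.]])   # 3x2
B = np.array([[7., 8., 9.], [10., 11., 12.]])  # 2x3
AB = A @ B                                     # 3x3

# Column picture: column j of AB is A times column j of B,
# i.e. B[0,j]*A[:,0] + B[1,j]*A[:,1].
for j in range(B.shape[1]):
    assert np.allclose(AB[:, j], A @ B[:, j])
    assert np.allclose(AB[:, j], B[0, j] * A[:, 0] + B[1, j] * A[:, 1])

# Column-row picture: AB is the sum of (column k of A) outer (row k of B).
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
assert np.allclose(AB, outer_sum)
print("column and column-row pictures agree")
```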

The Kronecker (tensor) product of a p×q matrix A and an r×s matrix B is the pr×qs block matrix
A ⊗ B = [a_11·B … a_1q·B; ⋮ ⋱ ⋮; a_p1·B … a_pq·B].
If v and w are column vectors with q and s elements, (A ⊗ B)(v ⊗ w) = (Av) ⊗ (Bw). Kronecker products give nice eigenvalue relations: for example, the eigenvalues of A ⊗ B are the products of the eigenvalues of A and B. [AMM 107-6, 6/2000]

2-3 Other Operations, Classification

The transpose of an m×n matrix A, written A^T, is defined by (A^T)_ij = A_ji.
The adjoint or Hermitian of a matrix A is its conjugate transpose: A* = A^H = conj(A^T).

Name                               Definition           Properties
Symmetric                          A^T = A
Self-adjoint/Hermitian             A* = A               z*·A·z is real for any complex vector z.
Skew-symmetric                     A^T = −A
Skew-self-adjoint/Skew-Hermitian   A* = −A
Upper triangular                   A_ij = 0 for i > j
Lower triangular                   A_ij = 0 for i < j
Diagonal                           A_ij = 0 for i ≠ j

Properties of Transpose/Adjoint:
1. (AB)^T = B^T·A^T, (AB)* = B*·A* (for more matrices, reverse the order).
2. (A^{-1})^T = (A^T)^{-1}
3. (Ax)^T·y = x^T·(A^T·y), ⟨Ax, y⟩ = ⟨x, A*y⟩
4. A^T·A is symmetric.

The trace of an n×n matrix A is the sum of its diagonal entries:
tr(A) = Σ_{i=1}^{n} A_ii
The trace is a linear operator, and tr(AB) = tr(BA).

The direct sum A ⊕ B of an m×m matrix A and an n×n matrix B is the (m+n)×(m+n) matrix C given by
C = [A O; O B],
C_ij = A_ij for 1 ≤ i, j ≤ m; C_ij = B_{(i−m)(j−m)} for m+1 ≤ i, j ≤ m+n; C_ij = 0 otherwise.
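The Kronecker identities above can be spot-checked numerically. A sketch, assuming NumPy (the example matrices are arbitrary and chosen to have real eigenvalues):

```python
import numpy as np

A = np.array([[2., 1.], [0., 3.]])
B = np.array([[1., 4.], [2., 0.]])
v = np.array([1., 2.])
w = np.array([3., 5.])

# (A kron B)(v kron w) equals (Av) kron (Bw).
assert np.allclose(np.kron(A, B) @ np.kron(v, w), np.kron(A @ v, B @ w))

# The eigenvalues of A kron B are all products lambda_i * mu_j.
eigs = np.linalg.eigvals(np.kron(A, B))
prods = [l * m for l in np.linalg.eigvals(A) for m in np.linalg.eigvals(B)]
assert np.allclose(sorted(eigs.real), sorted(np.real(prods)))
print("Kronecker identities verified")
```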

3 Linear Transformations

3-1 Linear Transformations

For vector spaces V and W over F, a function T: V → W is a linear transformation (homomorphism) if for all x, y ∈ V and c ∈ F,
(a) T(x + y) = T(x) + T(y)
(b) T(cx) = cT(x)
It suffices to verify T(cx + y) = cT(x) + T(y). T(0) = 0 is automatic.
T(Σ_{i=1}^{n} a_i·x_i) = Σ_{i=1}^{n} a_i·T(x_i)
Ex. Rotation, reflection, projection, rescaling, derivative, definite integral.
Identity I_V and zero transformation T_0.
An endomorphism (or linear operator) is a linear transformation from V into itself.

T is invertible if it has an inverse T^{-1} satisfying T·T^{-1} = I_W and T^{-1}·T = I_V. If T is invertible, V and W have the same dimension (possibly infinite).
Vector spaces V and W are isomorphic if there exists an invertible linear transformation (an isomorphism, or automorphism if V = W) T: V → W. If V and W are finite-dimensional, they are isomorphic iff dim(V) = dim(W). V is isomorphic to F^dim(V).
The space of all linear transformations ℒ(V, W) = Hom(V, W) from V to W is a vector space over F. The inverse of a linear transformation and the composite of two linear transformations are both linear transformations.

The null space or kernel is the set of all vectors x in V such that T(x) = 0:
N(T) = {x ∈ V : T(x) = 0}
The range or image is the subset of W consisting of all images of vectors in V:
R(T) = {T(x) : x ∈ V}
Both are subspaces. nullity(T) and rank(T) denote the dimensions of N(T) and R(T), respectively.
If β = {v1, v2, …, vn} is a basis for V, then R(T) = span({T(v1), T(v2), …, T(vn)}).

Dimension Theorem: If V is finite-dimensional, nullity(T) + rank(T) = dim(V).
Pf. Extend a basis for N(T) to a basis for V by adding {v(k+1), …, vn}. Show {T(v(k+1)), …, T(vn)} is a basis for R(T) by using linearity and linear independence.

T is one-to-one iff N(T) = {0}.
If V and W have equal finite dimension, the following are equivalent:
(a) T is one-to-one.
(b) T is onto.
(c) rank(T) = dim(V)
(a) and (b) imply T is invertible.
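The dimension theorem can be illustrated for T = L_A on ℝ^n. A sketch assuming NumPy and SciPy (scipy.linalg.null_space returns an orthonormal basis for N(A), so its column count is the nullity):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])   # T = L_A : R^3 -> R^3; row 2 = 2 * row 1

rank = np.linalg.matrix_rank(A)
N = null_space(A)              # one column per nullspace basis vector
nullity = N.shape[1]

# Dimension theorem: nullity(T) + rank(T) = dim(V).
assert nullity + rank == A.shape[1]
print(f"rank = {rank}, nullity = {nullity}, dim V = {A.shape[1]}")
```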

A linear transformation is uniquely determined by its action on a basis, i.e., if β = {v1, v2, …, vn} is a basis for V and w1, w2, …, wn ∈ W, there exists a unique linear transformation T: V → W such that T(vi) = wi, i = 1, 2, …, n.
A subspace W of V is T-invariant if T(x) ∈ W for every x ∈ W. T_W denotes the restriction of T to W.

3-2 Matrix Representation of Linear Transformations

Matrix Representation:
Let β = {v1, v2, …, vn} be an ordered basis for V and γ = {w1, w2, …, wm} be an ordered basis for W. For x ∈ V, define a1, a2, …, an so that
x = Σ_{i=1}^{n} a_i·v_i
The coordinate vector of x relative to β is
φ_β(x) = [x]_β = (a1, a2, …, an)^T
Note φ_β is an isomorphism from V to F^n. The ith coordinate is f_i(x) = a_i.
Suppose T: V → W is a linear transformation satisfying
T(v_j) = Σ_{i=1}^{m} a_ij·w_i for 1 ≤ j ≤ n.
The matrix representation of T in β and γ is A = [T]_β^γ = ℳ_β^γ(T) with entries as defined above (i.e. load the coordinate representation of T(v_j) into the jth column of A).

Properties of Linear Transformations (Composition):
1. T(U1 + U2) = TU1 + TU2            Left distributive
2. (U1 + U2)T = U1T + U2T            Right distributive
3. I_W·T = T = T·I_V                 Left/right identity
4. S(TU) = (ST)U                     Associative (holds for any functions)
5. a(TU) = (aT)U = T(aU)
6. (TU)^{-1} = U^{-1}·T^{-1}         (T, U invertible)

Linear transformations [over finite-dimensional vector spaces] can be viewed as left multiplication by matrices, so linear transformations under composition and their corresponding matrices under multiplication follow the same laws. This is a motivating factor for the definition of matrix multiplication. Facts about matrices, such as associativity of matrix multiplication, can be proved by using the fact that composition of linear transformations is associative, or directly using matrices.
Note: From now on, definitions applying to matrices can also apply to the linear transformations they are associated with, and vice versa.
The left-multiplication transformation L_A: F^n → F^m is defined by L_A(x) = Ax (A an m×n matrix).
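As a concrete instance of the loading recipe above, take T = d/dx on the polynomials of degree ≤ 3, with β = γ = {1, x, x², x³}. A sketch, assuming NumPy (the basis choice is ours, for illustration):

```python
import numpy as np

# T = d/dx sends x^j to j*x^(j-1), so the coordinate vector of T(v_j)
# has a single entry j in position j-1. Load it into column j.
n = 4
A = np.zeros((n, n))
for j in range(1, n):
    A[j - 1, j] = j

# [T(p)]_beta = A @ [p]_beta.  Example: p(x) = 5 + 3x + 2x^3  ->  p'(x) = 3 + 6x^2.
p = np.array([5., 3., 0., 2.])
print(A @ p)   # [3. 0. 6. 0.] = coordinates of 3 + 6x^2
```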

Relationships between linear transformations and their matrices:
1. To find the image of a vector u ∈ V under T, multiply its coordinate vector by the matrix corresponding to T on the left: [T(u)]_γ = [T]_β^γ·[u]_β, i.e. L_A·φ_β = φ_γ·T where A = [T]_β^γ.
2. Let V, W be finite-dimensional vector spaces with bases β, γ. The function Φ: ℒ(V, W) → M_{m×n}(F) defined by Φ(T) = [T]_β^γ is an isomorphism. So, for linear transformations U, T: V → W,
   a. [T + U]_β^γ = [T]_β^γ + [U]_β^γ
   b. [aT]_β^γ = a·[T]_β^γ for all scalars a.
   c. ℒ(V, W) has dimension mn.
3. For vector spaces V, W, Z with bases α, β, γ and linear transformations T: V → W, U: W → Z, [UT]_α^γ = [U]_β^γ·[T]_α^β.
4. T is invertible iff [T]_β^γ is invertible. Then [T^{-1}]_γ^β = ([T]_β^γ)^{-1}.

3-3 Change of Coordinates

Let β and γ be two ordered bases for a finite-dimensional vector space V. The change of coordinate matrix (from β-coordinates to γ-coordinates) is Q = [I_V]_β^γ. Write vector j of β in terms of the vectors of γ, take the coefficients, and load them into the jth column of Q. (This is so e_j = (0, …, 1, …, 0)^T gets transformed into the jth column.)
1. Q^{-1} changes γ-coordinates into β-coordinates.
2. [T]_γ = Q·[T]_β·Q^{-1}
Two n×n matrices are similar if there exists an invertible matrix Q such that B = Q^{-1}·A·Q. Similarity is an equivalence relation. Similar matrices are manifestations of the same linear transformation in different bases.

3-4 Dual Spaces

A linear functional is a linear transformation from V to its field of scalars F. The dual space is the vector space of all linear functionals on V: V* = ℒ(V, F). V** is the double dual.
If V has ordered basis β = {x1, x2, …, xn}, then β* = {f1, f2, …, fn} (the coordinate functions, i.e. the dual basis) is an ordered basis for V*, and for any f ∈ V*,
f = Σ_{i=1}^{n} f(x_i)·f_i
To find the coordinate representations of the vectors of the dual basis in terms of the standard coordinate functions:
1. Load the coordinate representations of the vectors in β into the columns of a matrix W.
2. The desired representations are the rows of W^{-1}.
3. The two bases are biorthogonal. For an orthonormal basis (see section 5-5), the coordinate representations of the basis and dual basis are the same.

Let V, W have ordered bases β, γ. For a linear transformation T: V → W, define its transpose (or dual) T^t: W* → V* by T^t(g) = g∘T. T^t is a linear transformation satisfying [T^t]_{γ*}^{β*} = ([T]_β^γ)^T.
Define x̂: V* → F by x̂(f) = f(x), and ψ: V → V** by ψ(x) = x̂. (The input is a function; the output is the function evaluated at a fixed point.) If V is finite-dimensional, ψ is an isomorphism. Additionally, every ordered basis for V* is the dual basis for some basis for V.
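The dual-basis recipe carries out mechanically. A sketch assuming NumPy, with an arbitrarily chosen basis of ℝ²:

```python
import numpy as np

# Basis beta of R^2 loaded into the columns of W.
W = np.array([[1., 1.],
              [0., 1.]])        # x1 = (1,0), x2 = (1,1)

D = np.linalg.inv(W)            # rows of W^{-1} represent the dual basis f1, f2
# Biorthogonality: f_i(x_j) = delta_ij, i.e. D @ W = I.
assert np.allclose(D @ W, np.eye(2))

# f_i applied to a vector v is the ith beta-coordinate of v.
v = np.array([3., 2.])          # v = 1*x1 + 2*x2, so coordinates (1, 2)
print(D @ v)                    # [1. 2.]
```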

The annihilator of a subset S of V is a subspace of V*:
S^0 = Ann(S) = {f ∈ V* : f(x) = 0 for all x ∈ S}

4 Systems of Linear Equations

4-1 Systems of Linear Equations

The system of equations
a11·x1 + ⋯ + a1n·xn = b1
⋮
am1·x1 + ⋯ + amn·xn = bm
can be written in matrix form as Ax = b, where A = (a_ij) is the m×n coefficient matrix and b = (b1, …, bm)^T. The augmented matrix is (A | b) (the entries of b placed to the right of A).
The system is consistent if it has solution(s). It is singular if it has zero or infinitely many solutions. If b = 0, the system is homogeneous.
1. Row picture: each equation gives a line/plane/hyperplane. They meet at the solution set.
2. Column picture: the columns of A combine (with the coefficients x1, …, xn) to produce b.

4-2 Elimination

There are three types of elementary row/column operations:
(1) Interchanging 2 rows/columns
(2) Multiplying any row/column by a nonzero scalar
(3) Adding any multiple of a row/column to another row/column
An elementary matrix is the matrix obtained by performing an elementary operation on I_n. Any two matrices related by elementary operations are (row/column-)equivalent. Performing an elementary row/column operation is the same as multiplying by the corresponding elementary matrix on the left/right. The inverse of an elementary matrix is an elementary matrix of the same type. When an elementary row operation is performed on an augmented matrix or on the equation Ax = b, the solution set of the corresponding system of equations does not change.

Gaussian elimination: reduce a system of equations (line up the variables; the equations are the rows), a matrix, or an augmented matrix by using elementary row operations.
Forward pass
1. Start with the first row.
2. Excluding all rows before the current row (row j), in the leftmost nonzero column (column k), make the entry in the current row nonzero by switching rows as necessary (type 1 operation). The pivot d_j is the first nonzero entry in the current row, the row that does the elimination. [Optional: divide the current row by the pivot to make the entry 1 (type 2).]
3. Make all numbers below the pivot zero. To make the entry a_ik in the ith row 0, subtract row j times the multiplier l_ik = a_ik/d_j from row i. This corresponds to multiplication by a type 3 elementary matrix M_ik.
4. Move on to the next row, and repeat until only zero rows remain (or the rows are exhausted).
Backward pass (back-substitution)
5. Work upward, beginning with the last nonzero row, and add multiples of each row to the rows above to create zeros in the pivot column. When working with equations, this is essentially substituting the value of the variable into earlier equations.
6. Repeat for each preceding row except the first.
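A minimal implementation of the passes above, assuming NumPy (rref is a hypothetical helper; it performs the eliminations of steps 3 and 5 in a single sweep per pivot):

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce M to reduced row echelon form by elementary row operations."""
    A = M.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):                      # leftmost nonzero column first
        pivot = None
        for r in range(row, m):               # find a nonzero entry (type 1 swap)
            if abs(A[r, col]) > tol:
                pivot = r
                break
        if pivot is None:
            continue                          # free column, no pivot here
        A[[row, pivot]] = A[[pivot, row]]
        A[row] /= A[row, col]                 # scale the pivot to 1 (type 2)
        for r in range(m):                    # eliminate below and above (forward + backward)
            if r != row and abs(A[r, col]) > tol:
                A[r] -= A[r, col] * A[row]    # type 3: subtract multiplier * pivot row
        row += 1
    return A

aug = np.array([[1., 2., 1., 8.],
                [2., 4., 0., 10.],
                [1., 0., 1., 4.]])
print(rref(aug))    # last column reads off the solution (1, 2, 3)
```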

A free variable is any variable corresponding to a column without a pivot. Free variables can be arbitrary, leading to infinitely many solutions. Express the solution in terms of the free variables.
If elimination produces a contradiction (in (A | b), a row with only the last entry nonzero, corresponding to 0 = a with a ≠ 0), there is no solution.

Gaussian elimination produces the reduced row echelon form of the matrix (the forward pass accomplishes 1, (2), 3; the backward pass accomplishes 4):
1. Any row containing a nonzero entry precedes any zero row.
2. The first nonzero entry in each row is 1.
3. It occurs in a column to the right of the first nonzero entry in the preceding row.
4. The first nonzero entry in each row is the only nonzero entry in its column.
The reduced row echelon form of a matrix is unique.

4-3 Factorization

Elimination = Factorization
Performing Gaussian elimination on a matrix A is equivalent to multiplying A by a sequence of elementary row matrices.
If no row exchanges are made, U = (∏ E_ij)·A, so A can be factored in the form
A = (∏ E_ij)^{-1}·U = LU
where L is a lower triangular matrix with 1's on the diagonal and U is an upper triangular matrix (note the factors appear in opposite order). Note E_ij and E_ij^{-1} differ only in the sign of entry (i, j), and the multipliers go directly into the entries of L. U can be factored into a diagonal matrix D containing the pivots and an upper triangular matrix U′ with 1's on the diagonal:
A = LDU′
The first factorization corresponds to the forward pass; the second corresponds to completing the back-substitution. If A is symmetric, U′ = L^T.
Using A = LU, (LU)x = Ax = b can be split into two triangular systems:
1. Solve Lc = b for c.
2. Solve Ux = c for x.
A permutation matrix P has the rows of I in some order; it switches rows. If row exchanges are required, doing the row exchanges
1. in advance gives PA = LU;
2. after elimination gives A = L1·P1·U1.
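A sketch of the two triangular solves using SciPy, which factors with the convention A = P·L·U (so the first solve uses P^T·b when exchanges occurred):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[2., 1., 1.],
              [4., -6., 0.],
              [-2., 7., 2.]])
b = np.array([5., -2., 9.])

P, L, U = lu(A)                                # SciPy convention: A = P @ L @ U
# A x = b becomes L U x = P^T b; split into two triangular systems:
c = solve_triangular(L, P.T @ b, lower=True)   # 1. solve Lc = P^T b
x = solve_triangular(U, c)                     # 2. solve Ux = c
assert np.allclose(A @ x, b)
print(x)
```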

4-4 The Complete Solution to Ax = b, the Four Subspaces

The rank of a matrix A is the rank of the linear transformation L_A, and equals the number of pivots after elimination.
Properties:
1. Multiplying by invertible matrices does not change the rank of a matrix, so elementary row and column operations are rank-preserving.
2. rank(A^T) = rank(A)
3. Ax = b is consistent iff rank(A) = rank(A | b).
4. Rank inequalities:
   Linear transformations T, U: rank(TU) ≤ min(rank(T), rank(U))
   Matrices A, B: rank(AB) ≤ min(rank(A), rank(B))

Four Fundamental Subspaces of A
1. The row space C(A^T) is the subspace generated by the rows of A, i.e. it consists of all linear combinations of the rows of A.
   a. Eliminate to find the nonzero rows. These rows are a basis for the row space.
2. The column space C(A) is the subspace generated by the columns of A.
   a. Eliminate to find the pivot columns. These columns of A (the original matrix) are a basis for the column space. The free columns are combinations of earlier columns, with the entries of F the coefficients. (See below.)
   b. This gives a technique for extending a linearly independent set to a basis: put the vectors in the set, then the vectors of a known basis, down the columns of A.
3. The nullspace N(A) consists of all solutions to Ax = 0.
   a. Finding the nullspace (after elimination): for each free variable x, set x = 1 and all other free variables to 0, and solve the resulting system. This gives a special solution for each free variable; the special solutions generate the nullspace. (A code sketch of this procedure follows the list.)
   b. Alternatively, the nullspace matrix (containing the special solutions in its columns) is N = [−F; I] when the row reduced echelon form is R = [I F; 0 0]. If columns are switched in R, corresponding rows are switched in N.
4. The left nullspace N(A^T) consists of all solutions to A^T·x = 0, i.e. x^T·A = 0^T.
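A sketch of the special-solution procedure using SymPy, whose rref and nullspace methods carry out exactly the steps above:

```python
from sympy import Matrix

A = Matrix([[1, 2, 2, 4],
            [3, 8, 6, 16]])

R, pivot_cols = A.rref()
print(R)              # [[1, 0, 2, 0], [0, 1, 0, 2]]
print(pivot_cols)     # (0, 1); columns 2 and 3 are free

# One special solution per free variable: set that free variable to 1,
# the others to 0, and solve. sympy returns these directly:
for s in A.nullspace():
    print(s.T)        # [-2, 0, 1, 0] and [0, -2, 0, 1], matching N = [-F; I]
    assert A * s == Matrix([0, 0])
```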

Fundamental Theorem of Linear Algebra (Part 1): Dimensions of the Four Subspaces. Let A be m×n with rank(A) = r. (If the field is complex, replace A^T by A*.)

Row space       C(A^T) = {A^T·y}           dimension r
Column space    C(A) = {Ax}                dimension r
Nullspace       N(A) = {x : Ax = 0}        dimension n − r
Left nullspace  N(A^T) = {y : A^T·y = 0}   dimension m − r

Row rank = column rank; F^n = C(A^T) ⊕ N(A) and F^m = C(A) ⊕ N(A^T).
The relationships between the dimensions can be shown using pivots or the dimension theorem.

The Complete Solution to Ax = b
1. Find the nullspace N, i.e. solve Ax = 0.
2. Find any particular solution x_p to Ax = b (there may be no solution). Set the free variables to 0.
3. The solution set is x_p + N; i.e. all solutions are of the form x_p + x_n, where x_n is in the nullspace and x_p is a particular solution.

4-5 Inverse Matrices

A is invertible iff it is square (n×n) and any one of the following is true:
1. A has rank n, i.e. A has n pivots.
2. Ax = b has exactly 1 solution for every b.
3. Its columns/rows are a basis for F^n.
Gauss-Jordan Elimination: If A is an invertible n×n matrix, it is possible to transform (A | I_n) into (I_n | A^{-1}) by elementary row operations. Follow the same steps as in Gaussian elimination, but on (A | I_n). If A is not invertible, the reduction produces a row whose first n entries are zeros.
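Gauss-Jordan inversion as code. A sketch assuming NumPy; gauss_jordan_inverse is a hypothetical helper (it picks the largest available pivot for numerical safety, a detail the notes do not require):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix (A | I)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))   # choose a usable pivot row
        if abs(M[pivot, col]) < 1e-12:
            raise ValueError("matrix is singular")      # a zero row appears in the left half
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                                     # (A | I) has become (I | A^{-1})

A = np.array([[2., 1.], [5., 3.]])
Ainv = gauss_jordan_inverse(A)
assert np.allclose(A @ Ainv, np.eye(2))
print(Ainv)          # [[ 3. -1.], [-5.  2.]]
```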

5 Inner Product Spaces

5-1 Inner Products

An inner product on a vector space V over F (ℝ or ℂ) is a function that assigns to each ordered pair (x, y) ∈ V × V a scalar ⟨x, y⟩, such that for all x, y, z ∈ V and c ∈ F:
1. ⟨x + z, y⟩ = ⟨x, y⟩ + ⟨z, y⟩
2. ⟨cx, y⟩ = c⟨x, y⟩ (the inner product is linear in its first component)
3. ⟨x, y⟩ = conj(⟨y, x⟩) (Hermitian)
4. ⟨x, x⟩ > 0 for x ≠ 0 (positive)
V is called an inner product space, and a Euclidean/unitary space if F is ℝ/ℂ.
The inner product is conjugate-linear in the second component:
1. ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩
2. ⟨x, cy⟩ = conj(c)·⟨x, y⟩
If ⟨x, y⟩ = ⟨x, z⟩ for all x ∈ V, then y = z.
The standard inner product (dot product) of x = (a1, …, an) and y = (b1, …, bn) is
x·y = ⟨x, y⟩ = Σ_{i=1}^{n} a_i·conj(b_i)
The standard inner product for the space H of continuous complex functions on [0, 2π] is
⟨f, g⟩ = (1/2π)·∫_0^{2π} f(t)·conj(g(t)) dt

A norm on a vector space is a real-valued function ‖·‖ satisfying
1. ‖cx‖ = |c|·‖x‖
2. ‖x‖ ≥ 0, with equality iff x = 0.
3. Triangle Inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖
The distance between two vectors x, y is ‖x − y‖.
In an inner product space, the norm (length) of a vector is ‖x‖ = √⟨x, x⟩.
Cauchy-Schwarz Inequality: |⟨x, y⟩| ≤ ‖x‖·‖y‖

5-2 Orthogonality

Two vectors are orthogonal (perpendicular) when their inner product is 0. A subset S is orthogonal if any two distinct vectors in S are orthogonal, and orthonormal if additionally all vectors have length 1. Subspaces V and W are orthogonal if each v ∈ V is orthogonal to each w ∈ W. The orthogonal complement V^⊥ ("V perp") of V is the subspace containing all vectors orthogonal to V. (Warning: (V^⊥)^⊥ = V holds when V is finite-dimensional, not necessarily when V is infinite-dimensional.) When an orthonormal basis is chosen, every inner product on finite-dimensional V looks like the standard inner product: the conditions effectively determine what the inner product has to be.
Pythagorean Theorem: If x and y are orthogonal, ‖x + y‖² = ‖x‖² + ‖y‖².

Fundamental Theorem of Linear Algebra (Part 2):
The nullspace is the orthogonal complement of the row space.
The left nullspace is the orthogonal complement of the column space.
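Part 2 of the Fundamental Theorem can be spot-checked numerically. A sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])

N = null_space(A)                    # columns form an orthonormal basis of N(A)
# Each row of A (a row-space generator) is orthogonal to each nullspace vector.
assert np.allclose(A @ N, 0)

# Dimensions add up: dim C(A^T) + dim N(A) = n.
assert np.linalg.matrix_rank(A) + N.shape[1] == A.shape[1]
print("nullspace is orthogonal to the row space")
```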

5-3 Projections

Take 1: Matrix and geometric viewpoint
The [orthogonal] projection of b onto a is
p = (⟨b, a⟩/⟨a, a⟩)·a = a·(a*·b)/(a*·a) = a·x̂
The last two expressions are for column vectors in ℂ^n, using the dot product. (Note: this shows that a·b = ‖a‖·‖b‖·cos θ in 2 and 3 dimensions.)
Let S be a finite orthogonal basis. A vector y is the sum of its projections onto the vectors of S:
y = Σ_{v∈S} (⟨y, v⟩/‖v‖²)·v
Pf. Write y as a linear combination and take the inner product of y with a vector in the basis; use orthogonality to cancel all but one term.
As a corollary, any orthogonal subset is linearly independent.
To find the projection of b onto a finite-dimensional subspace W, first find an orthonormal basis β for W (see section 5-5). The projection is
p = Σ_{v∈β} ⟨b, v⟩·v
and the error is e = b − p. e is perpendicular to W, and p is the vector in W for which ‖b − p‖ is minimal. (The proof uses the Pythagorean theorem.)
Bessel's Inequality: (β an orthogonal basis for a subspace)
Σ_{v∈β} |⟨y, v⟩|²/‖v‖² ≤ ‖y‖², with equality iff y = Σ_{v∈β} (⟨y, v⟩/‖v‖²)·v
If β = {v1, …, vn} is an orthonormal basis, then for any linear transformation T, ([T]_β)_ij = ⟨T(vj), vi⟩.

Alternatively: let W be a subspace of ℂ^m generated by the linearly independent set {a1, …, an}, loaded into the columns of A. Solving A*(b − Ax̂) = 0, i.e. A*·A·x̂ = A*·b, the projection of b onto W is
p = A·x̂ = A(A*·A)^{-1}·A*·b = P·b
where P is the projection matrix. In the special case that the set is orthonormal (columns of a matrix Q), x̂ = Q^T·b and p = Q·Q^T·b, so P = Q·Q^T.
A matrix P is a projection matrix iff P² = P.

Take 2: Linear transformation viewpoint
If V = W1 ⊕ W2, then the projection on W1 along W2 is defined by
T(x) = x1 when x = x1 + x2; x1 ∈ W1, x2 ∈ W2.
T is an orthogonal projection if R(T)^⊥ = N(T) and N(T)^⊥ = R(T). A linear operator T is an orthogonal projection iff T² = T = T*.
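The matrix-viewpoint formulas can be exercised directly. A sketch assuming NumPy, in the real case (so A* = A^T), with an arbitrary 2-dimensional subspace of ℝ³:

```python
import numpy as np

# W = column space of A (two independent columns in R^3).
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
P = A @ np.linalg.inv(A.T @ A) @ A.T    # projection matrix onto C(A)

b = np.array([6., 0., 0.])
p = P @ b                               # projection of b onto W
e = b - p                               # error vector

assert np.allclose(P @ P, P)            # idempotent: P^2 = P
assert np.allclose(P.T, P)              # orthogonal projection: P = P^T
assert np.allclose(A.T @ e, 0)          # e is perpendicular to W
print(p)                                # [ 5.  2. -1.]
```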

5-4 Minimal Solutions and Least Squares Approximations

When Ax = b is consistent, the minimal solution is the solution of least norm.
1. There exists exactly one minimal solution s, and s ∈ C(A*).
2. s is the only solution to Ax = b in C(A*): writing s = A*·u, AA*·u = b gives s = A*·u = A*·(AA*)^{-1}·b.

The least squares solution x̂ makes E = ‖Ax̂ − b‖² as small as possible. (Generally, Ax = b is inconsistent.) Project b onto the column space of A.
To find the real function of the form y(t) = Σ_{i=1}^{m} C_i·f_i(t), for fixed functions f_i, that is closest to the points (t1, y1), …, (tn, yn), i.e. such that the error e² = Σ_{i=1}^{n} e_i² = Σ_{i=1}^{n} (y_i − y(t_i))² is least: let A be the matrix with A_ij = f_j(t_i) and b = (y1, …, yn)^T. Then Ax = b is equivalent to the system y(t_i) = y_i. Now find the projection of b onto the columns of A by multiplying by A^T and solving A^T·A·x̂ = A^T·b. Here p = A·x̂ gives the values estimated by the best-fit curve, and e gives the errors in the estimates.

Ex. Linear functions y = C + Dt: A has rows (1, t_i). The equation A^T·A·x̂ = A^T·b becomes
[n      Σt_i ] [C]   [Σy_i    ]
[Σt_i   Σt_i²] [D] = [Σt_i·y_i]
A has orthogonal columns when Σt_i = 0. To produce orthogonal columns, shift the times by letting T_i = t_i − t̂, where t̂ = (t1 + ⋯ + tn)/n. Then A^T·A is diagonal, and C = Σy_i/n, D = ΣT_i·y_i/ΣT_i². The least squares line is y = C + D(t − t̂).

[Diagram: the least squares picture of the four subspaces. Split x = x_r + x_n with x_r ∈ C(A^T) (dimension r) and x_n ∈ N(A) (dimension n − r); then A·x_r = p and A·x_n = 0. On the output side, b = p + e with p ∈ C(A) (dimension r) and e ∈ N(A^T) (dimension m − r), so A^T·e = 0. The least squares solution satisfies A·x̂ = p; the minimal solution to A·x_r = p is x_r.]
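The straight-line fit, done both through the normal equations and through NumPy's built-in solver (a sketch; the data points are arbitrary):

```python
import numpy as np

t = np.array([0., 1., 2.])
y = np.array([6., 0., 0.])

A = np.column_stack([np.ones_like(t), t])      # columns: 1 and t, so y = C + D*t
# Normal equations: A^T A xhat = A^T y.
xhat = np.linalg.solve(A.T @ A, A.T @ y)
C, D = xhat
p = A @ xhat                                   # values on the best-fit line
e = y - p                                      # errors, orthogonal to C(A)
assert np.allclose(A.T @ e, 0)

# Same answer from the built-in least squares solver.
assert np.allclose(np.linalg.lstsq(A, y, rcond=None)[0], xhat)
print(f"y = {C:.1f} + {D:.1f} t")              # y = 5.0 + -3.0 t
```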
