Introduction To Linear Algebra - Calvin.edu


Introduction to Linear Algebra

T. Scofield

April 11, 2013

Contents

1 Solving Linear Systems of Equations
  1.1 Matrix Algebra
  1.2 Matrix Multiplication and Systems of Linear Equations
    1.2.1 Several interpretations of matrix multiplication
    1.2.2 Systems of linear equations
  1.3 Affine transformations of R^2
  1.4 Gaussian Elimination
    1.4.1 Examples of the method
    1.4.2 Finding an inverse matrix
  1.5 LU Factorization of a Matrix
  1.6 Determinants
    1.6.1 The planar case
    1.6.2 Calculating determinants for n-square matrices, with n ≥ 2
    1.6.3 Some facts about determinants
    1.6.4 Cramer's Rule
  1.7 Linear Independence and Matrix Rank
  1.8 Eigenvalues and Eigenvectors

2 Vector Spaces
  2.1 Properties and Examples of Vector Spaces
    2.1.1 Properties of R^n
    2.1.2 Some non-examples
  2.2 Vector Subspaces
  2.3 Bases and Dimension

3 Orthogonality and Least-Squares Solutions
  3.1 Inner Products, Norms, and Orthogonality
    3.1.1 Inner products
    3.1.2 Orthogonality
    3.1.3 Inner product spaces
  3.2 The Fundamental Subspaces
    3.2.1 Direct Sums
    3.2.2 Fundamental subspaces, the normal equations, and least-squares solutions

4 Selected Answers to Exercises

1 Solving Linear Systems of Equations

1.1 Matrix Algebra

Definition 1: An m-by-n real matrix is a table of m rows and n columns of real numbers. We say that the matrix has dimensions m-by-n. The plural of matrix is matrices.

Remarks:

1. Often we write a matrix A = (a_ij), indicating that the matrix under consideration may be referred to as a single unit by the name A, but that one may also refer to the entry in the ith row, jth column as a_ij.

2. If one of the matrix dimensions m or n is equal to 1, it is common to call the table a vector (or column vector, if n = 1; a row vector, if m = 1). Though column vectors are just special matrices, it is common to use lowercase boldface letters for them (like u, v, x, etc.), reserving uppercase boldface letters for other types of matrices. When x is an n-by-1 vector, we often denote its components with singly-subscripted non-bold letters: x_1 for the first component, x_2 for the 2nd, and so on.

Practitioners carry out large-scale linear algebraic computations using software, and in this section we will alternate between discussions of concepts and demonstrations of corresponding implementations in the software package Sage. To create a matrix (or vector) in Sage, you call the matrix() function and enclose elements in square brackets ([ and ]). Elements on the same row should be separated only by a comma. When you wish to start a new row, you indicate this using a "], [" to close off one row and begin another. So, to enter the matrices

$$\begin{bmatrix} 1 & 5 & -2 \end{bmatrix}, \qquad \begin{bmatrix} 4 \\ -1 \\ 3 \\ 7 \end{bmatrix}, \qquad\text{and}\qquad \begin{bmatrix} 3 & 0 \\ -1 & 5 \\ 3 & 1 \end{bmatrix},$$

you can type/execute

    print matrix(QQ, [[1, 5, -2]])
    A = matrix(QQ, [[4], [-1], [3], [7]])
    A.show()
    B = matrix(QQ, [[3, 0], [-1, 5], [3, 1]])
    matrix(QQ, 2, 3)             # a two-by-three consisting of zeros only
    C = matrix(QQ, 3, 4, [1,3,2,4,3,5,4,6,5,7,6,8])   # 3-by-4 matrix, these entries

One can easily extract whole submatrices from a matrix. If you want to grab certain rows from the matrix B, you achieve this using the matrix_from_rows() "method":

    B.matrix_from_rows([0])      # grabs 1st row, as Sage does zero-indexing
    B.matrix_from_rows([1,2])    # grabs row 2 and row 3
    B.matrix_from_rows([2,1])    # same as above, but rows are resequenced

    # The same results, but with less typing
    B[0,:]
    B[[1,2],:]
    B[[2,1],:]

The matrix_from_columns() method is for extracting columns from a matrix, while the matrix_from_rows_and_columns() method allows for the extraction of submatrices located at precise locations (rows and columns):

    B.matrix_from_columns([1])                  # extracts column 2 from matrix B
    B.matrix_from_rows_and_columns([2,0],[1])   # try this and see!

    # Again, a quicker way to get these same results
    B[:,1]
    B[[2,0],1]

Examples above show how to build a matrix one element at a time, specifying every entry. If you have a matrix that, but for a couple of entries, is full of zeros, you might try the following:

    R = matrix(QQ, 3, 4)   # makes a 3-by-4 matrix full of zeros
    R[2,1] = -1            # places (-1) in row 3, column 2

You get a matrix whose entries are all ones via

    W = ones_matrix(QQ, 4, 2)   # resulting matrix is 4-by-2
    Y = ones_matrix(QQ, 4)      # resulting matrix is 4-by-4
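For readers working without Sage, the same row and column extraction can be sketched in plain Python, with a matrix stored as a list of rows. The helper names `rows_of` and `cols_of` are illustrative choices, not Sage functions:

```python
# Row and column extraction in plain Python, mirroring Sage's
# matrix_from_rows() / matrix_from_columns(). A matrix is a list of rows.

B = [[3, 0], [-1, 5], [3, 1]]   # the matrix B entered above

def rows_of(M, idxs):
    """Rows of M at the given (zero-based) indices, in the order given."""
    return [M[i] for i in idxs]

def cols_of(M, idxs):
    """Columns of M at the given indices, returned as a matrix."""
    return [[row[j] for j in idxs] for row in M]

print(rows_of(B, [2, 1]))   # [[3, 1], [-1, 5]]  -- rows resequenced
print(cols_of(B, [1]))      # [[0], [5], [1]]    -- column 2 of B
```

As in Sage, the zero-based index [2, 1] pulls the third and second rows, in that order.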

A special class among the square matrices (i.e., those having equal numbers of rows and columns) are the diagonal matrices. Such a matrix A = (a_ij) is one whose entries a_ij are zero whenever i ≠ j. The diagonal_matrix() function makes it easy to construct such a matrix in Sage. Try out these commands individually and view their results:

    diagonal_matrix(QQ, (3, -1, 2))
    diagonal_matrix(QQ, 4, (3, -1, 2))
    diagonal_matrix(QQ, 5, (0, 3, -1, 0, 2))
    identity_matrix(4)
    -3*identity_matrix(5)

One can also use the block_matrix() command to join smaller matrices into bigger ones. Try these commands:

    A = matrix(QQ, 3, 3, [-2, 1, 0, 1, -2, 1, 0, 1, -2])
    show(A)
    id = identity_matrix(3)
    block_matrix([[A, id], [id, A]])

The title of this section is "Matrix Algebra". In any algebraic system we must know what one means by the word equality.

Definition 2: Two matrices A = (a_ij) and B = (b_ij) are said to be equal if their dimensions are equal, and if the entries in every location are equal.

Example 1: The two vectors

$$\begin{bmatrix} 3 & 1 & 2 & 5 \end{bmatrix} \qquad\text{and}\qquad \begin{bmatrix} 3 \\ 1 \\ 2 \\ 5 \end{bmatrix}$$

cannot be considered equal, since they have different dimensions. While the entries are the same, the former is a row vector and the latter a column vector.

In this course, the term vector will be synonymous with column vector. The set of vectors having n components, all of which are real numbers, will be called R^n, or sometimes Euclidean n-space. The elements of R^n are n-by-1 matrices, sometimes called n-vectors. However, as it takes less room on a page to list the contents of a vector horizontally

rather than vertically, we will often specify an n-vector horizontally using parentheses, as in

    x = (x_1, x_2, ..., x_n).

The most fundamental algebraic operations on matrices are as follows:

1. Addition of Two Matrices.

   Given two m-by-n matrices A = (a_ij) and B = (b_ij), we define their sum A + B to be the m-by-n matrix whose entries are (a_ij + b_ij). That is,

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{bmatrix} := \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1n}+b_{1n} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2n}+b_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1}+b_{m1} & a_{m2}+b_{m2} & \cdots & a_{mn}+b_{mn} \end{bmatrix}$$

   In order to add two matrices, they must have the same number of rows and columns (i.e., be matrices with the same dimensions). Note that this is not the same as saying they must be square matrices!

   It is simple to add two matrices in Sage. One possibility is code like

       A = matrix(QQ, 2, 3, [3, 1, 6, 1, 2, -1])
       A + matrix(QQ, 2, 3, [1, 1, 1, 1, 1, 1])

   which creates a 2-by-3 matrix A, and then adds to it another 2-by-3 matrix whose entries are all ones.

2. Multiplication of a Matrix by a Scalar.

   Given an m-by-n matrix A = (a_ij) and a scalar c, we define the scalar multiple cA to be the m-by-n matrix whose entries are (c a_ij). That is,

$$c \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} := \begin{bmatrix} ca_{11} & ca_{12} & \cdots & ca_{1n} \\ ca_{21} & ca_{22} & \cdots & ca_{2n} \\ \vdots & \vdots & & \vdots \\ ca_{m1} & ca_{m2} & \cdots & ca_{mn} \end{bmatrix}$$

   Our definitions for matrix addition and scalar multiplication have numerous implications. They include the following:

   a) Matrix subtraction is merely a combination of matrix addition and scalar multiplication by (-1): A - B := A + (-1)B.

   b) Distributive laws between matrix addition and scalar multiplication hold:

      i. c(A + B) = cA + cB.
      ii. (c + d)A = cA + dA.

   c) An appropriately-sized matrix whose entries are all zeros serves as an additive identity (or zero matrix, denoted in boldface by 0). That is, A + 0 = A.

   d) Scalar multiplication by 0 produces the zero matrix 0. That is, (0)A = 0.

3. Multiplication of Two Matrices.

   When we multiply two matrices, the product is a matrix whose elements arise from dot products[1] between the rows of the first (matrix) factor and columns of the second. An immediate consequence of this: if A and B are matrices, the product AB makes sense precisely when the number of columns in A is equal to the number of rows in B. To be clearer about how such a matrix product is achieved, suppose A is an m-by-n matrix while B is an n-by-p matrix. If we write

$$A = \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \vdots \\ \mathbf{r}_m \end{bmatrix} \qquad\text{and}\qquad B = \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_p \end{bmatrix},$$

   with each of the rows r_i of A having n components and likewise each of the columns c_j of B, then their product is an m-by-p matrix whose entry in the ith row, jth column is obtained by taking the dot product of r_i with c_j. Thus if

$$A = \begin{bmatrix} 2 & -1 \\ 0 & 3 \\ -5 & 1 \\ 7 & -4 \end{bmatrix} \qquad\text{and}\qquad B = \begin{bmatrix} 3 & 1 & 0 \\ -2 & 4 & 10 \end{bmatrix},$$

[1]: The dot product of two vectors is a concept from vector calculus, studied primarily in the case where those vectors have just two components. It appears as well in elementary physics courses.

then the product AB will be the 4-by-3 matrix

$$AB = \begin{bmatrix} (2,-1)\cdot(3,-2) & (2,-1)\cdot(1,4) & (2,-1)\cdot(0,10) \\ (0,3)\cdot(3,-2) & (0,3)\cdot(1,4) & (0,3)\cdot(0,10) \\ (-5,1)\cdot(3,-2) & (-5,1)\cdot(1,4) & (-5,1)\cdot(0,10) \\ (7,-4)\cdot(3,-2) & (7,-4)\cdot(1,4) & (7,-4)\cdot(0,10) \end{bmatrix} = \begin{bmatrix} 8 & -2 & -10 \\ -6 & 12 & 30 \\ -17 & -1 & 10 \\ 29 & -9 & -40 \end{bmatrix}.$$

Remarks:

- When we write AB, where A, B are appropriately-sized matrices, we will mean the product of these two matrices using multiplication as defined above. In Sage, you must be careful to include the multiplication symbol (since AB is a valid variable name), as in

      A = matrix(QQ, [[2, -1], [0, 3], [-5, 1], [7, -4]])
      B = matrix(QQ, [[3, 1, 0], [-2, 4, 10]])
      A*B

- Notice that, if A is 4-by-2 and B is 2-by-3, then the product AB is defined, but the product BA is not. This is because the number of columns in B is unequal to the number of rows in A. Thus, for it to be possible to multiply two matrices, one of which is m-by-n, in either order, it is necessary that the other be n-by-m. Even when both products AB and BA are possible, however, matrix multiplication is not commutative. That is, AB ≠ BA, in general.

- We do have a distributive law for matrix multiplication and addition. In particular, A(B + C) = AB + AC, for all appropriately-sized matrices A, B, C.

- When an m-by-n matrix A is multiplied by an n-by-1 (column) vector (an n-vector, for short), the result is an m-vector. That is, for each n-vector v, Av is an m-vector. It is natural to think of left-multiplication by A as a mapping (or function) which takes n-vectors v as inputs and produces m-vectors Av as outputs. Of course, if B is an ℓ-by-m matrix, then one can left-multiply the product Av by B to get B(Av). The manner in which we defined matrix

products ensures that things can be grouped differently with no change in the answer; that is,

    (BA)v = B(Av).

- Notice that the n-by-n matrix

$$I_n := \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}$$

  has the property that, whenever C is an n-by-p matrix (so that the product I_n C makes sense), it is the case that I_n C = C. Moreover, if B is an m-by-n matrix, then B I_n = B. Since multiplication by I_n does not change the matrix (or vector) with which you started, I_n is called the n-by-n identity matrix. In most instances, we will write I instead of I_n, as the dimensions of I should be clear from context. In Sage, the function that returns the n-by-n identity matrix is identity_matrix(). This explains the result of the commands

      B = matrix(QQ, 2, 3, [1, 2, 3, 2, 3, -1])
      B * identity_matrix(3)

- For a square (n-by-n) matrix A, there may be a corresponding n-by-n matrix B having the property that

      AB = BA = I_n.

  If so, the matrix A is said to be nonsingular or invertible, with inverse matrix B. Usually the inverse of A, when it exists, is denoted by A^(-1). This relationship is symmetric, so if B is the inverse of A, then A is the inverse of B as well. If A is not invertible, it is said to be singular.

  The following fact about the product of invertible matrices is easily proved.

  Theorem 1: Suppose A, B are both n-by-n invertible matrices. Then their product AB is invertible as well, having inverse (AB)^(-1) = B^(-1) A^(-1).

  When A is invertible, it is not as easy to find A^(-1) as one might think. Execute the commands below. Would you have been able to guess the contents of the inverse matrix B?

      A = matrix(QQ, [[1, 2, 3], [2, 3, -1], [1, 0, -2]])
      B = A.inverse()

      show(B)
      show(B.n())
      show(A*B)
      B*A

4. Transposition of a Matrix.

   Look closely at the two matrices

$$A = \begin{bmatrix} 2 & 0 & -1 & 1 \\ -3 & 1 & 1 & -1 \\ 2 & 2 & 0 & 1 \end{bmatrix} \qquad\text{and}\qquad B = \begin{bmatrix} 2 & -3 & 2 \\ 0 & 1 & 2 \\ -1 & 1 & 0 \\ 1 & -1 & 1 \end{bmatrix}$$

   for a connection between the two. The matrix B has been formed from A so that the first column of A became the first row of B, the second column of A became the 2nd row of B, and so on. (One might say with equal accuracy that the rows of A became the columns of B, or that the rows/columns of B are the columns/rows of A.) The operation that produces this matrix B from (given) matrix A is called transposition, and matrix B is called the transpose of A, denoted as B = A^T. (Note: In some texts the prime symbol is used in place of the T, as in B = A'.)

   When you already have a matrix A defined in Sage, the transpose() method will produce its transpose:

       A = matrix(QQ, 2, 3, [1, 2, 3, 2, 3, -1])
       show(A)
       C = A.transpose()
       show(C)

   Remarks:

   - If A is an m-by-n matrix, then A^T is n-by-m.

   - Some facts which are easy to prove about matrix transposition are the following:

     (i) For all matrices A it is the case that (A^T)^T = A.
     (ii) Whenever two matrices A and B can be added, it is the case that (A + B)^T = A^T + B^T.
     (iii) Whenever the product AB of two matrices A and B is defined, it is the case that (AB)^T = B^T A^T. (Compare this result to Theorem 1, a similar-looking fact about the inverse of the product of two invertible matrices.)
     (iv) For each invertible matrix A, A^T is invertible as well, with (A^T)^(-1) = (A^(-1))^T.

   - There are some matrices A for which A^T = A. Such matrices are said to be symmetric.
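The row-by-column recipe for the product, and the reversal rule (AB)^T = B^T A^T from the remarks above, can be checked in plain Python (a sketch for readers without Sage, using the 4-by-2 and 2-by-3 matrices multiplied earlier in this section):

```python
# Matrix product via dot products of rows with columns, plus transposition.

def dot(r, c):
    """Dot product of two sequences of equal length."""
    return sum(x * y for x, y in zip(r, c))

def mat_mul(A, B):
    """Entry (i, j) of AB is the dot product of row i of A with column j of B."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    cols = list(zip(*B))
    return [[dot(row, col) for col in cols] for row in A]

def transpose(A):
    """Rows of the result are the columns of A."""
    return [list(col) for col in zip(*A)]

A = [[2, -1], [0, 3], [-5, 1], [7, -4]]
B = [[3, 1, 0], [-2, 4, 10]]
print(mat_mul(A, B))
# [[8, -2, -10], [-6, 12, 30], [-17, -1, 10], [29, -9, -40]]

# The reversal rule (AB)^T = B^T A^T, checked on this pair:
print(transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A)))  # True
```

Note that B^T A^T is a 3-by-4 product of a 3-by-2 with a 2-by-4 matrix, which is exactly why the order of the factors must reverse.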

1.2 Matrix Multiplication and Systems of Linear Equations1.2 Matrix Multiplication and Systems of Linear Equations1.2.1 Several interpretations of matrix multiplicationIn the previous section we saw what is required (in terms of matrix dimensions) in orderto be able to produce the product AB of two matrices A and B, and we saw how to producethis product. There are several useful ways to conceptualize this product, and in this firstsub-section we will investigate them. We first make a definition.Definition 3: Let A1 , A2 , . . . , Ak be matrices all having the same dimensions. Foreach choice of real numbers c1 , . . . , ck , we callc1 A1 c2 A2 · · · ck Aka linear combination of the matrices A1 , . . . , Ak . The set of all such linear combinationsS : {c1 A1 c2 A2 · · · ck Ak c1 , . . . , ck R}is called the linear span (or simply span) of the matrices A1 , . . . , Ak . We sometimeswrite S span({A1 , . . . , Ak }).9

Here, now, are several different ways to think about the product AB of two appropriately-sized matrices A and B.

1. Block multiplication. This is the first of four descriptions of matrix multiplication, and it is the most general. In fact, each of the three that follow is a special case of this one.

   Any matrix (table) may be separated into blocks (or submatrices) via horizontal and vertical lines. We first investigate the meaning of matrix multiplication at the block level when the left-hand factor of the matrix product AB has been subdivided using only vertical lines, while the right-hand factor has correspondingly been blocked using only horizontal lines.

   Example 2:

   Suppose the 3-by-5 matrix A is subdivided by vertical lines into three blocks,

       A = [ A1 | A2 | A3 ]

   (note how we have named the three blocks found in A!), and the 5-by-4 matrix B is correspondingly subdivided by horizontal lines into blocks B1, B2, B3, with the rows in each B_i as numerous as the columns in the corresponding A_i. Then

       AB = A1 B1 + A2 B2 + A3 B3,

   and carrying out the three smaller products and summing them yields the same 3-by-4 matrix as multiplying A and B directly.

While we were trying to keep things simple in the previous example by drawing only vertical lines in A, the number and locations of those vertical lines was somewhat

arbitrary. Once we chose how to subdivide A, however, the horizontal lines in B had to be drawn to create blocks with rows as numerous as the columns in the blocks of A.

Now, suppose we subdivide the left factor with both horizontal and vertical lines. Say that

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ A_{31} & A_{32} \end{bmatrix}.$$

Where the vertical line is drawn in A continues to dictate where a horizontal line must be drawn in the right-hand factor B. On the other hand, if we draw in any vertical lines to create blocks in the right-hand factor B, they can go anywhere, paying no heed to where the horizontal lines appear in A. Say that

$$B = \begin{bmatrix} B_{11} & B_{12} & B_{13} & B_{14} \\ B_{21} & B_{22} & B_{23} & B_{24} \end{bmatrix}.$$

Then

$$AB = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ A_{31} & A_{32} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} & B_{13} & B_{14} \\ B_{21} & B_{22} & B_{23} & B_{24} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} & A_{11}B_{13} + A_{12}B_{23} & A_{11}B_{14} + A_{12}B_{24} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} & A_{21}B_{13} + A_{22}B_{23} & A_{21}B_{14} + A_{22}B_{24} \\ A_{31}B_{11} + A_{32}B_{21} & A_{31}B_{12} + A_{32}B_{22} & A_{31}B_{13} + A_{32}B_{23} & A_{31}B_{14} + A_{32}B_{24} \end{bmatrix}.$$

Example 3:

Suppose A, B are the same as in Example 2. Let's subdivide A in the following (arbitrarily chosen) fashion:

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}.$$

Given the position of the vertical divider in A, we must place a horizontal divider in B as shown below. Without any requirements on where vertical dividers appear, we choose (again arbitrarily) not to have any:

$$B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}.$$

Then

$$AB = \begin{bmatrix} A_{11}B_1 + A_{12}B_2 \\ A_{21}B_1 + A_{22}B_2 \end{bmatrix},$$

and evaluating the two block-rows reproduces the same product AB as in Example 2.

2. Sums of rank-one matrices. Now let us suppose that A has n columns and B has n rows. Suppose also that we block (as allowed for in the previous case above) A by column, one column per block, and correspondingly B by row:

$$A = \begin{bmatrix} A_1 & A_2 & \cdots & A_n \end{bmatrix} \qquad\text{and}\qquad B = \begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_n \end{bmatrix}.$$

Following Example 2, we get

$$AB = A_1 B_1 + A_2 B_2 + \cdots + A_n B_n = \sum_{j=1}^{n} A_j B_j. \qquad (1.1)$$

The only thing new here to say concerns the individual products A_j B_j themselves, in which the first factor A_j is a vector in R^m and the 2nd, B_j, is the transpose of a vector in R^p (for some m and p).

So, take u ∈ R^m and v ∈ R^p. Since u is m-by-1 and v^T is 1-by-p, the product u v^T, called the outer product of u and v, makes sense, yielding an m-by-p matrix.
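The decomposition (1.1) can be checked in plain Python: AB equals the sum of the outer products (column j of A)(row j of B). The small matrices below are illustrative choices, not the ones from the text's examples:

```python
# Checking (1.1): AB = sum over j of (column j of A)(row j of B).

def outer(u, v):
    """The outer product u v^T: every row is a multiple of v."""
    return [[x * y for y in v] for x in u]

def mat_mul(A, B):
    cols = list(zip(*B))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[2, -1], [0, 3], [-5, 1]]   # 3-by-2
B = [[3, 1, 0], [-2, 4, 10]]     # 2-by-3

total = [[0] * len(B[0]) for _ in A]
for j in range(len(B)):          # one rank-one term per column of A / row of B
    col_j = [row[j] for row in A]
    total = mat_add(total, outer(col_j, B[j]))

print(total == mat_mul(A, B))    # True
```

Each term outer(col_j, B[j]) is a 3-by-3 rank-one matrix, and only the sum of all of them recovers AB.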

Example 4:

Given u = (-1, 2, 1) and v = (3, 1, 1, 4), their vector outer product is

$$u v^T = \begin{bmatrix} -1 \\ 2 \\ 1 \end{bmatrix} \begin{bmatrix} 3 & 1 & 1 & 4 \end{bmatrix} = \begin{bmatrix} -3 & -1 & -1 & -4 \\ 6 & 2 & 2 & 8 \\ 3 & 1 & 1 & 4 \end{bmatrix}.$$

If you look carefully at the resulting outer product in the previous example, you will notice it has relatively simple structure: its 2nd through 4th columns are simply scalar multiples of the first, and the same may be said about the 2nd and 3rd rows in relation to the 1st row. Later in these notes, we will define the concept of the rank of a matrix. Vector outer products are always matrices of rank 1 and thus, by (1.1), every matrix product can be broken into the sum of rank-one matrices.

3. Linear combinations of columns of A. Suppose B has p columns, and we partition it in this fashion (notice that B_j represents the jth column of B instead of the jth row, as it meant above!):

$$B = \begin{bmatrix} B_1 & B_2 & \cdots & B_p \end{bmatrix}.$$

This partitioning by vertical lines of the right-hand factor in the matrix product AB does not place any constraints on how A is partitioned, and so we may write

$$AB = A \begin{bmatrix} B_1 & B_2 & \cdots & B_p \end{bmatrix} = \begin{bmatrix} AB_1 & AB_2 & \cdots & AB_p \end{bmatrix}.$$

That is, for each j = 1, 2, ..., p, the jth column of AB is obtained by left-multiplying the jth column of B by A.

Having made that observation, let us consider more carefully what happens when A (suppose it has n columns A_1, A_2, ..., A_n) multiplies a vector v ∈ R^n. (Note that each B_j is just such a vector.) Blocking A by columns, we have

$$Av = \begin{bmatrix} A_1 & A_2 & \cdots & A_n \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = v_1 A_1 + v_2 A_2 + \cdots + v_n A_n.$$

That is, the matrix-vector product Av is simply a linear combination of the columns of A, with the scalars multiplying these columns taken (in order, from top to bottom) from v. The implication for the matrix product AB is that each of its columns AB_j is a linear combination of the columns of A, with coefficients taken from the jth column of B.
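Viewpoint 3 can be checked directly in plain Python (a sketch; the vector v here is an illustrative choice): Av equals the linear combination v1*A1 + v2*A2 of the columns of A.

```python
# Av as a linear combination of the columns of A.

def mat_vec(A, v):
    """Left-multiplication of the vector v by the matrix A."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[2, -1], [0, 3], [-5, 1], [7, -4]]
v = [2, 5]

col1 = [row[0] for row in A]
col2 = [row[1] for row in A]
combo = [v[0] * c1 + v[1] * c2 for c1, c2 in zip(col1, col2)]

print(mat_vec(A, v))            # [-1, 15, -5, -6]
print(mat_vec(A, v) == combo)   # True
```

Both computations produce the same 4-vector: 2 times the first column of A plus 5 times the second.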

4. Linear combinations of rows of B. In the previous interpretation of matrix multiplication, we began with a partitioning of B via vertical lines. If, instead, we begin with a partitioning of A, a matrix with m rows, via horizontal lines, we get

$$AB = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_m \end{bmatrix} B = \begin{bmatrix} A_1 B \\ A_2 B \\ \vdots \\ A_m B \end{bmatrix}.$$

That is, the jth row of the matrix product AB is obtained from left-multiplying the entire matrix B by the jth row (considered as a submatrix) of A.

If A has n columns, then each A_j is a 1-by-n matrix. The effect of multiplying a 1-by-n matrix V by an n-by-p matrix B, using a blocking-by-row scheme for B, is

$$VB = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_n \end{bmatrix} = v_1 B_1 + v_2 B_2 + \cdots + v_n B_n,$$

a linear combination of the rows of B. Thus, for each j = 1, ..., m, the jth row A_j B of the matrix product AB is a linear combination of the rows of B, with coefficients taken from the jth row of A.

1.2.2 Systems of linear equations

Motivated by Viewpoint 3 concerning matrix multiplication (in particular, that

$$Ax = x_1 A_1 + x_2 A_2 + \cdots + x_n A_n,$$

where A_1, ..., A_n are the columns of a matrix A and x = (x_1, ..., x_n) ∈ R^n), we make the following definition.

Definition 4: Suppose A = [A_1  A_2  ···  A_n], where each submatrix A_j consists of a single column (so A has n columns in all). The set of all possible linear combinations of these columns (also known as span({A_1, ..., A_n})),

$$\{c_1 A_1 + c_2 A_2 + \cdots + c_n A_n : c_1, c_2, \ldots, c_n \in \mathbb{R}\},$$

is called the column space of A. We use the symbol col(A) to denote the column space.
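Viewpoint 4 can likewise be checked in plain Python (a sketch with illustrative matrices): each row of AB is a linear combination of the rows of B, with coefficients drawn from the corresponding row of A.

```python
# Row j of AB as a linear combination of the rows of B.

def mat_mul(A, B):
    cols = list(zip(*B))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in A]

A = [[2, -1], [0, 3]]
B = [[3, 1, 0], [-2, 4, 10]]

# Row 1 of AB should equal 2*(row 1 of B) + (-1)*(row 2 of B):
row0 = [2 * b1 + (-1) * b2 for b1, b2 in zip(B[0], B[1])]
print(row0)                      # [8, -2, -10]
print(mat_mul(A, B)[0] == row0)  # True
```

The coefficients 2 and -1 are exactly the entries of the first row of A.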

The most common problem in linear algebra (and the one we seek in this course to understand most completely) is that of solving m linear equations

$$\begin{aligned} a_{11} x_1 + a_{12} x_2 + \ldots + a_{1n} x_n &= b_1 \\ a_{21} x_1 + a_{22} x_2 + \ldots + a_{2n} x_n &= b_2 \\ &\ \ \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \ldots + a_{mn} x_n &= b_m \end{aligned} \qquad (1.2)$$

in the n unknowns x_1, ..., x_n. If one uses the coefficients and unknowns to build a coefficient matrix and vectors

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad\text{and}\qquad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix},$$

then by our definitions of matrix equality and multiplication, the system (1.2) may be expressed more concisely as the matrix equation

$$Ax = b, \qquad (1.3)$$

where the vector b is known and x is to be found. Given Viewpoint 3 for conceptualizing matrix multiplication above, problem (1.3) really presents two questions to be answered:

(I) Is b in the column space of A (i.e., is (1.3) solvable)?

(II) If it is, then what are the possible n-tuples x = (x_1, ..., x_n) of coefficients so that the linear combination

$$x_1 A_1 + x_2 A_2 + \cdots + x_n A_n$$

of the columns of A equals b?

When the number m of equations and the number n of unknowns in system (1.2) are equal, it is often the case that there is one unique answer for each of the variables x_i (or, equivalently, one unique vector x satisfying (1.3)). Our main goal in the linear algebra component of this course is to understand completely when (1.3) is and is not solvable, how to characterize solutions when it is, and what to do when it is not.

One special instance of the case m = n is when A is nonsingular. In this case, if A^(-1) is known, then the answer to question (I) is an immediate "yes". Moreover, one may obtain the (unique) solution of (1.3) (thus answering question (II)) via left-multiplication by A^(-1):

$$Ax = b \quad\Rightarrow\quad A^{-1}Ax = A^{-1}b \quad\Rightarrow\quad Ix = A^{-1}b \quad\Rightarrow\quad x = A^{-1}b.$$
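The derivation above can be carried out concretely for a small system. The sketch below is plain Python (exact arithmetic via the standard-library Fraction type stands in for Sage's QQ), and the 2-by-2 system is an illustrative choice; the closed-form 2-by-2 inverse used here is the adjugate formula:

```python
from fractions import Fraction

# Solving Ax = b by left-multiplying with A^(-1), for a 2-by-2 system:
#   x1 + 2*x2 = 5,   3*x1 + 4*x2 = 11.

def inv2(M):
    """Inverse of a 2-by-2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    assert det != 0, "A must be nonsingular"
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2], [3, 4]]
b = [5, 11]
x = mat_vec(inv2(A), b)          # x = A^(-1) b
print(x == [1, 2])               # True: x1 = 1, x2 = 2
print(mat_vec(A, x) == b)        # True: the solution checks out
```

As the text cautions, this is conceptually tidy but computationally the wrong tool in practice; Gaussian elimination (Section 1.4) is the workhorse.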

Important Note: One should not think about the previous matrix-algebraic steps in terms of dividing by a matrix (and it is complete nonsense to talk about dividing by a vector!). One speaks, instead, of multiplying by the inverse matrix, when that exists. It is, moreover, extremely important to pay attention to which side of an expression you wish to multiply by that inverse. Often placing it on the wrong side yields a nonsensical mathematical expression!

In practical settings, however, A^(-1) must first be found (if, indeed, it exists!) before we can use it to solve the matrix problem. Despite the availability of the matrix method inverse() in Sage, finding the inverse of a matrix is a very inefficient thing to do computationally, and quite impossible when A^(-1) does not exist. In Section 1.4 we will look at Gaussian elimination as a procedure for solving linear systems of equations. Gaussian elimination serves as a foundation for the LU factorization, which supplies us with a comprehensive method for solving Ax = b whenever the matrix problem can be solved (even in cases where A^(-1) does not exist).

1.3 Affine transformations of R^2

Suppose A is an m-by-n matrix. When we left-multiply a vector v ∈ R^n by such a matrix A, the result is a vector Av ∈ R^m. In this section we will focus upon functions which take inputs v ∈ R^n and produce outputs Av ∈ R^m. A function such as this could be given a name, but we will generally avoid doing so, referring to it as "the function v ↦ Av". When we wish to be explicit about the type of objects the input and output are, we might write "(v ↦ Av) : R^n → R^m", which points out that the function v ↦ Av maps objects from R^n (inputs) to objects from R^m (outputs). But if the reader is informed that A is an m-by-n matrix, he should already be aware that inputs/outputs to and from the function v ↦ Av are in R^n and R^m, respectively.

In this subsection A will be understood to be a 2-by-2 matrix.
Assuming this, it is the case that (v ↦ Av) : R^2 → R^2. We wish to focus our attention on the action of such a function on the entire plane of vectors for various types of 2-by-2 matrices A.

1. Rotations of the plane. Our first special family of matrices are those of the form

$$A = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix}, \qquad (1.4)$$

for α ∈ R. We know that points in the plane may be specified using polar coordinates, so any vector v ∈ R^2 may be expressed as v = (r cos θ, r sin θ), where (r, θ) is a polar representation of the terminal point of v. To see the action of A on a typical v,

note that

$$Av = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} r\cos\theta \\ r\sin\theta \end{bmatrix} = r \begin{bmatrix} \cos\alpha\cos\theta - \sin\alpha\sin\theta \\ \sin\alpha\cos\theta + \cos\alpha\sin\theta \end{bmatrix} = \begin{bmatrix} r\cos(\alpha + \theta) \\ r\sin(\alpha + \theta) \end{bmatrix},$$

where we have employed several angle sum formulas in the last equality. That is, for an input vector v with terminal point (r, θ), the output Av is a vector with terminal point (r, α + θ). The output is the same distance r from the origin as the input, but has been rotated about the origin through an angle α. Thus, for matrices of the form (1.4), the function v ↦ Av rotates the entire plane counterclockwise (for positive α) about the origin through an angle α. Of course, the inverse matrix would reverse this process, and hence it must be

$$A^{-1} = \begin{bmatrix} \cos(-\alpha) & -\sin(-\alpha) \\ \sin(-\alpha) & \cos(-\alpha) \end{bmatrix} = \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix}.$$

2. Reflections across a line containing the origin. First notice that, when

$$A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},$$

then

$$Av = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} v_1 \\ -v_2 \end{bmatrix}.$$

Thus, for this special matrix A, v ↦ Av maps points in the plane to their reflections through the x-axis.

Now let u = (cos θ, sin θ) (i.e., u is a unit vector). Every line in the plane containing the origin may be expressed as a one-parameter family L = {tu : t ∈ R} of multiples of u where θ has been chosen (fixed
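Both families of matrices above can be checked numerically. The sketch below is plain Python with illustrative input vectors: rotating (1, 0) through π/2 should give (0, 1), and reflecting (3, 4) across the x-axis should give (3, -4).

```python
from math import cos, sin, pi, isclose

# The rotation matrix (1.4) and the x-axis reflection matrix in action.

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def rotation(alpha):
    """The matrix (1.4): counterclockwise rotation through alpha."""
    return [[cos(alpha), -sin(alpha)],
            [sin(alpha),  cos(alpha)]]

REFLECT_X = [[1, 0], [0, -1]]   # reflection through the x-axis

w = mat_vec(rotation(pi / 2), [1, 0])
print(isclose(w[0], 0, abs_tol=1e-12), isclose(w[1], 1))   # True True

print(mat_vec(REFLECT_X, [3, 4]))                           # [3, -4]
print(mat_vec(REFLECT_X, mat_vec(REFLECT_X, [3, 4])))       # [3, 4]
```

The last line illustrates that the reflection is its own inverse: applying it twice returns the original vector, just as rotating through -α undoes rotating through α.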
