Scientific Computing: Solving Linear Systems

Scientific Computing: Solving Linear Systems
Aleksandar Donev, Courant Institute, NYU
donev@courant.nyu.edu
Course MATH-GA.2043 or CSCI-GA.2112, Fall 2020
September 17th and 24th, 2020 (Lecture III)

Outline

1. Linear Algebra Background
2. Conditioning of linear systems
3. Gauss elimination and LU factorization
4. Beyond GEM
   - Symmetric Positive-Definite Matrices
5. Overdetermined Linear Systems
6. Sparse Matrices
7. Conclusions

Linear Algebra Background: Linear Spaces

- A vector space $\mathcal{V}$ is a set of elements called vectors $x \in \mathcal{V}$ that may be multiplied by a scalar $c$ and added, e.g., $z = \alpha x + \beta y$.
- I will denote scalars with lowercase letters and vectors with lowercase bold letters.
- Prominent examples of vector spaces are $\mathbb{R}^n$ (or more generally $\mathbb{C}^n$), but there are many others, for example, the set of polynomials in $x$.
- A subspace $\mathcal{V}' \subseteq \mathcal{V}$ of a vector space is a subset such that sums and multiples of elements of $\mathcal{V}'$ remain in $\mathcal{V}'$ (i.e., it is closed).
- An example is the set of vectors $x \in \mathbb{R}^3$ such that $x_3 = 0$.

Linear Algebra Background: Image Space

- Consider a set of $n$ vectors $a_1, a_2, \cdots, a_n \in \mathbb{R}^m$ and form a matrix by putting these vectors as columns:
  $A = [a_1 \; a_2 \; \cdots \; a_n] \in \mathbb{R}^{m,n}.$
- I will denote matrices with bold capital letters, and sometimes write $A = [m, n]$ to indicate dimensions.
- The matrix-vector product is defined as a linear combination of the columns:
  $b = Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n \in \mathbb{R}^m.$
- The image $\operatorname{im}(A)$ or range $\operatorname{range}(A)$ of a matrix is the subspace of all linear combinations of its columns, i.e., the set of all such $b$'s.
- It is also sometimes called the column space of the matrix.
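
To make the column-combination view concrete, here is a minimal MATLAB sketch (the matrix and vector are arbitrary illustrative values):

A = [1 2; 3 4; 5 6];        % A is 3x2, columns a1 and a2
x = [10; -1];
b = A*x;                    % built-in matrix-vector product
b_cols = x(1)*A(:,1) + x(2)*A(:,2);  % x1*a1 + x2*a2
disp(norm(b - b_cols));     % 0 up to roundoff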

Linear Algebra Background: Dimension

- The set of vectors $a_1, a_2, \cdots, a_n$ are linearly independent or form a basis for $\mathbb{R}^m$ if $b = Ax = 0$ implies that $x = 0$.
- The dimension $r = \dim \mathcal{V}$ of a vector (sub)space $\mathcal{V}$ is the number of elements in a basis. This is a property of $\mathcal{V}$ itself and not of the basis; for example, $\dim \mathbb{R}^n = n$.
- Given a basis $A$ for a vector space $\mathcal{V}$ of dimension $n$, every vector $b \in \mathcal{V}$ can be uniquely represented as the vector of coefficients $x$ in that particular basis,
  $b = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n.$
- A simple and common basis for $\mathbb{R}^n$ is $\{e_1, \ldots, e_n\}$, where $e_k$ has all components zero except for a single 1 in position $k$.
- With this choice of basis the coefficients are simply the entries in the vector, $b = x$.

Linear Algebra Background: Kernel Space

- The dimension of the column space of a matrix is called the rank of the matrix $A \in \mathbb{R}^{m,n}$,
  $r = \operatorname{rank} A \le \min(m, n).$
- If $r = \min(m, n)$ then the matrix is of full rank.
- The nullspace $\operatorname{null}(A)$ or kernel $\ker(A)$ of a matrix $A$ is the subspace of vectors $x$ for which $Ax = 0$.
- The dimension of the nullspace is called the nullity of the matrix.
- For a basis $A$ the nullspace is $\operatorname{null}(A) = \{0\}$ and the nullity is zero.

Linear Algebra Background: Orthogonal Spaces

- An inner-product space is a vector space together with an inner or dot product, which must satisfy some properties.
- The standard dot-product in $\mathbb{R}^n$ is denoted with several different notations:
  $x \cdot y = (x, y) = \langle x, y \rangle = x^T y = \sum_{i=1}^n x_i y_i.$
- For $\mathbb{C}^n$ we need to add complex conjugates (here $\star$ denotes a complex conjugate transpose, or adjoint),
  $x \cdot y = x^\star y = \sum_{i=1}^n \bar{x}_i y_i.$
- Two vectors $x$ and $y$ are orthogonal if $x \cdot y = 0$.

Linear Algebra Background: Part I of Fundamental Theorem

- One of the most important theorems in linear algebra is that the sum of rank and nullity is equal to the number of columns: for $A \in \mathbb{R}^{m,n}$,
  $\operatorname{rank} A + \operatorname{nullity} A = n.$
- In addition to the range and kernel spaces of a matrix, two more important vector subspaces for a given matrix $A$ are:
  - The row space or coimage of a matrix, which is the column (image) space of its transpose, $\operatorname{im} A^T$. Its dimension is also equal to the rank.
  - The left nullspace or cokernel of a matrix, which is the nullspace or kernel of its transpose, $\ker A^T$.
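
A numerical spot-check of this theorem (a minimal sketch; the matrix is an arbitrary rank-deficient example):

A = [1 2 3; 2 4 6; 1 1 1];  % second row is twice the first, so rank is 2
r = rank(A);
N = null(A);                % orthonormal basis for the nullspace
nullity = size(N, 2);
disp(r + nullity);          % prints 3, the number of columns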

Linear Algebra Background: Part II of Fundamental Theorem

- The orthogonal complement $\mathcal{V}^\perp$ or orthogonal subspace of a subspace $\mathcal{V}$ is the set of all vectors that are orthogonal to every vector in $\mathcal{V}$.
- Let $\mathcal{V}$ be the set of vectors $x \in \mathbb{R}^3$ such that $x_3 = 0$. Then $\mathcal{V}^\perp$ is the set of all vectors with $x_1 = x_2 = 0$.
- Second fundamental theorem in linear algebra:
  $\operatorname{im} A^T = (\ker A)^\perp$ and $\ker A^T = (\operatorname{im} A)^\perp.$

Linear Algebra Background: Linear Transformation

- A function $L : \mathcal{V} \to \mathcal{W}$ mapping from a vector space $\mathcal{V}$ to a vector space $\mathcal{W}$ is a linear function or a linear transformation if
  $L(\alpha v) = \alpha L(v)$ and $L(v_1 + v_2) = L(v_1) + L(v_2).$
- Any linear transformation $L$ can be represented as a multiplication by a matrix $\mathbf{L}$: $L(v) = \mathbf{L}v$.
- For the common bases of $\mathcal{V} = \mathbb{R}^n$ and $\mathcal{W} = \mathbb{R}^m$, the product $w = \mathbf{L}v$ is simply the usual matrix-vector product,
  $w_i = \sum_{k=1}^n L_{ik} v_k,$
  which is simply the dot-product between the $i$-th row of the matrix and the vector $v$.

Linear Algebra Background: Matrix Algebra

$w_i = (\mathbf{L}v)_i = \sum_{k=1}^n L_{ik} v_k$

- The composition of two linear transformations $A = [m, p]$ and $B = [p, n]$ is a matrix-matrix product $C = AB = [m, n]$:
  $z = A(Bx) = Ay = (AB)x$
  $z_i = \sum_{k=1}^p A_{ik} y_k = \sum_{k=1}^p A_{ik} \left( \sum_{j=1}^n B_{kj} x_j \right) = \sum_{j=1}^n \left( \sum_{k=1}^p A_{ik} B_{kj} \right) x_j = \sum_{j=1}^n C_{ij} x_j$
  $C_{ij} = \sum_{k=1}^p A_{ik} B_{kj}$
- Matrix-matrix multiplication is not commutative, $AB \neq BA$ in general.
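
A small numerical check of both statements (the matrices are arbitrary illustrative values):

A = [0 1; 1 0];
B = [1 2; 3 4];
x = [1; 1];
disp(norm(A*(B*x) - (A*B)*x));  % composition: 0 up to roundoff
disp(isequal(A*B, B*A));        % prints 0 (false): AB ~= BA in general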

Linear Algebra Background: The Matrix Inverse

- A square matrix $A = [n, n]$ is invertible or nonsingular if there exists a matrix inverse $A^{-1} = B = [n, n]$ such that
  $AB = BA = I,$
  where $I$ is the identity matrix (ones along the diagonal, all the rest zeros).
- The following statements are equivalent for $A \in \mathbb{R}^{n,n}$:
  - $A$ is invertible.
  - $A$ is full-rank, $\operatorname{rank} A = n$.
  - The columns and also the rows are linearly independent and form a basis for $\mathbb{R}^n$.
  - The determinant is nonzero, $\det A \neq 0$.
  - Zero is not an eigenvalue of $A$.

Linear Algebra Background: Matrix Algebra

- Matrix-vector multiplication is just a special case of matrix-matrix multiplication. Note $x^T y$ is a scalar (dot product).
- $C(A + B) = CA + CB$ and $ABC = (AB)C = A(BC)$
- $(A^T)^T = A$ and $(AB)^T = B^T A^T$
- $(A^{-1})^{-1} = A$, $(AB)^{-1} = B^{-1} A^{-1}$, and $(A^T)^{-1} = (A^{-1})^T$
- Instead of matrix division, think of multiplication by an inverse:
  $AB = C \;\Rightarrow\; B = A^{-1}C \text{ and } A = CB^{-1}.$

Linear Algebra Background: Vector Norms

- Norms are the abstraction for the notion of a length or magnitude.
- For a vector $x \in \mathbb{R}^n$, the $p$-norm is
  $\|x\|_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}$
  and special cases of interest are:
  1. The 1-norm ($L_1$ norm or Manhattan distance), $\|x\|_1 = \sum_{i=1}^n |x_i|$.
  2. The 2-norm ($L_2$ norm, Euclidean distance), $\|x\|_2 = \sqrt{x \cdot x} = \sqrt{\sum_{i=1}^n x_i^2}$.
  3. The $\infty$-norm ($L_\infty$ or maximum norm), $\|x\|_\infty = \max_{1 \le i \le n} |x_i|$.
- Note that all of these norms are inter-related in a finite-dimensional setting.
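
A quick check of the three special cases in MATLAB (arbitrary example vector):

x = [3; -4; 0];
disp(norm(x, 1));    % 7 = |3| + |-4| + |0|
disp(norm(x, 2));    % 5 = sqrt(9 + 16)
disp(norm(x, Inf));  % 4 = max absolute entry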

Linear Algebra Background: Matrix Norms

- Matrix norm induced by a given vector norm:
  $\|A\| = \sup_{x \neq 0} \frac{\|Ax\|}{\|x\|} \;\Rightarrow\; \|Ax\| \le \|A\| \|x\|.$
- The last bound holds for matrices as well, $\|AB\| \le \|A\| \|B\|$.
- Special cases of interest are:
  1. The 1-norm or column sum norm, $\|A\|_1 = \max_j \sum_{i=1}^m |a_{ij}|$
  2. The $\infty$-norm or row sum norm, $\|A\|_\infty = \max_i \sum_{j=1}^n |a_{ij}|$
  3. The 2-norm or spectral norm, $\|A\|_2 = \sigma_1$ (largest singular value)
  4. The Euclidean or Frobenius norm, $\|A\|_F = \sqrt{\sum_{i,j} a_{ij}^2}$ (note this is not an induced norm)
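
A quick check of the four matrix norms in MATLAB (arbitrary example matrix):

A = [1 -2; 3 4];
disp(norm(A, 1));     % 6 = max column sum of absolute values
disp(norm(A, Inf));   % 7 = max row sum of absolute values
disp(norm(A, 2));     % largest singular value sigma_1
disp(norm(A, 'fro')); % sqrt(1 + 4 + 9 + 16) = sqrt(30)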

Conditioning of linear systems: Matrices and Linear Systems

- It is said that 70% or more of applied mathematics research involves solving systems of $m$ linear equations for $n$ unknowns:
  $\sum_{j=1}^n a_{ij} x_j = b_i, \quad i = 1, \cdots, m.$
- Linear systems arise directly from discrete models, e.g., traffic flow in a city. Or, they may come from representing more abstract linear operators in some finite basis (representation).
- Common abstraction: $Ax = b$.
- Special case: square invertible matrices, $m = n$, $\det A \neq 0$: $x = A^{-1}b$.
- The goal: calculate the solution $x$ given data $A$, $b$ in the most numerically stable and also efficient way.

Conditioning of linear systems: Stability Analysis

- Perturbations of the right hand side (rhs) only:
  $A(x + \delta x) = b + \delta b.$
  Since $Ax = b$, this gives $A\,\delta x = \delta b$, so $\delta x = A^{-1}\delta b$ and $\|\delta x\| \le \|A^{-1}\| \|\delta b\|$.
- Using the bound $\|b\| \le \|A\| \|x\|$, i.e., $\|x\| \ge \|b\| / \|A\|$, the relative error in the solution can be bounded by
  $\frac{\|\delta x\|}{\|x\|} \le \frac{\|A^{-1}\| \|\delta b\|}{\|x\|} \le \frac{\|A^{-1}\| \|\delta b\|}{\|b\| / \|A\|} = \kappa(A) \frac{\|\delta b\|}{\|b\|},$
  where the conditioning number $\kappa(A)$ depends on the matrix norm used:
  $\kappa(A) = \|A\| \|A^{-1}\| \ge 1.$

Conditioning of linear systems: Conditioning Number

- The full derivation, not given here, estimates the uncertainty or perturbation in the solution:
  $\frac{\|\delta x\|}{\|x\|} \le \frac{\kappa(A)}{1 - \kappa(A)\frac{\|\delta A\|}{\|A\|}} \left( \frac{\|\delta b\|}{\|b\|} + \frac{\|\delta A\|}{\|A\|} \right).$
- The worst-case conditioning of the linear system is determined by $\kappa(A)$.
- Best possible error with rounding unit $u \approx 10^{-16}$:
  $\frac{\|\delta x\|}{\|x\|} \lesssim 2u\,\kappa(A).$
- Solving an ill-conditioned system, $\kappa(A) \gg 1$ (e.g., $\kappa = 10^{15}$!), should only be done if something special is known.
- The conditioning number can only be estimated in practice since $A^{-1}$ is not available (see MATLAB's rcond function).
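
As an illustration, a small experiment with the classic ill-conditioned Hilbert matrix (hilb is built into MATLAB; the perturbation size is arbitrary):

A = hilb(10);                 % kappa(A) is about 1.6e13
x_true = ones(10, 1);
b = A*x_true;
db = 1e-12*randn(10, 1);      % tiny perturbation of the rhs
x = A \ (b + db);
disp(cond(A));                % the 2-norm conditioning number
disp(norm(x - x_true)/norm(x_true));  % relative error far above 1e-12
disp(rcond(A));               % cheap estimate of 1/kappa(A) in the 1-norm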

Gauss elimination and LU factorization: GEM (Eliminating x_1)

[This slide works through eliminating $x_1$ from a concrete system; the worked equations did not survive the transcription.]

Gauss elimination and LU factorization: GEM (Eliminating x_2)

[This slide continues the worked example by eliminating $x_2$; the equations did not survive the transcription.]

Gauss elimination and LU factorization: GEM (Backward Substitution)

[This slide works through backward substitution on the triangularized system; the equations did not survive the transcription.]

Gauss elimination and LU factorization: GEM as an LU Factorization Tool

- We have actually factorized $A$ as
  $A = LU,$
  where $L$ is unit lower triangular ($l_{ii} = 1$ on the diagonal) and $U$ is upper triangular.
- GEM is thus essentially the same as the LU factorization method.

Gauss elimination and LU factorization: GEM in MATLAB

% Sample MATLAB code (for learning purposes only, not for real use!)
function A = MyLU(A)
% LU factorization in place (overwrite A)
[n, m] = size(A);
if (n ~= m); error('Matrix not square'); end
for k = 1:(n-1)  % for variable x(k)
    % Calculate multipliers in column k:
    A((k+1):n, k) = A((k+1):n, k) / A(k, k);
    % Note: pivot element A(k,k) assumed nonzero!
    for j = (k+1):n
        % Eliminate variable x(k):
        A((k+1):n, j) = A((k+1):n, j) - A((k+1):n, k) * A(k, j);
    end
end
end
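
A quick usage sketch (assuming the function above is saved as MyLU.m on the MATLAB path; the example matrix is arbitrary and chosen so that no pivoting is needed):

A = [4 3 2; 2 4 1; 1 2 5];
F = MyLU(A);                % L and U packed into one matrix
L = tril(F, -1) + eye(3);   % unit lower triangular factor
U = triu(F);                % upper triangular factor
disp(norm(L*U - A));        % should be ~0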

Gauss elimination and LU factorization: Pivoting

[This slide illustrates row pivoting on a concrete example; the illustration did not survive the transcription.]

Gauss elimination and LU factorization: Pivoting during LU Factorization

- Partial (row) pivoting permutes the rows (equations) of $A$ in order to ensure sufficiently large pivots and thus numerical stability:
  $PA = LU.$
- Here $P$ is a permutation matrix, meaning a matrix obtained by permuting rows and/or columns of the identity matrix.
- Complete pivoting also permutes columns, $PAQ = LU$.
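
In MATLAB the permutation can be obtained explicitly from the three-output form of lu (a minimal sketch; the matrix is a contrived example with a tiny pivot):

A = [1e-10 1; 1 1];       % tiny (1,1) pivot forces a row swap
[L, U, P] = lu(A);        % P*A = L*U with partial pivoting
disp(P);                  % shows the rows of A were permuted
disp(norm(P*A - L*U));    % ~0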

Gauss elimination and LU factorization: Gauss Elimination Method (GEM)

- GEM is a general method for dense matrices and is commonly used.
- Implementing GEM efficiently and stably is difficult and we will not discuss it here, since others have done it for you!
- The LAPACK public-domain library is the main repository for excellent implementations of dense linear solvers.
- MATLAB uses a highly-optimized variant of GEM by default, mostly based on LAPACK.
- MATLAB does have specialized solvers for special cases of matrices, so always look at the help pages!

Gauss elimination and LU factorization: Solving Linear Systems

- Once an LU factorization is available, solving a linear system is simple:
  $Ax = LUx = L(Ux) = Ly = b,$
  so solve for $y$ using forward substitution. This was implicitly done in the example above by overwriting $b$ to become $y$ during the factorization.
- Then, solve for $x$ using backward substitution:
  $Ux = y.$
- If row pivoting is necessary, the same applies but $L$ or $U$ may be permuted upper/lower triangular matrices:
  $A = \tilde{L}U$, where $\tilde{L} = P^T L.$
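
A sketch of the two triangular solves, using the three-output pivoted lu so that the factors are genuinely triangular (arbitrary example matrix):

A = [4 3 2; 2 4 1; 1 2 5];
b = [1; 2; 3];
[L, U, P] = lu(A);
y = L \ (P*b);        % forward substitution (L is unit lower triangular)
x = U \ y;            % backward substitution
disp(norm(A*x - b));  % ~0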

Gauss elimination and LU factorization: In MATLAB

- In MATLAB, the backslash operator (see help on mldivide),
  $x = A \backslash b = A^{-1}b,$
  solves the linear system $Ax = b$ using the LAPACK library.
- Never use the matrix inverse to do this, even if written as such on paper.
- Doing x = A\b is equivalent to performing an LU factorization and doing two triangular solves (forward and backward substitution):
  $[\tilde{L}, U] = \operatorname{lu}(A)$
  $y = \tilde{L} \backslash b$
  $x = U \backslash y$
- This is a carefully implemented backward stable pivoted LU factorization, meaning that the returned solution is as accurate as the conditioning number allows.

Gauss elimination and LU factorization: GEM Matlab Example (1)

>> A = [1 2 3; 4 5 6; 7 8 0];
>> b = [2 1 -1]';
>> x = A^(-1)*b; x'   % Don't do this!
ans =
   -2.5556    2.1111    0.1111
>> x = A\b; x'        % Do this instead
ans =
   -2.5556    2.1111    0.1111
>> linsolve(A, b)'    % Even more control
ans =
   -2.5556    2.1111    0.1111

Gauss elimination and LU factorization: GEM Matlab Example (2)

>> [L, U] = lu(A)   % Even better if re-solving
L =
    0.1429    1.0000         0
    0.5714    0.5000    1.0000
    1.0000         0         0
U =
    7.0000    8.0000         0
         0    0.8571    3.0000
         0         0    4.5000
>> norm(L*U - A, inf)
ans =
     0
>> y = L\b; x = U\y; x'
ans =
   -2.5556    2.1111    0.1111

Gauss elimination and LU factorization: Backwards Stability

- Even though we cannot get $x$ exactly for ill-conditioned linear systems, we can still get an $x$ (not the one!) that solves the equation to almost machine precision.
- This sort of backward stability means that there is a problem nearby the original problem such that the answer we compute $\hat{x}$ is the solution of that "perturbed" problem,
  $(A + \delta A)\hat{x} = b + \delta b.$
- A backwards stable method gives a residual $r = Ax - b$ that is zero to within the rounding unit $u \approx 10^{-16}$:
  $\frac{\|Ax - b\|}{\|b\|} \approx \frac{\|Ax - b\|}{\|Ax\|} \sim u.$
- Observe that the conditioning number of the matrix does not enter here; it can be large!
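
Continuing the Hilbert-matrix example from the conditioning section, a sketch contrasting the tiny residual with the much larger forward error:

A = hilb(10);
x_true = ones(10, 1);
b = A*x_true;
x = A\b;                              % backward stable pivoted LU under the hood
disp(norm(A*x - b)/norm(b));          % residual: near the rounding unit u
disp(norm(x - x_true)/norm(x_true));  % forward error: roughly u*kappa(A), much larger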

Gauss elimination and LU factorization: Backwards Stability contd.

- Gaussian elimination with partial pivoting is almost always backwards stable in practice, but one can always check the residual after computing the answer (always good practice to confirm you solved the problem you thought you solved!).
- Specifically, if we compute the LU factorization we are guaranteed that
  $A + \delta A = LU$ where $\frac{\|\delta A\|}{\|A\|} \le Cu,$
  where $C$ is some modest constant that depends polynomially on the number of unknowns (not exponentially).
- Complete pivoting is rarely used in practice because it is expensive, even though it will give better guarantees.

Gauss elimination and LU factorization: Cost Estimates for GEM

- For forward or backward substitution, at step $k$ there are $(n - k)$ multiplications and subtractions, plus a few divisions.
- The total over all $n$ steps is
  $\sum_{k=1}^n (n - k) = \frac{n(n-1)}{2} \approx \frac{n^2}{2}$
  subtractions and multiplications, giving a total of $O(n^2)$ floating-point operations (FLOPs).
- The LU factorization itself costs a lot more, $O(n^3)$:
  $\text{FLOPS} \approx \frac{2n^3}{3},$
  and the triangular solves are negligible for large systems.
- When many linear systems need to be solved with the same $A$ the factorization can be reused.
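
A sketch of amortizing the factorization over many right hand sides (sizes and data are arbitrary):

n = 500;
A = rand(n);                % arbitrary dense example
[L, U, P] = lu(A);          % factor once: O(n^3) work
for k = 1:10
    b = rand(n, 1);         % a new right hand side each time
    x = U \ (L \ (P*b));    % two triangular solves: O(n^2) each
end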

Beyond GEM: Matrix Rescaling and Reordering

- Pivoting is not always sufficient to ensure lack of roundoff problems. In particular, large variations among the entries in $A$ should be avoided.
- This can usually be remedied by changing the physical units for $x$ and $b$ to be the natural units $x_0$ and $b_0$.
- Rescaling the unknowns and the equations is generally a good idea even if not necessary:
  $x = D_x \tilde{x} = \operatorname{Diag}\{x_0\}\tilde{x}$ and $b = D_b \tilde{b} = \operatorname{Diag}\{b_0\}\tilde{b}.$
  $Ax = AD_x \tilde{x} = D_b \tilde{b} \;\Rightarrow\; \left( D_b^{-1} A D_x \right) \tilde{x} = \tilde{b}$
- The rescaled matrix $\tilde{A} = D_b^{-1} A D_x$ should have a better conditioning.
- Also note that reordering the variables from most important to least important may also help.
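
A minimal sketch of the diagonal rescaling; the natural scales x0 and b0 below are hypothetical placeholders, since in practice they come from the physical units of the problem:

A = [1e6 2; 3 4e-6];             % badly scaled example
x0 = [1e-3; 1e3];                % hypothetical natural scales of the unknowns
b0 = [1e3; 1e-3];                % hypothetical natural scales of the equations
At = diag(1./b0) * A * diag(x0); % rescaled matrix D_b^{-1} A D_x
disp([cond(A), cond(At)]);       % conditioning improves dramatically here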

Beyond GEM: Efficiency of Solution

$Ax = b$

- The most appropriate algorithm really depends on the properties of the matrix $A$:
  - General dense matrices, where the entries in $A$ are mostly non-zero and nothing special is known: use LU factorization.
  - Symmetric ($a_{ij} = a_{ji}$) and also positive-definite matrices.
  - General sparse matrices, where only a small fraction of $a_{ij} \neq 0$.
  - Special structured sparse matrices, arising from specific physical properties of the underlying system.
- It is also important to consider how many times a linear system with the same or related matrix or right hand side needs to be solved.

Beyond GEM / Symmetric Positive-Definite Matrices: Positive-Definite Matrices

- A real symmetric matrix $A$ is positive definite iff (if and only if):
  1. All of its eigenvalues are real (follows from symmetry) and positive.
  2. $\forall x \neq 0$, $x^T A x > 0$, i.e., the quadratic form defined by the matrix $A$ is convex.
  3. There exists a unique lower triangular $L$, $L_{ii} > 0$, with
     $A = LL^T,$
     termed the Cholesky factorization of $A$ (symmetric LU factorization).
- For Hermitian complex matrices just replace transposes with adjoints (conjugate transpose), e.g., $A^T \to A^\star$ (or $A^H$ in the book).

Beyond GEM / Symmetric Positive-Definite Matrices: Cholesky Factorization

- The MATLAB built-in function
  R = chol(A)
  gives the Cholesky factorization and is a good way to test for positive-definiteness.
- The cost of a Cholesky factorization is about half the cost of LU factorization, $n^3/3$ FLOPS.
- Solving linear systems is as for LU factorization, replacing $U$ with $L^T$.
- For Hermitian/symmetric matrices with positive diagonals MATLAB tries a Cholesky factorization first, before resorting to LU factorization with pivoting.
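
A minimal sketch of factoring and solving with chol; note that MATLAB's chol returns the upper triangular factor R with A = R'*R, and the matrix here is made positive definite by construction:

B = rand(4);
A = B'*B + 4*eye(4);   % symmetric positive definite by construction
[R, p] = chol(A);      % p == 0 signals positive-definiteness
b = ones(4, 1);
x = R \ (R' \ b);      % two triangular solves, since A = R'*R
disp(norm(A*x - b));   % ~0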

Beyond GEM / Symmetric Positive-Definite Matrices: Special Matrices in MATLAB

- MATLAB recognizes (i.e., tests for) some special matrices automatically: banded, permuted lower/upper triangular, symmetric, Hessenberg, but not sparse.
- In MATLAB one may specify a matrix $B$ instead of a single right-hand side vector $b$.
- The MATLAB function
  X = linsolve(A, B, opts)
  allows one to specify certain properties that speed up the solution (triangular, upper Hessenberg, symmetric, positive definite, none), and also estimates the condition number along the way.
- Use linsolve instead of backslash if you know (for sure!) something about your matrix.
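
A sketch of passing structure hints via the opts structure (SYM and POSDEF are documented linsolve options; the matrix is the same kind of contrived SPD example as above):

B = rand(4);
A = B'*B + 4*eye(4);   % symmetric positive definite
b = ones(4, 1);
opts.SYM = true;
opts.POSDEF = true;    % tells linsolve it may use a Cholesky-based solve
x = linsolve(A, b, opts);
disp(norm(A*x - b));   % ~0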
