Iterative Methods For Sparse Linear Systems

Iterative Methods for Sparse Linear Systems
Yousef Saad

Copyright © 2000 by Yousef Saad.
SECOND EDITION WITH CORRECTIONS. JANUARY 3RD, 2000.

PREFACE
  Acknowledgments
  Suggestions for Teaching

1 BACKGROUND IN LINEAR ALGEBRA
  1.1 Matrices
  1.2 Square Matrices and Eigenvalues
  1.3 Types of Matrices
  1.4 Vector Inner Products and Norms
  1.5 Matrix Norms
  1.6 Subspaces, Range, and Kernel
  1.7 Orthogonal Vectors and Subspaces
  1.8 Canonical Forms of Matrices
    1.8.1 Reduction to the Diagonal Form
    1.8.2 The Jordan Canonical Form
    1.8.3 The Schur Canonical Form
    1.8.4 Application to Powers of Matrices
  1.9 Normal and Hermitian Matrices
    1.9.1 Normal Matrices
    1.9.2 Hermitian Matrices
  1.10 Nonnegative Matrices, M-Matrices
  1.11 Positive-Definite Matrices
  1.12 Projection Operators
    1.12.1 Range and Null Space of a Projector
    1.12.2 Matrix Representations
    1.12.3 Orthogonal and Oblique Projectors
    1.12.4 Properties of Orthogonal Projectors
  1.13 Basic Concepts in Linear Systems
    1.13.1 Existence of a Solution
    1.13.2 Perturbation Analysis
  Exercises and Notes

2 DISCRETIZATION OF PDES
  2.1 Partial Differential Equations
    2.1.1 Elliptic Operators
    2.1.2 The Convection Diffusion Equation
  2.2 Finite Difference Methods
    2.2.1 Basic Approximations
    2.2.2 Difference Schemes for the Laplacean Operator
    2.2.3 Finite Differences for 1-D Problems
    2.2.4 Upwind Schemes
    2.2.5 Finite Differences for 2-D Problems
  2.3 The Finite Element Method
  2.4 Mesh Generation and Refinement
  2.5 Finite Volume Method
  Exercises and Notes

3 SPARSE MATRICES
  3.1 Introduction
  3.2 Graph Representations
    3.2.1 Graphs and Adjacency Graphs
    3.2.2 Graphs of PDE Matrices
  3.3 Permutations and Reorderings
    3.3.1 Basic Concepts
    3.3.2 Relations with the Adjacency Graph
    3.3.3 Common Reorderings
    3.3.4 Irreducibility
  3.4 Storage Schemes
  3.5 Basic Sparse Matrix Operations
  3.6 Sparse Direct Solution Methods
  3.7 Test Problems
  Exercises and Notes

4 BASIC ITERATIVE METHODS
  4.1 Jacobi, Gauss-Seidel, and SOR
    4.1.1 Block Relaxation Schemes
    4.1.2 Iteration Matrices and Preconditioning
  4.2 Convergence
    4.2.1 General Convergence Result
    4.2.2 Regular Splittings
    4.2.3 Diagonally Dominant Matrices
    4.2.4 Symmetric Positive Definite Matrices
    4.2.5 Property A and Consistent Orderings
  4.3 Alternating Direction Methods
  Exercises and Notes

5 PROJECTION METHODS
  5.1 Basic Definitions and Algorithms
    5.1.1 General Projection Methods
    5.1.2 Matrix Representation
  5.2 General Theory
    5.2.1 Two Optimality Results
    5.2.2 Interpretation in Terms of Projectors
    5.2.3 General Error Bound
  5.3 One-Dimensional Projection Processes
    5.3.1 Steepest Descent
    5.3.2 Minimal Residual (MR) Iteration
    5.3.3 Residual Norm Steepest Descent
  5.4 Additive and Multiplicative Processes
  Exercises and Notes

6 KRYLOV SUBSPACE METHODS – PART I
  6.1 Introduction
  6.2 Krylov Subspaces
  6.3 Arnoldi's Method
    6.3.1 The Basic Algorithm
    6.3.2 Practical Implementations
  6.4 Arnoldi's Method for Linear Systems (FOM)
    6.4.1 Variation 1: Restarted FOM
    6.4.2 Variation 2: IOM and DIOM
  6.5 GMRES
    6.5.1 The Basic GMRES Algorithm
    6.5.2 The Householder Version
    6.5.3 Practical Implementation Issues
    6.5.4 Breakdown of GMRES
    6.5.5 Relations between FOM and GMRES
    6.5.6 Variation 1: Restarting
    6.5.7 Variation 2: Truncated GMRES Versions
  6.6 The Symmetric Lanczos Algorithm
    6.6.1 The Algorithm
    6.6.2 Relation with Orthogonal Polynomials
  6.7 The Conjugate Gradient Algorithm
    6.7.1 Derivation and Theory
    6.7.2 Alternative Formulations
    6.7.3 Eigenvalue Estimates from the CG Coefficients
  6.8 The Conjugate Residual Method
  6.9 GCR, ORTHOMIN, and ORTHODIR
  6.10 The Faber-Manteuffel Theorem
  6.11 Convergence Analysis
    6.11.1 Real Chebyshev Polynomials
    6.11.2 Complex Chebyshev Polynomials
    6.11.3 Convergence of the CG Algorithm
    6.11.4 Convergence of GMRES
  6.12 Block Krylov Methods
  Exercises and Notes

7 KRYLOV SUBSPACE METHODS – PART II
  7.1 Lanczos Biorthogonalization
    7.1.1 The Algorithm
    7.1.2 Practical Implementations
  7.2 The Lanczos Algorithm for Linear Systems
  7.3 The BCG and QMR Algorithms
    7.3.1 The Biconjugate Gradient Algorithm
    7.3.2 Quasi-Minimal Residual Algorithm
  7.4 Transpose-Free Variants
    7.4.1 Conjugate Gradient Squared
    7.4.2 BICGSTAB
    7.4.3 Transpose-Free QMR (TFQMR)
  Exercises and Notes

8 METHODS RELATED TO THE NORMAL EQUATIONS
  8.1 The Normal Equations
  8.2 Row Projection Methods
    8.2.1 Gauss-Seidel on the Normal Equations
    8.2.2 Cimmino's Method
  8.3 Conjugate Gradient and Normal Equations
    8.3.1 CGNR
    8.3.2 CGNE
  8.4 Saddle-Point Problems
  Exercises and Notes

9 PRECONDITIONED ITERATIONS
  9.1 Introduction
  9.2 Preconditioned Conjugate Gradient
    9.2.1 Preserving Symmetry
    9.2.2 Efficient Implementations
  9.3 Preconditioned GMRES
    9.3.1 Left-Preconditioned GMRES
    9.3.2 Right-Preconditioned GMRES
    9.3.3 Split Preconditioning
    9.3.4 Comparison of Right and Left Preconditioning
  9.4 Flexible Variants
    9.4.1 Flexible GMRES
    9.4.2 DQGMRES
  9.5 Preconditioned CG for the Normal Equations
  9.6 The CGW Algorithm
  Exercises and Notes

10 PRECONDITIONING TECHNIQUES
  10.1 Introduction
  10.2 Jacobi, SOR, and SSOR Preconditioners
  10.3 ILU Factorization Preconditioners
    10.3.1 Incomplete LU Factorizations
    10.3.2 Zero Fill-in ILU (ILU(0))
    10.3.3 Level of Fill and ILU(p)
    10.3.4 Matrices with Regular Structure
    10.3.5 Modified ILU (MILU)
  10.4 Threshold Strategies and ILUT
    10.4.1 The ILUT Approach
    10.4.2 Analysis
    10.4.3 Implementation Details
    10.4.4 The ILUTP Approach
    10.4.5 The ILUS Approach
  10.5 Approximate Inverse Preconditioners
    10.5.1 Approximating the Inverse of a Sparse Matrix
    10.5.2 Global Iteration
    10.5.3 Column-Oriented Algorithms
    10.5.4 Theoretical Considerations
    10.5.5 Convergence of Self Preconditioned MR
    10.5.6 Factored Approximate Inverses
    10.5.7 Improving a Preconditioner
  10.6 Block Preconditioners
    10.6.1 Block-Tridiagonal Matrices
    10.6.2 General Matrices
  10.7 Preconditioners for the Normal Equations
    10.7.1 Jacobi, SOR, and Variants
    10.7.2 IC(0) for the Normal Equations
    10.7.3 Incomplete Gram-Schmidt and ILQ
  Exercises and Notes

11 PARALLEL IMPLEMENTATIONS
  11.1 Introduction
  11.2 Forms of Parallelism
    11.2.1 Multiple Functional Units
    11.2.2 Pipelining
    11.2.3 Vector Processors
    11.2.4 Multiprocessing and Distributed Computing
  11.3 Types of Parallel Architectures
    11.3.1 Shared Memory Computers
    11.3.2 Distributed Memory Architectures
  11.4 Types of Operations
    11.4.1 Preconditioned CG
    11.4.2 GMRES
    11.4.3 Vector Operations
    11.4.4 Reverse Communication
  11.5 Matrix-by-Vector Products
    11.5.1 The Case of Dense Matrices
    11.5.2 The CSR and CSC Formats
    11.5.3 Matvecs in the Diagonal Format
    11.5.4 The Ellpack-Itpack Format
    11.5.5 The Jagged Diagonal Format
    11.5.6 The Case of Distributed Sparse Matrices
  11.6 Standard Preconditioning Operations
    11.6.1 Parallelism in Forward Sweeps
    11.6.2 Level Scheduling: the Case of 5-Point Matrices
    11.6.3 Level Scheduling for Irregular Graphs
  Exercises and Notes

12 PARALLEL PRECONDITIONERS
  12.1 Introduction
  12.2 Block-Jacobi Preconditioners
  12.3 Polynomial Preconditioners
    12.3.1 Neumann Polynomials
    12.3.2 Chebyshev Polynomials
    12.3.3 Least-Squares Polynomials
    12.3.4 The Nonsymmetric Case
  12.4 Multicoloring
    12.4.1 Red-Black Ordering
    12.4.2 Solution of Red-Black Systems
    12.4.3 Multicoloring for General Sparse Matrices
  12.5 Multi-Elimination ILU
    12.5.1 Multi-Elimination
    12.5.2 ILUM
  12.6 Distributed ILU and SSOR
    12.6.1 Distributed Sparse Matrices
  12.7 Other Techniques
    12.7.1 Approximate Inverses
    12.7.2 Element-by-Element Techniques
    12.7.3 Parallel Row Projection Preconditioners
  Exercises and Notes

13 DOMAIN DECOMPOSITION METHODS
  13.1 Introduction
    13.1.1 Notation
    13.1.2 Types of Partitionings
    13.1.3 Types of Techniques
  13.2 Direct Solution and the Schur Complement
    13.2.1 Block Gaussian Elimination
    13.2.2 Properties of the Schur Complement
    13.2.3 Schur Complement for Vertex-Based Partitionings
    13.2.4 Schur Complement for Finite-Element Partitionings
  13.3 Schwarz Alternating Procedures
    13.3.1 Multiplicative Schwarz Procedure
    13.3.2 Multiplicative Schwarz Preconditioning
    13.3.3 Additive Schwarz Procedure
    13.3.4 Convergence
  13.4 Schur Complement Approaches
    13.4.1 Induced Preconditioners
    13.4.2 Probing
    13.4.3 Preconditioning Vertex-Based Schur Complements
  13.5 Full Matrix Methods
  13.6 Graph Partitioning
    13.6.1 Basic Definitions
    13.6.2 Geometric Approach
    13.6.3 Spectral Techniques
    13.6.4 Graph Theory Techniques
  Exercises and Notes

REFERENCES

INDEX


Iterative methods for solving general, large sparse linear systems have been gaining popularity in many areas of scientific computing. Until recently, direct solution methods were often preferred to iterative methods in real applications because of their robustness and predictable behavior. However, a number of efficient iterative solvers were discovered and the increased need for solving very large linear systems triggered a noticeable and rapid shift toward iterative techniques in many applications.

This trend can be traced back to the 1960s and 1970s when two important developments revolutionized solution methods for large linear systems. First was the realization that one can take advantage of "sparsity" to design special direct methods that can be quite economical. Initiated by electrical engineers, these "direct sparse solution methods" led to the development of reliable and efficient general-purpose direct solution software codes over the next three decades. Second was the emergence of preconditioned conjugate gradient-like methods for solving linear systems. It was found that the combination of preconditioning and Krylov subspace iterations could provide efficient and simple "general-purpose" procedures that could compete with direct solvers. Preconditioning involves exploiting ideas from sparse direct solvers. Gradually, iterative methods started to approach the quality of direct solvers. In earlier times, iterative methods were often special-purpose in nature. They were developed with certain applications in mind, and their efficiency relied on many problem-dependent parameters.

Now, three-dimensional models are commonplace and iterative methods are almost mandatory. The memory and the computational requirements for solving three-dimensional Partial Differential Equations, or two-dimensional ones involving many degrees of freedom per point, may seriously challenge the most efficient direct solvers available today.
Also, iterative methods are gaining ground because they are easier to implement efficiently on high-performance computers than direct methods.

My intention in writing this volume is to provide up-to-date coverage of iterative methods for solving large sparse linear systems. I focused the book on practical methods that work for general sparse matrices rather than for any specific class of problems. It is indeed becoming important to embrace applications not necessarily governed by Partial Differential Equations, as these applications are on the rise. Apart from two recent volumes by Axelsson [15] and Hackbusch [116], few books on iterative methods have appeared since the excellent ones by Varga [213] and later Young [232]. Since then, researchers and practitioners have achieved remarkable progress in the development and use of effective iterative methods. Unfortunately, fewer elegant results have been discovered since the 1950s and 1960s. The field has moved in other directions. Methods have gained not only in efficiency but also in robustness and in generality. The traditional techniques which required

rather complicated procedures to determine optimal acceleration parameters have yielded to the parameter-free conjugate gradient class of methods.

The primary aim of this book is to describe some of the best techniques available today, from both preconditioners and accelerators. One of the aims of the book is to provide a good mix of theory and practice. It also addresses some of the current research issues such as parallel implementations and robust preconditioners. The emphasis is on Krylov subspace methods, currently the most practical and common group of techniques used in applications. Although there is a tutorial chapter that covers the discretization of Partial Differential Equations, the book is not biased toward any specific application area. Instead, the matrices are assumed to be general sparse, possibly irregularly structured.

The book has been structured in four distinct parts. The first part, Chapters 1 to 4, presents the basic tools. The second part, Chapters 5 to 8, presents projection methods and Krylov subspace techniques. The third part, Chapters 9 and 10, discusses preconditioning. The fourth part, Chapters 11 to 13, discusses parallel implementations and parallel algorithms.

I am grateful to a number of colleagues who proofread or reviewed different versions of the manuscript. Among them are Randy Bramley (University of Indiana at Bloomington), Xiao-Chuan Cai (University of Colorado at Boulder), Tony Chan (University of California at Los Angeles), Jane Cullum (IBM, Yorktown Heights), Alan Edelman (Massachusetts Institute of Technology), Paul Fischer (Brown University), David Keyes (Old Dominion University), Beresford Parlett (University of California at Berkeley), and Shang-Hua Teng (University of Minnesota). Their numerous comments, corrections, and encouragements were a highly appreciated contribution.
In particular, they helped improve the presentation considerably and prompted the addition of a number of topics missing from earlier versions.

This book evolved from several successive improvements of a set of lecture notes for the course "Iterative Methods for Linear Systems" which I taught at the University of Minnesota in the last few years. I apologize to those students who used the earlier error-laden and incomplete manuscripts. Their input and criticism contributed significantly to improving the manuscript. I also wish to thank those students at MIT (with Alan Edelman) and UCLA (with Tony Chan) who used this book in manuscript form and provided helpful feedback. My colleagues at the University of Minnesota, staff and faculty members, have helped in different ways. I wish to thank in particular Ahmed Sameh for his encouragements and for fostering a productive environment in the department. Finally, I am grateful to the National Science Foundation for their continued financial support of my research, part of which is represented in this work.

Yousef Saad

SUGGESTIONS FOR TEACHING

This book can be used as a text to teach a graduate-level course on iterative methods for linear systems. Selecting topics to teach depends on whether the course is taught in a mathematics department or a computer science (or engineering) department, and whether the course is over a semester or a quarter. Here are a few comments on the relevance of the topics in each chapter.

For a graduate course in a mathematics department, much of the material in Chapter 1 should be known already. For non-mathematics majors most of the chapter must be covered or reviewed to acquire a good background for later chapters. The important topics for the rest of the book are in Sections 1.8.1, 1.8.3, 1.8.4, 1.9, and 1.11. Section 1.12 is best treated at the beginning of Chapter 5. Chapter 2 is essentially independent from the rest and could be skipped altogether in a quarter course. One lecture on finite differences and the resulting matrices would be enough for a non-math course. Chapter 3 should make the student familiar with some implementation issues associated with iterative solution procedures for general sparse matrices. In a computer science or engineering department, this can be very relevant. For mathematicians, a mention of the graph theory aspects of sparse matrices and a few storage schemes may be sufficient. Most students at this level should be familiar with a few of the elementary relaxation techniques covered in Chapter 4. The convergence theory can be skipped for non-math majors. These methods are now often used as preconditioners and this may be the only motive for covering them.

Chapter 5 introduces key concepts and presents projection techniques in general terms. Non-mathematicians may wish to skip Section 5.2.3. Otherwise, it is recommended to start the theory section by going back to Section 1.12 on general definitions on projectors. Chapters 6 and 7 represent the heart of the matter.
It is recommended to describe the first algorithms carefully and put emphasis on the fact that they generalize the one-dimensional methods covered in Chapter 5. It is also important to stress the optimality properties of those methods in Chapter 6 and the fact that these follow immediately from the properties of projectors seen in Section 1.12. When covering the algorithms in Chapter 7, it is crucial to point out the main differences between them and those seen in Chapter 6. The variants such as CGS, BICGSTAB, and TFQMR can be covered in a short time, omitting details of the algebraic derivations or covering only one of the three. The class of methods based on the normal equation approach, i.e., Chapter 8, can be skipped in a math-oriented course, especially in the case of a quarter system. For a semester course, selected topics may be Sections 8.1, 8.2, and 8.4.

Currently, preconditioning is known to be the critical ingredient in the success of iterative methods in solving real-life problems. Therefore, at least some parts of Chapter 9 and Chapter 10 should be covered. Section 9.2 and (very briefly) 9.3 are recommended. From Chapter 10, discuss the basic ideas in Sections 10.1 through 10.3. The rest could be skipped in a quarter course.

Chapter 11 may be useful to present to computer science majors, but may be skimmed or skipped in a mathematics or an engineering course. Parts of Chapter 12 could be taught primarily to make the students aware of the importance of "alternative" preconditioners. Suggested selections are: 12.2, 12.4, and 12.7.2 (for engineers). Chapter 13 presents an important research area and is primarily geared to mathematics majors. Computer scientists or engineers may prefer to cover this material in less detail.

To make these suggestions more specific, the following two tables are offered as sample course outlines. Numbers refer to sections in the text. A semester course represents approximately 30 lectures of 75 minutes each whereas a quarter course is approximately 20 lectures of 75 minutes each. Different topics are selected for a mathematics course and a non-mathematics course.

Semester course

  Weeks     Mathematics                   Computer Science / Eng.
  1 – 3     1.9 – 1.13                    1.1 – 1.6 (Read)
            2.1 – 2.5                     1.7 – 1.13, 2.1 – 2.2
            3.1 – 3.3, 3.7                3.1 – 3.7
  4 – 6     4.1 – 4.3                     4.1 – 4.2
            5.1 – 5.4                     5.1 – 5.2.1
            6.1 – 6.3                     6.1 – 6.3
  7 – 9     6.4 – 6.7 (Except 6.5.2)      6.4 – 6.5 (Except 6.5.5)
            6.9 – 6.11                    6.7.1, 6.8 – 6.9, 6.11.3
            7.1 – 7.3                     7.1 – 7.3
  10 – 12   7.4.1; 7.4.2 – 7.4.3 (Read)   7.4.1; 7.4.2 – 7.4.3 (Read)
            8.1, 8.2, 8.4; 9.1 – 9.3      8.1 – 8.3; 9.1 – 9.3
            10.1 – 10.3                   10.1 – 10.4
  13 – 15   10.5.1 – 10.5.6               10.5.1 – 10.5.4
            10.6; 12.2 – 12.4             11.1 – 11.4 (Read); 11.5 – 11.6
            13.1 – 13.6                   12.1 – 12.2; 12.4 – 12.7

Quarter course

  Weeks     Mathematics                   Computer Science / Eng.
  1 – 2     1.9 – 1.13, 3.1 – 3.2         1.1 – 1.6 (Read); 3.1 – 3.7
            4.1 – 4.3                     4.1
  3 – 4     5.1 – 5.4                     5.1 – 5.2.1
            6.1 – 6.4                     6.1 – 6.3
  5 – 6     6.4 – 6.7 (Except 6.5.2)      6.4 – 6.5 (Except 6.5.5)
            6.11, 7.1 – 7.3               6.7.1, 6.11.3, 7.1 – 7.3
  7 – 8     7.4.1; 7.4.2 – 7.4.3 (Read)   7.4.1; 7.4.2 – 7.4.3 (Read)
            9.1 – 9.3; 10.1 – 10.3        9.1 – 9.3; 10.1 – 10.3
  9 – 10    10.6; 12.2 – 12.4             11.1 – 11.4 (Read); 11.5 – 11.6
            13.1 – 13.4                   12.1 – 12.2; 12.4 – 12.7

1 BACKGROUND IN LINEAR ALGEBRA

For the sake of generality, all vector spaces considered in this chapter are complex, unless otherwise stated. A complex n × m matrix A is an n × m array of complex numbers a_ij, i = 1, ..., n, j = 1, ..., m.

Cited by: 18757 · Publish Year: 2003 · Author: Y. Saad

Related Documents:

methods will always outperform iterative methods for sparse systems due to convergence uncertainty of iterative methods. However, as the size of systems under consideration has increased, iterative solvers have become more competitive due to the poor scalability of direct methods. Even the best sparse direct solvers require roughly O(n^1.4)

LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares. CHRISTOPHER C. PAIGE, McGill University, Canada, and MICHAEL A. SAUNDERS, Stanford University. An iterative method is given for solving Ax = b and min ||Ax − b||_2, where the matrix A is large and sparse.

ing sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented. KEYWORDS iterative methods, Monte Carlo methods, preconditioning, resilience, Richardson iteration, sparse approximate inverses, sparse linear systems 1 INTRODUCTION

Methods for large and sparse systems Rank-one updating with Sherman-Morrison Iterative refinement Fixed-point and stationary methods – Introduction – Iterative refinement as a stationary method – Gauss-Seidel and Jacobi methods – Successive over-relaxation (SOR)
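To make the stationary methods listed above concrete, here is a minimal sketch (not taken from any of the excerpted texts) of one Jacobi sweep and one Gauss-Seidel sweep for Ax = b; the 3×3 system below is a hypothetical diagonally dominant example chosen so both sweeps converge.

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: x_new[i] = (b[i] - sum_{j != i} A[i][j]*x[j]) / A[i][i]."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: same formula, but uses updated entries immediately."""
    n = len(b)
    x = list(x)
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# Hypothetical diagonally dominant test system; exact solution is [1, 1, 1].
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]

x = [0.0, 0.0, 0.0]
for _ in range(50):
    x = gauss_seidel_step(A, b, x)
# x is now very close to [1.0, 1.0, 1.0]
```

SOR differs only in that each Gauss-Seidel update is blended with the previous iterate through a relaxation parameter.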


Iterative Methods for Solving Linear Systems Iterative methods formally yield the solution x of a linear system after an infinite number of steps. At each step they require the computation of the . In the case of large sparse matrices, as discussed in Section 3.9, direct

Strategies of solving sparse linear systems

- Iterative methods (e.g., Krylov, multigrid, ...): A is not changed (read-only); key kernel: sparse matrix-vector multiply, easier to optimize and parallelize; low algorithmic complexity, but may not converge.
- Direct methods: A is modified (factorized): A = L*U; harder to optimize and parallelize; numerically robust, but higher .
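The "key kernel" named above, a sparse matrix-vector product, can be sketched in a few lines for the CSR (compressed sparse row) layout. The array names data/indices/indptr follow common convention and are not tied to any particular library.

```python
def csr_matvec(n_rows, data, indices, indptr, x):
    """y = A @ x, with A stored in CSR form: row i's nonzeros live in
    data[indptr[i]:indptr[i+1]], at column positions indices[k]."""
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Small illustrative matrix:
# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]]
data    = [2.0, 1.0, 3.0, 4.0, 5.0]
indices = [0, 2, 1, 0, 2]
indptr  = [0, 2, 3, 5]

y = csr_matvec(3, data, indices, indptr, [1.0, 1.0, 1.0])
# y == [3.0, 3.0, 9.0]
```

Note that the work is proportional to the number of nonzeros, not to n^2, which is exactly why this kernel is "easy to optimize and parallelize" (each output row is independent).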





3. A bilevel iterative scheme. Given a general sparse linear system (3.1) Ax = b, where A ∈ R^{n×n} is nonsingular. As discussed in the introduction, many preconditioners are either applied or aim for a symmetric or symmetric positive definite system, since for these we have short recurrences in Krylov subspace methods like the conjugate gradient .
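The "short recurrence" the excerpt refers to is the heart of the conjugate gradient method: each step needs only the previous residual and search direction. Below is a hedged sketch for a symmetric positive definite A, written with dense lists for simplicity; the tiny 2×2 system at the end is a hypothetical example, not from the excerpt.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, tol=1e-10, maxiter=100):
    """CG for SPD A: only the last residual r and direction p are kept."""
    n = len(b)
    x = [0.0] * n
    r = list(b)           # r = b - A x with x = 0
    p = list(r)           # initial search direction
    rs = dot(r, r)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]   # short recurrence
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # SPD example
b = [1.0, 2.0]
x = conjugate_gradient(A, b)   # exact solution is [1/11, 7/11]
```

In exact arithmetic CG on this 2×2 system terminates in two steps, illustrating why SPD systems are so attractive as preconditioning targets.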

sparse linear systems. Given that linear systems are widespread [75, 74] across different areas of research, providing accurate and efficient solution methods plays a critical role for scientists in these fields. There are two classes of linear systems: sparse and dense. The systems in which most of the elements are zero are known as sparse systems.

2.1 Solving Systems of Linear Equations. Large sparse systems of linear equations of the form Ax = b often arise in science and engineering problems. An example is the mathematical modelling of physical systems, such as climate modelling, over discretized domains. The numerical solution methods for linear sys-

Linear Systems of Equations: Iterative Methods. [Sparsity-pattern illustration] Sparse (large) vs. full-bandwidth systems (frequent in practice). Example of iteration equation: analogous to iterative methods obtained for roots of equations, i.e. Open Methods: Fixed-poin

Solving sparse triangular systems of linear equations is a performance bottleneck in many methods for solving more general sparse systems. Both for direct methods and for many iterative preconditioners, it is used to solve the system or improve an approximate solution, often across many iterations. Solving tri-
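The triangular solve described above is just forward substitution applied to a sparse lower-triangular factor. A minimal sketch in CSR form follows (illustrative only; it assumes, as many codes arrange, that each row stores its diagonal entry last).

```python
def sparse_lower_solve(data, indices, indptr, b):
    """Forward substitution L y = b for lower-triangular L in CSR form.
    Assumption: the diagonal entry is the last nonzero stored in each row."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = b[i]
        # subtract contributions of already-computed unknowns (strictly lower part)
        for k in range(indptr[i], indptr[i + 1] - 1):
            s -= data[k] * y[indices[k]]
        diag = data[indptr[i + 1] - 1]   # diagonal, stored last by assumption
        y[i] = s / diag
    return y

# L = [[2, 0, 0],
#      [1, 3, 0],
#      [0, 4, 5]]
data    = [2.0, 1.0, 3.0, 4.0, 5.0]
indices = [0, 0, 1, 1, 2]
indptr  = [0, 1, 3, 5]

y = sparse_lower_solve(data, indices, indptr, [2.0, 4.0, 14.0])
# y == [1.0, 1.0, 2.0]
```

The loop-carried dependence (row i needs y from earlier rows) is precisely why this kernel is hard to parallelize, and why it shows up as the bottleneck the excerpt describes.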

a key cost, and thereby a system performance bottleneck in many large SpMV computations. C. TAMU Sparse Matrix Collection The TAMU Sparse Matrix Suite Collection [5], is the largest, and the most diverse representation suite of sparse matrices available. It is an actively growing set of sparse matrices that arise in real applications.

approach based on a sparse mixture of sparse Gaussian graphical models (GGMs). Unlike existing fused- and group-lasso-based approaches, each task is represented by a sparse mixture of sparse GGMs, and can handle multi-modalities. We develop a variational inference algorithm combined with a novel sparse mixture weight selection algorithm. To handle

Iterative Methods (briefly). Why iterative methods? Direct solvers are great for dense matrices and can be made to avoid roundoff errors to a large degree. They can also be implemented very well on modern machines. Fill-in is a major problem for certain sparse matrices and leads to extreme memory requirements (e.g., three-d.
