A Chain Method for Preconditioned Iterative Linear Solvers for Power System Matrices


A Chain Method for Preconditioned Iterative Linear Solvers for Power System Matrices

Lisa L. Grant, Student Member, IEEE, Mariesa L. Crow, Fellow, IEEE, and Maggie X. Cheng, Member, IEEE

Abstract—Many power systems applications such as power flow and short-circuit analysis require very large sparse matrix computations. With the increase in reliance on our electric infrastructure, power systems are continually growing in size, creating greater computational complexity in solving these large linear systems within reasonable time. For sparse matrix applications, it is desirable to have an algorithm with low runtime complexity in terms of the number of nonzeros in the matrix. There have been several recent advances in computational methods in other fields that, if applied to power systems, could make real-time dynamic simulation a reality. Much work has been done for specific types of these problems where the system is symmetric and diagonally dominant, similar to the form of power system matrices. This paper details an expansion on the current work in fast linear solvers to develop power-system-specific methods that show potential for accurate solutions in $O(m \log^2 n)$ run times, where $n$ represents the number of nodes and $m$ represents the number of nonzeros in the power system matrix. This paper presents the simulation validation of a recently developed recursively solved iterative chain method for sparse matrices using a low-stretch spanning tree preconditioner.

Index Terms—Graph sparsification, linear system solution, matrix preconditioning, power system simulation.

I. INTRODUCTION

THERE are several power system simulations that require the solution of a large number of differential algebraic equations in real time. In the majority of power system computational problems, the system of equations to be solved typically depends on a (possibly time-varying) matrix solution. The system matrix $A$ in question is usually symmetric and sparse, but with randomly occurring off-diagonal nonzero elements. The most computationally intensive step in solving numerical power system problems is the solution of the system of linear equations. The majority of currently used power system linear solvers rely on direct methods derived from LU factorization [1]. Most recent approaches propose node ordering schemes to reduce fill-ins, elimination trees, assembly trees, and directed acyclic graphs as developments for more efficient LU factorization.
It has been conventional wisdom that direct methods will always outperform iterative methods for sparse systems due to the convergence uncertainty of iterative methods. However, as the size of systems under consideration has increased, iterative solvers have become more competitive due to the poor scalability of direct methods. Even the best sparse direct solvers require roughly $O(n^{1.4})$ floating point operations (where $n$ is the size of the matrix) [2]. Recently, power systems researchers have begun to investigate iterative solution methods as advances in preconditioners have begun to produce competitive computational runtimes and accuracy levels [3]. Chen et al. demonstrated the use of preconditioned iterative methods on DC and transient simulations for large-scale power delivery circuits with favorable results using the conjugate gradient algorithm [4]. A study was also performed to demonstrate the use of iterative techniques on the Nigerian 330 kV grid [5].

For sparse matrix applications, it is desirable to have an algorithm with a run time that is efficient in terms of the number of nonzero entries in the matrix. The density of the matrix $A$ determines how quickly these equations can be solved. A sparse matrix that is ordered and conditioned in a way that improves its spectral properties can allow a simple iterative method to efficiently solve even large systems in a short amount of time [6]. Practical implementation of iterative methods requires that the system matrix $A$ be preconditioned by a non-singular preconditioner matrix $M$ that closely approximates the system matrix, so that $M^{-1}$ approximates its inverse. The preconditioner matrix $M$ should require little computational cost to compute and update, and provide a computationally efficient solution to $My = b$, where $y$ is a near solution to $x$ in $Ax = b$. Effective preconditioners arise from incomplete LU factorization, a sparse approximate inverse to $A$, and stationary methods, among others. Recent advances have proposed an approach that exploits underlying structural properties to produce a chain of graphs which is used as an input to a recursive preconditioned Chebyshev iteration. This method has been shown to solve symmetric, diagonally dominant (SDD) systems in nearly-linear time [7]. Power system matrices closely mimic SDD matrices and are sparse with randomly occurring off-diagonal nonzero elements [8], [9]. Thus it is desirable to apply these new advances in nearly-linear solvers to the field of electric power systems. In this paper, we extend the developments in [8] and [10] to improve power system computation to nearly-linear run time. Specifically, we propose a computationally efficient preconditioner which reduces the condition number of the system, thereby improving convergence and decreasing runtime.
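To make the preconditioned-solve idea concrete, the following minimal sketch (illustrative data and a simple Jacobi preconditioner, not the paper's code) solves $Ax = b$ with SciPy's conjugate gradient method, supplying only the action of $M^{-1}$ on a vector:

```python
# A minimal sketch of a preconditioned iterative solve in SciPy. The solver
# never forms M^{-1}A explicitly; it only needs the action of M^{-1} on a
# vector, here the Jacobi (diagonal) preconditioner M = diag(A).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

M_inv = spla.LinearOperator((n, n), matvec=lambda r: r / A.diagonal())
x, info = spla.cg(A, b, M=M_inv)           # M supplies the preconditioning
print(info, np.linalg.norm(b - A @ x))     # info == 0 signals convergence
```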

II. BACKGROUND

Undirected graphs $G_A = (V, E, \omega)$ are often used to represent matrices. One proposed approach to developing effective preconditioners for $G_A$ is to use the Laplacian of a subgraph $G_M$ that has substantially fewer edges than $G_A$ [9]. Graph theory has a natural approach to reducing the density of $A$ by removing trivial edges [8], [11]. Recent work has focused on using spanning trees as preconditioners, where a spanning tree is a subgraph of $A$ that is connected, contains all of the vertices of $A$, and has no cycles [12]–[15]. These approaches have led to the development of combinatorial preconditioning, where numerical linear algebra is merged with spectral graph theory [14], [16].

One recent approach is to add selected off-tree edges back into the spanning tree preconditioner using a random sampling scheme [17]. This is based on visualizing the graph as an electrical network where the edges are selected for the preconditioner using a sampling probability proportional to their effective resistance [18]. The method develops a spectral sparsifier for the system composed of the low-stretch spanning tree (LSST) of the graph plus a small number of off-tree edges. The LSST is a spanning tree that uses the concept of stretch to determine which edges are kept in the subgraph. This method achieved one of the first nearly-linear time algorithms, with a run time of $O(m \log^{15} n)$ [10].

A modification, called incremental graph sparsification, has been proposed, which shows the potential for accurate solutions in $O(m \log^2 n)$ run times [8]. The method was designed for imaging applications and contains a combinatorial component that uses graph-theoretic algorithms. There are two steps to the incremental graph sparsification technique developed in [8]. The first step generates a simplified system for use as the preconditioner for the second step, which performs a recursive iterative technique to solve the preconditioned linear system $M^{-1}Ax = M^{-1}b$. The proposed approach generates a preconditioner with a provably small condition number between the original graph $G_A$ and the approximate graph $G_{\hat{A}}$, preserving the spectral properties of the system. The approximate graph $G_{\hat{A}}$ contains a nearly-linear number of edges and $\alpha$-approximates the given graph for a constant $\alpha$. The condition number, given by (1), provides a measure of sensitivity to numerical operations and can be used to predict the convergence rate of iterative methods:

$$\kappa(M^{-1}A) = \frac{\lambda_{\max}(M^{-1}A)}{\lambda_{\min}(M^{-1}A)} \quad (1)$$

A small matrix example will be used throughout the paper to better convey the graphical interpretations of the proposed approach. An 8 × 8 matrix with its corresponding graph is shown in Fig. 1. This graph has 8 nodes or vertices $V$, denoted by the variable $n$, and 12 weighted edges $E$, denoted by $m$. For simplicity of notation, both the matrix representing the power system and its corresponding graph will be referred to as $A$, and the matrix preconditioner along with its corresponding graph will be referred to as $M$.

Fig. 1. Graph of matrix A with corresponding subgraph M.

The method explored in this project for fast linear solvers is based on [8] and can be viewed in three components, described in detail in the following subsections:
A) Developing the low stretch spanning tree.
B) Building a chain of sparse preconditioners.
C) Implementing the actual linear solver.
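Before turning to these components, a small numerical illustration of the condition number in (1) may help. The sketch below (hypothetical 4-node edge weights, not the paper's example) builds graph Laplacians for $A$ and a spanning-tree subgraph $M$ and compares their nonzero generalized eigenvalues; since Laplacians are singular on the constant vector, the pseudoinverse is used and the zero eigenvalue discarded:

```python
# Illustration of kappa(M^{-1}A) = lambda_max / lambda_min from (1),
# computed densely for clarity on a tiny hypothetical graph.
import numpy as np

def laplacian(n, weighted_edges):
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

A = laplacian(4, [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.0), (0, 2, 0.5)])
M = laplacian(4, [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0)])  # spanning-tree subgraph

eigs = np.linalg.eigvals(np.linalg.pinv(M) @ A)
nonzero = sorted(e.real for e in eigs if abs(e) > 1e-9)
print("kappa(M^-1 A) ~", nonzero[-1] / nonzero[0])
```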
A. Low Stretch Spanning Tree (LSST)

The preconditioner $M$ for the linear system is based on the low-stretch spanning tree of the matrix. To find the LSST, the system matrix $A$ is represented by a graph with $m$ edges, which correspond to the nonzero entries in the matrix. The size of the tree stretch is critical for constructing a good spanning tree preconditioner [12]. For a spanning tree to serve as an effective preconditioner, the off-tree edges must have an average stretch $\delta$ over the spanning tree, so that the spanning tree is an $O(\delta m)$-approximation of the graph [8]. There exists a unique 'detour' path in the tree between vertices $u$ and $v$ for every edge $e_{(u,v)}$. Stretch is defined as the distortion caused by the detour required by taking the tree path, where the stretch of an edge with respect to the tree $T$ is given by:

$$\text{stretch}_T(e) = \frac{\sum_{i=1}^{k} w(e_i)^{-1}}{w(e)^{-1}} \quad (2)$$

The denominator of (2) contains the distance between the vertices $(u, v)$ in the graph, and the numerator sums the distances of the $k$ edges $e_i$ along the unique path in the tree connecting vertices $(u, v)$, with the distance equaling the inverse of the edge weight, $d(e) = 1/w(e)$. If the stretch of an edge is small, then there are a significant number of alternative connections between its vertices; therefore, stretch is a measure of the importance of an edge in the graph network. A low-stretch spanning tree is bounded by a total stretch of $O(m \log n)$.
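A short sketch of (2) in Python with networkx follows; the tree edges and weights mirror the paper's $e_{(4,6)}$ worked example that appears below, and distances are taken as inverse weights:

```python
# Stretch of an off-tree edge per (2): tree-path distance between its
# endpoints divided by the direct edge distance (distance = 1/weight).
import networkx as nx

T = nx.Graph()
T.add_weighted_edges_from([(2, 3, 5.0), (3, 4, 6.0), (2, 6, 6.0)])  # tree edges

def stretch(T, u, v, w_uv):
    path = nx.shortest_path(T, u, v)      # the unique tree path u -> v
    tree_dist = sum(1.0 / T[a][b]["weight"] for a, b in zip(path, path[1:]))
    return tree_dist / (1.0 / w_uv)

print(stretch(T, 4, 6, 2.0))  # (1/6 + 1/5 + 1/6) / (1/2) = 1.0667
```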

The LSST is generated using the star-decomposition approach [20]. Star-decomposition recursively performs a graph decomposition to partition the vertices of the graph into clusters, which are connected in a star structure as shown in Fig. 2. A partition of the vertices $V$ is a set of pairwise disjoint subsets $\{V_1, V_2, \ldots, V_k\}$ of the vertices such that their union is $V$ [20]. The vertices of the graph are partitioned into sets by first defining a central ball $V_0$. Cones $V_1, \ldots, V_k$ are then grown sequentially from the nodes remaining in the graph. The ball, which is grown around the central vertex $x_0$, is connected to each outer cone set by a single bridge edge $(x_k, y_k)$. Once all the nodes in the graph are assigned to a set in the star-decomposition, the algorithm is repeated recursively: a new star-decomposition is performed on each individual set $V_0$ through $V_k$. The recursion is complete when the size of each set is reduced to at most 2 vertices. A spanning tree is formed containing all of the bridge edges that connect the sets. A trade-off exists between metrics, such as the radius of the graph and the number of edges cut, in determining the size of each set to ensure a complete cover of the graph.

Fig. 2. Star-decomposition [20].

The low-stretch spanning tree for the 8 × 8 example matrix is shown in Fig. 3, with the tree edges highlighted by the dashed green lines. Edges in the graph that do not belong to the tree are referred to as off-tree edges. To compute the stretch for an off-tree edge $e_{(4,6)}$, the distances of the edges connecting nodes 4 and 6 along the tree are summed and then divided by the distance of the edge directly between nodes 4 and 6:

$$\text{stretch}_T(e_{(4,6)}) = \frac{w(e_{3,4})^{-1} + w(e_{2,3})^{-1} + w(e_{2,6})^{-1}}{w(e_{4,6})^{-1}} = \frac{1/6 + 1/5 + 1/6}{1/2} \approx 1.067$$

Fig. 3. Example of the low stretch spanning tree for an 8 × 8 matrix.

B. Chain of Preconditioners

In the preconditioner phase, a chain of progressively smaller graphs $C = \{A_1, M_1, A_2, M_2, \ldots, A_d\}$ is created based on the low-stretch spanning tree of the matrix. The chain-building phase alternates between a sparsification routine to form the preconditioners $M_i$ from the matrices $A_i$, followed by a greedy elimination step to create smaller subgraphs $A_{i+1}$ from the preconditioners $M_i$ [8], [9].

1) Incremental Sparsify: The preconditioner $M_i$ needs to be a good approximation to $A_i$ and able to be computed in linear time. The sparsification portion of the algorithm, depicted in Fig. 4, generates less dense subgraphs by adding a subset of off-tree edges from $A_i$ back to the LSST in order to form the preconditioner $M_i$. In sparsification, the number of edges in the graph $A$ needs to be significantly reduced while preserving a good approximation to the original graph. Adding the 'optimal' off-tree edges back to the LSST produces a graph with a better condition number than $A_i$. To achieve this, the incremental sparsification algorithm generates a special graph $\hat{A}_i$ from $A_i$. The weights of all tree edges are scaled up by a factor of $\kappa$, creating a bound on the condition number of the augmented preconditioned matrix $M_i^{-1}A_i$. The scaled version of the tree $T_\kappa$ is added back into the graph, generating $\hat{A}_i$, where the edges on the tree path are now heavier by a factor of $\kappa$. The weights of the off-tree edges in $\hat{A}_i$ remain unchanged. The stretch of the edges in $\hat{A}_i$ is now decreased by a factor of $\kappa$, and the total stretch of these edges becomes $t = O(m \log n / \kappa)$.

Over-sampling is performed on the off-tree edges of $\hat{A}_i$, where each off-tree edge has a probability of being chosen proportional to its scaled stretch value. The off-tree edges are sampled in a way that adds the most important off-tree edges back into the low-stretch tree preconditioner, based on upper bounds on stretch. The sampling scheme assigns each off-tree edge a sub-interval of the unit interval $[0, 1]$, with interval length corresponding to its probability $p_e$ [8]. A total of $O(t \log n)$ values are sampled at random from the interval $[0, 1]$, where $t = \sum_e p_e$ indicates the total stretch. A binary search is performed to find the edge interval corresponding to each random value. The chosen sampled off-tree edges are added to the tree $T$ to form the preconditioner $M_i$. If an off-tree edge is sampled multiple times, it is added in parallel and the weight is scaled accordingly.
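A sketch of this sampling step follows; the edge list and stretch values are illustrative stand-ins, since the actual values come from the LSST computation:

```python
# Over-sampling sketch: each off-tree edge receives a sub-interval of [0, 1]
# with length proportional to its probability p_e; uniform draws are located
# in the cumulative intervals by binary search (np.searchsorted).
import numpy as np

rng = np.random.default_rng(0)
off_tree_edges = [(1, 2), (1, 3), (3, 5), (4, 6), (5, 6)]   # illustrative
stretches = np.array([3.1, 2.4, 0.9, 1.1, 2.8])             # scaled stretches

p = stretches / stretches.sum()     # sampling probabilities
cdf = np.cumsum(p)                  # right endpoints of the sub-intervals
num_samples = 8                     # stands in for the O(t log n) draws
hits = np.searchsorted(cdf, rng.random(num_samples))

# Multiply-sampled edges are added in parallel, i.e., kept once with the
# weight scaled by their sampling multiplicity.
counts = np.bincount(hits, minlength=len(off_tree_edges))
print([(e, int(c)) for e, c in zip(off_tree_edges, counts) if c > 0])
```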
In Fig. 4, the off-tree edges $e_{(1,2)}$, $e_{(1,3)}$, and $e_{(5,6)}$ are added to the tree, forming preconditioner $M_i$ with new scaled weight values. The density of the graph is reduced not only by removing edges $e_{(3,5)}$ and $e_{(4,6)}$, but also by the overall reduction in edge weight. This completes the first level $i = 1$ of the chain by producing $A_1$ and $M_1$. The next chain level $i + 1 = 2$ is initiated by performing greedy elimination on the preconditioner.

2) Greedy Elimination: The incremental sparsifier $M_i$ is a natural choice for preconditioner, and reducing the size of $M_i$ using greedy elimination produces the matrix $A_{i+1}$ for the next chain level [8]. To reduce the matrix size using greedy elimination, all nodes in $M_i$ with degree 1 are removed from the graph, and any remaining degree-2 nodes are replaced by a new single edge of combined weight $w$. The matrix reduction can be achieved by applying a partial Cholesky factorization of the first $k$ variables to the modified matrix $M_i$:

$$M_i = L_i \begin{bmatrix} I & 0 \\ 0 & A_{i+1} \end{bmatrix} L_i^T \quad (3)$$
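A dense numerical sketch of the reduction in (3) follows. Eliminating the first $k$ variables leaves the Schur complement as $A_{i+1}$; the matrix values below are illustrative, not the paper's example. In the actual algorithm only degree-1 and degree-2 nodes are eliminated, so the leading block is trivially cheap to factor:

```python
# Partial elimination per (3): after eliminating the first k variables of
# M_i, the trailing block that remains is the Schur complement A_{i+1}.
import numpy as np

def schur_complement(M, k):
    # Partition M = [[M11, M12], [M21, M22]] with M11 of size k x k;
    # the reduced matrix is M22 - M21 M11^{-1} M12.
    M11, M12 = M[:k, :k], M[:k, k:]
    M21, M22 = M[k:, :k], M[k:, k:]
    return M22 - M21 @ np.linalg.solve(M11, M12)

M = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  3.0, -1.0, -1.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0, -1.0, -1.0,  2.0]])
print(schur_complement(M, 1))  # A_{i+1} after eliminating the first variable
```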

The partial Cholesky factorization puts $M_i$ into the form in (3), where $I$ is the $k \times k$ identity matrix and $A_{i+1}$ is the Schur complement of $M_i$ with respect to the elimination of the first $k$ variables [8]. The new matrix $A_{i+1}$ is a reduced approximation of the original graph $A_i$. Fig. 5 contains an example of reducing the density of $A_i$ by applying greedy elimination to its preconditioner. In the greedy elimination example, the degree-1 nodes 4 and 8 are pruned from the preconditioner $M_i$ along with their connecting edges. Once all degree-1 nodes are removed, the degree-2 nodes are absorbed one by one, starting with node 3, where edges $e_{(1,3)}$ and $e_{(2,3)}$ are combined with their shared mutual edge $e_{(1,2)}$. Node 7 is the last remaining degree-2 node, which is also removed by combining its corresponding edges $e_{(5,7)}$ and $e_{(6,7)}$ with the shared edge $e_{(5,6)}$. The remaining graph $A_{i+1}$ has been reduced from size 8 × 8 to size 4 × 4 and will be used as a starting point for chain level $i = 2$. The process of generating smaller matrices and their preconditioners using incremental sparsification and greedy elimination is repeated until the matrix size is reduced to a desired point by certain stopping criteria. From both Figs. 4 and 5 it can be observed that the majority of the original low-stretch spanning tree edges, depicted by the dashed green lines, are retained as the matrix is reduced, while the off-tree edges are systematically removed by reducing their edge weights. Fig. 7 shows a complete cycle of the chain-building process used to produce the reduced matrices $A_i$ and their preconditioners $M_i$ for the example 8 × 8 system. For this illustration, the greedy elimination step has been modified to only remove a few nodes each cycle, since this is an abnormally small matrix, instead of eliminating all degree-1 and degree-2 nodes at once.

Fig. 4. Example of Incremental Sparsify algorithm.

Fig. 5. Example of Greedy Elimination.

Fig. 6. PCG iterative method.

C. Linear Solver

In the solve phase, a recursive iterative method is applied to each level $i$ in the chain of preconditioners. Iterative methods can be terminated as soon as they reach a reasonable solution, making them more scalable by requiring less computation than direct methods [17]. Recursion is the key to solving preconditioned linear systems quickly, since the preconditioner $M_i$ can be solved approximately. The preconditioned iterative method solves the system $M^{-1}Ax = M^{-1}b$. The matrices $M_i$ serve as preconditioners for the matrices $A_i$ in the recursive solver [29]. Instead of solving systems in $M_i$ directly, the systems are reduced to $A_{i+1}$ and solved recursively. A product of the form $M^{-1}z$ is equivalent to solving the system $My = z$ [8]. The function $f_{M_i}(z)$ produces approximate solutions to the systems in $M_i$, returning $M_i^{-1}z$ to the iterative method rather than requiring the preconditioner $M_i$ itself as input [29]. Details and pseudocode for the recursive solver can be found in [8].
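The following is a structural sketch only of this recursive solve phase: the chain layout, the restriction operator R (a stand-in for the partial-Cholesky forward/back substitution implied by (3)), and the damped Richardson loop (a stand-in for the preconditioned Chebyshev iteration) are all illustrative assumptions, not the algorithm of [8]:

```python
# Structure of the recursive solve: each level runs a preconditioned
# iteration on A_i, and every application of M_i^{-1} (the f_{M_i} above)
# is itself produced by recursing to the smaller system A_{i+1}.
import numpy as np

def chain_solve(chain, level, b, iters=8, tau=0.5):
    A = chain[level]["A"]
    if level == len(chain) - 1:
        return np.linalg.solve(A, b)       # smallest matrix: solve directly

    R = chain[level]["R"]                  # hypothetical n_{i+1} x n_i mapping

    def apply_M_inv(c):
        # f_{M_i}(c): reduce c, recurse on the smaller A_{i+1}, map back.
        return R.T @ chain_solve(chain, level + 1, R @ c, iters, tau)

    x = np.zeros_like(b)
    for _ in range(iters):                 # damped Richardson stand-in
        x = x + tau * apply_M_inv(b - A @ x)
    return x
```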

Fig. 7. Chain of preconditioners for the simple 8 × 8 example.

The preconditioning chain $C$ reduces the computation needed to solve the linear system. Most of the work in the iterative methods is performed on the chain levels with smaller matrices, levels $i = 2$ through $i = d$. The recursive method solves the system in $M_1$ by solving a linear system in $A_2$ [8]. The chain is invoked to solve systems on $A_2$ by recursively applying the preconditioned iterative method to it using the new preconditioner $M_2$, and so forth through the chain of progressively smaller matrices. The iterative method applied in this paper is the preconditioned Chebyshev iteration.

1) Preconditioned Conjugate Gradient: The conjugate gradient algorithm is suited for linear systems with symmetric positive definite and sparse matrices. It is one of the Krylov subspace methods, which generate a sequence of converging solutions $x_k$ that are an orthogonal projection of $b$ onto the Krylov subspace $K_n(r_0, A)$, where $r_0 = b - Ax_0$ is the initial residual [6]. A Krylov subspace is a linear combination of vectors, $K_n = \text{span}\{b, Ab, \ldots, A^{n-1}b\}$; the residual is orthogonal to the subspace, along with all previously generated residuals [21], [22]. The solution $x_k$ at each step is found using a linear combination of the previous search directions $p_k$ and the current residual. A graphical description of how the approximate solution moves from $x_k$ to $x_{k+1}$ is provided in Fig. 6 [2]. The conjugate relationship among the parameters reduces the number of vectors that must be stored in memory, since they can be derived from each other [21]. One advantage of the PCG method is that only one matrix-vector multiplication is needed per iteration in the computation of the current residual. The greatest advantage of PCG over other methods is that it converges in at most $n$ steps. The goal of iterative techniques is to reach the exact solution in the fewest number of iterations for a fast convergence rate.

2) Chebyshev Iteration: The Chebyshev iterative method is similar to PCG and was developed to avoid the need to compute inner products, which require considerable computation time for large systems. The trade-off of not needing to generate an inner product is that enough must be known about the system to generate an ellipse enclosing the spectrum of $A$ [23], [24]. A family of ellipses is defined by scalars $c$ and $d$ with a common center $d > 0$ and foci $d + c$ and $d - c$. The rate of convergence is determined by (4), where $a$ is the length of the $x$-axis of the ellipse:

$$r = \frac{a + \sqrt{a^2 - c^2}}{d + \sqrt{d^2 - c^2}} \quad (4)$$

For symmetric positive definite systems, the ellipse enclosing the spectrum of $A$ degenerates to the interval $[\lambda_{\min}, \lambda_{\max}]$ along the positive $x$-axis, where $\lambda_{\min}$ and $\lambda_{\max}$ are the smallest and largest eigenvalues of $M^{-1}A$ [19], [24]. Chebyshev has the same upper bound as PCG for symmetric matrices when $c$ and $d$ are computed from $\lambda_{\min}$ and $\lambda_{\max}$ [6].

At each chain level $i$, the chained Chebyshev method provides a product in the form $M^{-1}z$, which is equivalent to solving the system $My = z$, whereas the traditional preconditioned Chebyshev iteration must explicitly calculate the solution to $My = c$ [8]. The condition number of the $A_i$ matrix can indicate convergence speed for iterative methods like Chebyshev. If the condition number $\kappa(A)$ is large, then the convergence will be slow. Improvements to the speed can be achieved by proper preconditioning, which reduces the condition number to $\kappa(M_i, A_i)$ [25].
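A sketch of the preconditioned Chebyshev iteration follows, using the standard two-term recurrence (after Saad's formulation, not the paper's chained variant). The callable apply_M_inv stands in for the recursive $f_{M_i}$ above, and eigenvalue bounds lam_min < lam_max on $M^{-1}A$ are assumed known:

```python
# Preconditioned Chebyshev iteration for SPD systems; no inner products of
# iterates are required, only the spectral bounds of M^{-1}A.
import numpy as np

def preconditioned_chebyshev(A, b, apply_M_inv, lam_min, lam_max,
                             tol=1e-5, max_iter=500):
    theta = 0.5 * (lam_max + lam_min)    # ellipse center, d in the text
    delta = 0.5 * (lam_max - lam_min)    # focal half-distance, c in the text
    sigma = theta / delta
    rho = 1.0 / sigma
    x = np.zeros_like(b)
    r = b - A @ x
    d = apply_M_inv(r) / theta           # first correction
    for _ in range(max_iter):
        x = x + d
        r = r - A @ d
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * apply_M_inv(r)
        rho = rho_new
    return x
```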
Chebyshev iteration is guaranteed to converge after $O(\sqrt{\kappa}\,\log(1/\varepsilon))$ iterations to an accuracy of $\|x - A^{+}b\|_A \le \varepsilon \|A^{+}b\|_A$, where the 2-norm condition number of $A$ can be computed by $\kappa = \kappa(A) = \lambda_{\max}/\lambda_{\min}$ [25].

TABLE I. CONDITION NUMBER REDUCTION AT EACH CHAIN LEVEL

Table I provides an example of the condition number reduction at each chain level for a 117 × 117 matrix. Note that the condition number of the preconditioned system ($M_i^{-1}A_i$) decreases rapidly to nearly unity, indicating very rapid convergence for the smaller levels in the chain.
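As a rough reading of this bound (the constants hidden by the big-O are ignored, so these numbers are indicative only), the iteration counts implied for a few condition numbers at the paper's tolerance $\varepsilon = 10^{-5}$ can be tabulated:

```python
# Indicative iteration counts from the O(sqrt(kappa) * log(1/eps)) bound;
# big-O constants are omitted, so treat these as orders of magnitude.
import math
eps = 1e-5
for kappa in (1.02, 10.0, 1e3, 1e6):
    print(f"kappa = {kappa:>9}: ~{math.ceil(math.sqrt(kappa) * math.log(1/eps))} iterations")
```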

III. THE LSST CHAINED PRECONDITIONER

Several matrices were used to demonstrate the capabilities of this method, including power networks from the University of Florida Sparse Matrix Collection and the Matpower library [26], [27].

TABLE II. TEST MATRIX STATISTICS

Statistics for the test matrices are presented in Table II. The table includes the number of edges in graph $A_1$, the number of nonzero values in the system matrix, the condition number $\kappa(A_1)$ of the matrix, and the density of the matrix. The density value provides a measure of the sparsity of the matrix and is computed by dividing the number of nonzeros by $n^2$, the total number of elements in the matrix. A very small density percentage indicates a very sparse system with a greater proportion of zero values to nonzeros.

Fig. 8 shows that the LSST preconditioner reduces the number of iterations required when compared to several popular preconditioning methods [6]. For this study, the following traditional preconditioners were compared to the method described in this paper (a code sketch of these constructions appears at the end of this section):
1) Jacobi: $M = \text{diag}(A)$
2) Tridiagonal: $M = \text{tridiag}(a_{i,i-1}, a_{i,i}, a_{i,i+1})$
3) SSOR: $M = (D + L)D^{-1}(D + L)^T$
4) Incomplete LU factorization (ILU): $M = \tilde{L}\tilde{L}^T$
5) LSST preconditioner: $M = M_1$
6) Chain Method: $C = \{A_1, M_1, A_2, M_2, \ldots, A_d\}$

Fig. 8. Number of iterations required for different preconditioning techniques using the PCG method.

The LSST preconditioner is the level-1 preconditioner matrix $M_1$ derived using the technique described in Section II-B1, but the solution is derived using the traditional PCG method without utilizing the chain sparsification. These results were included to show that the low-stretch tree serves as an efficient preconditioner by itself; but, as with all preconditioners, there is a trade-off between the time required to find the preconditioned matrices $M^{-1}A$ and $M^{-1}b$ and the number of iterations for convergence. For this reason, the proposed Chain Method uses a series of smaller preconditioners, and $M^{-1}A$ and $M^{-1}b$ are never explicitly found.
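The sketch below shows how several of the traditional preconditioners listed above can be assembled with SciPy. It is illustrative rather than the paper's setup; in particular, scipy.sparse.linalg.spilu is a general incomplete LU and stands in here for the paper's incomplete factorization $M = \tilde{L}\tilde{L}^T$:

```python
# Illustrative constructions of the comparison preconditioners in Fig. 8.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def jacobi_inv(A):
    # Action of M^{-1} for the Jacobi preconditioner M = diag(A).
    return sp.diags(1.0 / A.diagonal())

def tridiag(A):
    # Tridiagonal part of A: M = tridiag(a_{i,i-1}, a_{i,i}, a_{i,i+1}).
    return sp.diags([A.diagonal(-1), A.diagonal(0), A.diagonal(1)], [-1, 0, 1])

def ssor(A):
    # The paper's stated form: M = (D + L) D^{-1} (D + L)^T, L strictly lower.
    D = sp.diags(A.diagonal())
    L = sp.tril(A, k=-1)
    return ((D + L) @ sp.diags(1.0 / A.diagonal()) @ (D + L).T).tocsr()

def ilu_inv(A):
    # Incomplete LU factors; returns an operator applying M^{-1}.
    fac = spla.spilu(A.tocsc())
    return spla.LinearOperator(A.shape, matvec=fac.solve)
```

Any of these can be passed to the preconditioned CG sketch shown earlier via its M argument.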
A. The Chain Method of Preconditioners

Fast iterative linear solutions depend on a good choice of preconditioner that significantly reduces the condition number of the preconditioned system. It was shown that the LSST method provides such a matrix, but this is only one aspect of the iterative solver. One of the most often cited drawbacks of preconditioned iterative linear solvers is that the preconditioned system may not be significantly easier to solve than the original system. The proposed method alleviates this issue by introducing a multilevel hierarchy of chained matrices, which are solved recursively. One of the primary attributes of the Chain Method is the recursive solver. This method starts with the smallest matrix in the chain and uses its solution to predict the solution of the next larger matrix. Furthermore, at every chain level, rather than solving systems directly, recursion is used to produce approximate solutions. Therefore, the proposed Chain Method has several advantages over other iterative linear solvers. The advantages are summarized below:
1) The LSST preconditioner is an excellent preconditioner, as evidenced by the significant reduction in condition number of the preconditioned system and the sharp decrease in the number of iterations required for convergence (see Fig. 8).
2) The preconditioner $M$ is based on the low-stretch spanning tree and therefore captures physical properties of the underlying system.
3) The recursive solver avoids the requirement to explicitly solve the preconditioned system $M^{-1}A$.
4) The chained method of solving from small to increasingly larger matrices provides a reasonably accurate initial guess to the iteration, thus decreasing the number of iterations at each step.

172IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 33, NO. 1, JANUARY 2018TABLE IVSIMULATION RESULTSset to roughly reduce the system size by at least half for eachlevel. The second column of Table III contains the conditionnumber of the original matrix κ(A1 ), the third column displaysthe reduction in condition number achieved by applying the firstpreconditioner to the matrix κ(M1 1 A1 ), and the fourth columncontains the condition number when the system is reduced to thematrix of smallest size at chain level i d. The table indicatesthat the smallest chain level produces a condition number closeto 1 for all matrices, which is optimal for obtaining an accuratesolution very quickly because a condition number close to 1 indicates a well ordered system. Solutions for all matrices at thischain level generally converged with the first few iterations. Thepreconditioner is able to provide a better condition number evenat chain level i 1 by approximately an order of magnitude forthe larger test matrices exhibiting the most sparsity.Table IV contains simulation results including runtime of eachsegment of the algorithm (i.e., the set-up (time to find the LSSTand build the chain of preconditioners) and perform the PCGiteration). All time values are in seconds and the simulation wasrun on a Intel i-7 2.5 GHz processor with 12 GB of memory. Theerror is measured by the 2-norm of the residual r b Ax. Anerror threshold of e 10 5 was used as stopping criteria for theiterative methods. The number of iterations reported in the tableis the iterations required to solve chain level 1, where the matrixis of largest size. There is a finite limit imposed on the numberof iterations allowed for the smaller chain levels, since onlyan approximate solution is required at the smaller matrix chainlevels. Finding the LSST for each matrix consumes the mosttime. Fortunately the LSST would only needed to be computedonce in situations where only small updates are made to thesystem matrix without altering the structure of the matrix suchas during a Newton-Ra

