
2. Condition numbers

The condition number of an invertible matrix A is defined to be

    κ(A) = ‖A‖ ‖A⁻¹‖.

This quantity is always bigger than (or equal to) 1. We must use the same type of norm twice on the right-hand side of the above equation. Sometimes the notation is adjusted to make it clear which norm is being used; for example, if we use the infinity norm we might write

    κ∞(A) = ‖A‖∞ ‖A⁻¹‖∞.

Example 15
Use the norm indicated to calculate the condition number of the given matrices.

    (a) A = [ 2   3 ]   ; 1-norm.        (b) A = [ 2   3 ]   ; Euclidean norm.
            [ 1  -1 ]                            [ 1  -1 ]

    (c) B = [ 3  0  0 ]
            [ 0  4  0 ]   ; infinity norm.
            [ 0  0  2 ]

Solution

(a) ‖A‖₁ = max(2 + 1, 3 + 1) = 4, and

    A⁻¹ = (1/-5) [ -1  -3 ]  =  [ 1/5   3/5 ]
                 [ -1   2 ]     [ 1/5  -2/5 ]

so that ‖A⁻¹‖₁ = max(1/5 + 1/5, 3/5 + 2/5) = 1. Therefore

    κ₁(A) = ‖A‖₁ ‖A⁻¹‖₁ = 4 × 1 = 4.

(b) ‖A‖E = sqrt(2² + 3² + 1² + (-1)²) = sqrt(15). We can re-use A⁻¹ from above to see that

    ‖A⁻¹‖E = sqrt((1/5)² + (3/5)² + (1/5)² + (2/5)²) = sqrt(15/25) = sqrt(15)/5.

Therefore

    κE(A) = ‖A‖E ‖A⁻¹‖E = sqrt(15) × sqrt(15)/5 = 3.

(c) ‖B‖∞ = max(3, 4, 2) = 4, and

    B⁻¹ = [ 1/3  0    0   ]
          [ 0    1/4  0   ]
          [ 0    0    1/2 ]

so ‖B⁻¹‖∞ = max(1/3, 1/4, 1/2) = 1/2. Therefore

    κ∞(B) = ‖B‖∞ ‖B⁻¹‖∞ = 4 × 1/2 = 2.

HELM (2015): Workbook 30: Introduction to Numerical Methods
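These hand calculations are easy to check numerically. The following is a sketch in Python/numpy (not part of the workbook); `np.linalg.norm` with `ord=1` and `ord=np.inf` computes exactly the column-sum and row-sum norms used above.

```python
import numpy as np

# Checking Example 15: kappa = ||A|| ||A^(-1)|| with matching norms.
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
kappa_1 = np.linalg.norm(A, 1) * np.linalg.norm(np.linalg.inv(A), 1)

B = np.diag([3.0, 4.0, 2.0])
kappa_inf = np.linalg.norm(B, np.inf) * np.linalg.norm(np.linalg.inv(B), np.inf)

print(round(kappa_1, 10), round(kappa_inf, 10))  # 4.0 2.0
```

Note that `np.linalg.cond(A, 1)` returns the same quantity in one call.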

Task
Calculate the condition numbers of these matrices, using the norm indicated:

    A = [ 2   8 ]   (1-norm),        B = [ 3  6 ]   (infinity norm).
        [ 3  -1 ]                        [ 1  0 ]

Your solution

Answer

    A⁻¹ = (1/26) [ 1   8 ]   so  κ₁(A) = ‖A‖₁ ‖A⁻¹‖₁ = max(5, 9) × max(4/26, 10/26) = 9 × 10/26 = 45/13.
                 [ 3  -2 ]

    B⁻¹ = (1/6) [ 0   6 ]    so  κ∞(B) = ‖B‖∞ ‖B⁻¹‖∞ = max(9, 1) × max(1, 4/6) = 9 × 1 = 9.
                [ 1  -3 ]

Condition numbers and conditioning

As the name might suggest, the condition number gives us information regarding how well-conditioned a problem is. Consider this example:

    [ 1  10⁴ ] [ x1 ]   [ 10⁴ ]
    [ 1  -2  ] [ x2 ] = [ -1  ]

It is not hard to verify that the exact solution to this problem is

    x1 = 10000/10002 = 0.999800...,    x2 = 10001/10002 = 0.999900... .

Example 16
Using the 1-norm, find the condition number of

    A = [ 1  10⁴ ]
        [ 1  -2  ]

Solution
Firstly, ‖A‖₁ = 2 + 10⁴. Also

    A⁻¹ = (1/10002) [ 2  10⁴ ]
                    [ 1  -1  ]

so that ‖A⁻¹‖₁ = (1 + 10⁴)/10002. Hence

    κ₁(A) = ‖A‖₁ ‖A⁻¹‖₁ = (2 + 10⁴) × (1 + 10⁴)/10002 = 1 + 10⁴ = 10001.

HELM (2015): Section 30.4: Matrix Norms

The fact that this number is large is the indication that the problem involving A is an ill-conditioned one. Suppose we consider finding its solution by Gaussian elimination, using 3 significant figures throughout. Eliminating the non-zero in the bottom left corner gives

    [ 1  10⁴  ] [ x1 ]   [ 10⁴  ]
    [ 0  -10⁴ ] [ x2 ] = [ -10⁴ ]

which implies that x2 = 1 and x1 = 0. This is a poor approximation to the true solution, and partial pivoting will not help. We have altered the problem by a relatively tiny amount (that is, by neglecting the fourth significant figure) and the result [x1; x2] has changed by a large amount. In other words the problem is ill-conditioned.

One way that systems of equations can be made better conditioned is to fix things so that all the rows have largest elements that are about the same size. In the matrix

    A = [ 1  10⁴ ]
        [ 1  -2  ]

the first row's largest element is 10⁴, while the second row has largest element equal to 2. This is not a happy situation. If we divide the first equation through by 10⁴ then we have

    [ 10⁻⁴  1  ] [ x1 ]   [ 1  ]
    [ 1    -2  ] [ x2 ] = [ -1 ]

so that the top row has largest entry equal to 1, and the bottom row still has 2 as its largest entry. These two values are of comparable size.

The solution to this system was found via pivoting (using 3 significant figures) in the Section concerning Gaussian elimination to be x1 = x2 = 1, a pretty good approximation to the exact values. The matrix in this second version of the problem is much better conditioned.

Example 17
Using the 1-norm, find the condition number of

    A = [ 10⁻⁴  1  ]
        [ 1    -2  ]

Solution
The 1-norm of A is easily seen to be ‖A‖₁ = 3. We also need

    A⁻¹ = (1/(1 + 2×10⁻⁴)) [ 2   1     ]
                           [ 1  -10⁻⁴  ]

so that ‖A⁻¹‖₁ = 3/(1 + 2×10⁻⁴). Hence

    κ₁(A) = ‖A‖₁ ‖A⁻¹‖₁ = 9/(1 + 2×10⁻⁴) = 8.998 to 3 decimal places.

This condition number is much smaller than the earlier value of 10001, and this shows us that the second version of the system of equations is better conditioned.
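The improvement from row scaling can be confirmed numerically; `np.linalg.cond` computes the condition number directly, and below it is applied to the coefficient matrices of Examples 16 and 17 (a sketch, not workbook code).

```python
import numpy as np

# Condition numbers in the 1-norm before and after dividing the first row by 10^4.
A_before = np.array([[1.0, 1.0e4],
                     [1.0, -2.0]])
A_after = np.array([[1.0e-4, 1.0],
                    [1.0, -2.0]])
print(round(np.linalg.cond(A_before, 1), 4))  # 10001.0
print(round(np.linalg.cond(A_after, 1), 4))   # 8.9982
```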

Exercises

1. Calculate the indicated norm of the following matrices.

    (a) A = [ 2  2 ]   ; 1-norm.          (b) A = [ 2  2 ]   ; infinity norm.
            [ 1  3 ]                              [ 1  3 ]

    (c) B = [ 2  3 ]   ; Euclidean norm.
            [ 1  2 ]

    (d) C = [ 1  2  3 ]
            [ 1  5  6 ]   ; infinity norm.        (e) the same C; 1-norm.
            [ 2  1  3 ]

2. Use the norm indicated to calculate the condition number of the given matrices.

    (a) D = [ 4  2 ]   ; 1-norm.          (b) E = [ 1   5 ]   ; Euclidean norm.
            [ 6  0 ]                              [ -4  2 ]

    (c) F = [ 6  0  0 ]
            [ 0  4  0 ]   ; infinity norm.
            [ 0  0  1 ]

3. Why is it not sensible to ask what the condition number of [ 1  3 ] is?
                                                              [ 2  6 ]

4. Verify by direct multiplication that the stated matrix G⁻¹ is indeed the inverse of the given matrix G. Hence find the condition number of G using the 1-norm.

5. (a) Calculate the condition number (use any norm you choose) of the coefficient matrix of the system

        [ 1  10⁴ ] [ x1 ]   [ 1 ]
        [ 2  3   ] [ x2 ] = [ 3 ]

    and hence conclude that the problem as stated is ill-conditioned.

    (b) Multiply one of the equations through by a suitably chosen constant so as to make the system better conditioned. Calculate the condition number of the coefficient matrix in your new system of equations.

Answers

1. (a) ‖A‖₁ = max(2 + 1, 2 + 3) = 5.
   (b) ‖A‖∞ = max(2 + 2, 1 + 3) = 4.
   (c) ‖B‖E = sqrt(4 + 9 + 1 + 4) = sqrt(18).
   (d) ‖C‖∞ = max(1 + 2 + 3, 1 + 5 + 6, 2 + 1 + 3) = 12.
   (e) ‖C‖₁ = max(1 + 1 + 2, 2 + 5 + 1, 3 + 6 + 3) = 12.

2. (a) To work out the condition number we need to find

        D⁻¹ = (1/12) [ 0   2 ]
                     [ 6  -4 ]

    Given this, we work out the condition number as the product of two norms as follows:

        κ₁(D) = ‖D‖₁ ‖D⁻¹‖₁ = 10 × 1/2 = 5.

    (b) To work out the condition number we need to find

        E⁻¹ = (1/22) [ 2  -5 ]
                     [ 4   1 ]

    Given this, we work out the condition number as the product of two norms as follows:

        κE(E) = ‖E‖E ‖E⁻¹‖E = 6.782330 × 0.308288 = 2.090909

    to 6 decimal places.

    (c) Here

        F⁻¹ = [ 1/6  0    0 ]
              [ 0    1/4  0 ]
              [ 0    0    1 ]

    so that κ∞(F) = ‖F‖∞ ‖F⁻¹‖∞ = 6 × 1 = 6.

3. The matrix is not invertible.

Answers

4. Verification is done by a direct multiplication to show that

    GG⁻¹ = [ 1  0  0 ]
           [ 0  1  0 ]
           [ 0  0  1 ]

Using the 1-norm we find that κ₁(G) = ‖G‖₁ ‖G⁻¹‖₁ = 10 × 21/5 = 42.

5. (a) The inverse of the coefficient matrix is

        (1/19997) [ -3  10⁴ ]
                  [  2  -1  ]

    Using the 1-norm, the condition number of the coefficient matrix is

        (3 + 10⁴) × (1 + 10⁴)/19997 = 5002.75

    to 6 significant figures. This is a large condition number, and the given problem is not well-conditioned.

    (b) Now we multiply the top equation through by 10⁻⁴ so that the system of equations becomes

        [ 10⁻⁴  1 ] [ x1 ]   [ 10⁻⁴ ]
        [ 2     3 ] [ x2 ] = [ 3    ]

    and the inverse of this new coefficient matrix is

        (1/1.9997) [ -3   1     ]
                   [  2  -10⁻⁴  ]

    Using the 1-norm again, we find that the condition number of the new coefficient matrix is

        4 × 5/1.9997 = 10.0015

    to 6 significant figures. This much smaller condition number implies that the second problem is better conditioned.

30.5 Iterative Methods for Systems of Equations

Introduction
There are occasions when direct methods (like Gaussian elimination or the use of an LU decomposition) are not the best way to solve a system of equations. An alternative approach is to use an iterative method. In this Section we will discuss some of the issues involved with iterative methods.

Prerequisites
Before starting this Section you should...
  * revise matrices, especially the material in Workbook 8
  * revise determinants
  * revise matrix norms

Learning Outcomes
On completion you should be able to...
  * approximate the solutions of simple systems of equations by iterative methods
  * assess convergence properties of iterative methods

1. Iterative methods

Suppose we have the system of equations

    AX = B.

The aim here is to find a sequence of approximations which gradually approach X. We will denote these approximations

    X(0), X(1), X(2), ..., X(k), ...

where X(0) is our initial "guess", and the hope is that after a short while these successive iterates will be so close to each other that the process can be deemed to have converged to the required solution X.

Key Point 10
An iterative method is one in which a sequence of approximations (or iterates) is produced. The method is successful if these iterates converge to the true solution of the given problem.

It is convenient to split the matrix A into three parts. We write

    A = L + D + U

where L consists of the elements of A strictly below the diagonal and zeros elsewhere; D is a diagonal matrix consisting of the diagonal entries of A; and U consists of the elements of A strictly above the diagonal. Note that L and U here are not the same matrices as appeared in the LU decomposition! The current L and U are much easier to find. For example

    [ 3  4 ]   [ 0  0 ]   [ 3  0 ]   [ 0  4 ]
    [ 2  1 ] = [ 2  0 ] + [ 0  1 ] + [ 0  0 ]
       A          L          D          U

and

    [ 2  6  1 ]   [ 0  0  0 ]   [ 2  0  0 ]   [ 0  6  1 ]
    [ 3  2  0 ] = [ 3  0  0 ] + [ 0  2  0 ] + [ 0  0  0 ]
    [ 4  1  7 ]   [ 4  1  0 ]   [ 0  0  7 ]   [ 0  0  0 ]
        A             L             D             U

and, more generally for 3 × 3 matrices,

    [ *  *  * ]   [ 0  0  0 ]   [ *  0  0 ]   [ 0  *  * ]
    [ *  *  * ] = [ *  0  0 ] + [ 0  *  0 ] + [ 0  0  * ]
    [ *  *  * ]   [ *  *  0 ]   [ 0  0  * ]   [ 0  0  0 ]
        A             L             D             U

The Jacobi iteration

The simplest iterative method is called Jacobi iteration and the basic idea is to use the A = L + D + U partitioning of A to write AX = B in the form

    DX = -(L + U)X + B.

We use this equation as the motivation to define the iterative process

    DX(k+1) = -(L + U)X(k) + B

which gives X(k+1) as long as D has no zeros down its diagonal, that is as long as D is invertible. This is Jacobi iteration.

Key Point 11
The Jacobi iteration for approximating the solution of AX = B, where A = L + D + U, is given by

    X(k+1) = -D⁻¹(L + U)X(k) + D⁻¹B.

Example 18
Use the Jacobi iteration to approximate the solution X = [x1; x2; x3] of

    [ 8  -2   4 ] [ x1 ]   [ 16 ]
    [ -3  5  -1 ] [ x2 ] = [ 4  ]
    [ 2  -1   4 ] [ x3 ]   [ 12 ]

Use the initial guess X(0) = [0; 0; 0].

Solution

In this case

    D = [ 8  0  0 ]                [ 0  -2   4 ]
        [ 0  5  0 ]   and  L + U = [ -3  0  -1 ]
        [ 0  0  4 ]                [ 2  -1   0 ]

First iteration.
The first iteration is DX(1) = -(L + U)X(0) + B, or in full

    [ 8  0  0 ] [ x1(1) ]     [ 0  -2   4 ] [ x1(0) ]   [ 16 ]   [ 16 ]
    [ 0  5  0 ] [ x2(1) ] = - [ -3  0  -1 ] [ x2(0) ] + [ 4  ] = [ 4  ]
    [ 0  0  4 ] [ x3(1) ]     [ 2  -1   0 ] [ x3(0) ]   [ 12 ]   [ 12 ]

since the initial guess was x1(0) = x2(0) = x3(0) = 0. Taking this information row by row we see that

    8 x1(1) = 16   =>  x1(1) = 2
    5 x2(1) = 4    =>  x2(1) = 0.8
    4 x3(1) = 12   =>  x3(1) = 3

Thus the first Jacobi iteration gives us X(1) = [2; 0.8; 3] as an approximation to X.

Second iteration.
The second iteration is DX(2) = -(L + U)X(1) + B. Taking this information row by row we see that

    8 x1(2) = 2 x2(1) - 4 x3(1) + 16 = 2(0.8) - 4(3) + 16 = 5.6    =>  x1(2) = 0.7
    5 x2(2) = 3 x1(1) + x3(1) + 4 = 3(2) + 3 + 4 = 13              =>  x2(2) = 2.6
    4 x3(2) = -2 x1(1) + x2(1) + 12 = -2(2) + 0.8 + 12 = 8.8       =>  x3(2) = 2.2

Therefore the second iterate approximating X is X(2) = [0.7; 2.6; 2.2].

HELM (2015): Section 30.5: Iterative Methods for Systems of Equations

Solution (contd.)

Third iteration.
The third iteration is DX(3) = -(L + U)X(2) + B. Taking this information row by row we see that

    8 x1(3) = 2 x2(2) - 4 x3(2) + 16 = 2(2.6) - 4(2.2) + 16 = 12.4   =>  x1(3) = 1.55
    5 x2(3) = 3 x1(2) + x3(2) + 4 = 3(0.7) + 2.2 + 4 = 8.3           =>  x2(3) = 1.66
    4 x3(3) = -2 x1(2) + x2(2) + 12 = -2(0.7) + 2.6 + 12 = 13.2      =>  x3(3) = 3.3

Therefore the third iterate approximating X is X(3) = [1.55; 1.66; 3.3].

More iterations...
Three iterations is plenty when doing these calculations by hand! But the repetitive nature of the process is ideally suited to its implementation on a computer. It turns out that the next few iterates are

    X(4) = [0.765; 2.39; 2.64],   X(5) = [1.277; 1.787; 3.215],   X(6) = [0.839; 2.209; 2.808]

to 3 d.p. Carrying on even further, X(20) = [0.9959; 2.0043; 2.9959] to 4 d.p. After about 40 iterations successive iterates are equal to 4 d.p. Continuing the iteration even further causes the iterates to agree to more and more decimal places. The method converges to the exact answer

    X = [ 1 ]
        [ 2 ]
        [ 3 ]

The following Task involves calculating just two iterations of the Jacobi method.
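As noted above, the repetitive process is ideally suited to a computer. The sketch below (a hypothetical helper, not HELM code) implements the Jacobi update X(k+1) = D⁻¹(B − (L+U)X(k)) and reproduces the iterates of Example 18.

```python
import numpy as np

# Jacobi iteration: divide (B - (L+U)X) componentwise by the diagonal of A.
def jacobi(A, B, X0, iterations):
    D = np.diag(A)             # diagonal entries of A as a vector
    R = A - np.diagflat(D)     # R = L + U (off-diagonal part)
    X = X0.astype(float)
    for _ in range(iterations):
        X = (B - R @ X) / D
    return X

A = np.array([[8.0, -2.0, 4.0],
              [-3.0, 5.0, -1.0],
              [2.0, -1.0, 4.0]])
B = np.array([16.0, 4.0, 12.0])
print(jacobi(A, B, np.zeros(3), 3))    # ≈ [1.55, 1.66, 3.3], i.e. X(3) above
print(jacobi(A, B, np.zeros(3), 100))  # ≈ [1, 2, 3], the exact solution
```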

Task
Carry out two iterations of the Jacobi method to approximate the solution of

    [ 4  -1  -1 ] [ x1 ]   [ 1 ]
    [ -1  4  -1 ] [ x2 ] = [ 2 ]
    [ -1 -1   4 ] [ x3 ]   [ 3 ]

with the initial guess X(0) = [1; 1; 1].

Your solution
First iteration:

Answer
The first iteration is DX(1) = -(L + U)X(0) + B, that is,

    [ 4  0  0 ] [ x1(1) ]     [ 0  -1  -1 ] [ x1(0) ]   [ 1 ]
    [ 0  4  0 ] [ x2(1) ] = - [ -1  0  -1 ] [ x2(0) ] + [ 2 ]
    [ 0  0  4 ] [ x3(1) ]     [ -1 -1   0 ] [ x3(0) ]   [ 3 ]

from which it follows that X(1) = [0.75; 1; 1.25].

Your solution
Second iteration:

Answer
The second iteration is DX(2) = -(L + U)X(1) + B, that is,

    [ 4  0  0 ] [ x1(2) ]     [ 0  -1  -1 ] [ x1(1) ]   [ 1 ]
    [ 0  4  0 ] [ x2(2) ] = - [ -1  0  -1 ] [ x2(1) ] + [ 2 ]
    [ 0  0  4 ] [ x3(2) ]     [ -1 -1   0 ] [ x3(1) ]   [ 3 ]

from which it follows that X(2) = [0.8125; 1; 1.1875].

Notice that at each iteration the first thing we do is get a new approximation for x1, and then we continue to use the old approximation to x1 in subsequent calculations for that iteration! Only at the next iteration do we use the new value. Similarly, we continue to use an old approximation to x2 even after we have worked out a new one. And so on.

Given that the iterative process is supposed to improve our approximations, why not use the better values straight away? This observation is the motivation for what follows.

Gauss-Seidel iteration

The approach here is very similar to that used in Jacobi iteration. The only difference is that we use new approximations to the entries of X as soon as they are available. As we will see in the Example below, this means rearranging (L + D + U)X = B slightly differently from what we did for Jacobi. We write

    (D + L)X = -UX + B

and use this as the motivation to define the iteration

    (D + L)X(k+1) = -UX(k) + B.

Key Point 12
The Gauss-Seidel iteration for approximating the solution of AX = B is given by

    X(k+1) = -(D + L)⁻¹ U X(k) + (D + L)⁻¹ B.

Example 19, which follows, revisits the system of equations we saw earlier in this Section in Example 18.

Example 19
Use the Gauss-Seidel iteration to approximate the solution X = [x1; x2; x3] of

    [ 8  -2   4 ] [ x1 ]   [ 16 ]
    [ -3  5  -1 ] [ x2 ] = [ 4  ]
    [ 2  -1   4 ] [ x3 ]   [ 12 ]

Use the initial guess X(0) = [0; 0; 0].

Solution

In this case

    D + L = [ 8   0  0 ]            [ 0  -2   4 ]
            [ -3  5  0 ]   and  U = [ 0   0  -1 ]
            [ 2  -1  4 ]            [ 0   0   0 ]

First iteration.
The first iteration is (D + L)X(1) = -UX(0) + B. Since the initial guess was x1(0) = x2(0) = x3(0) = 0, taking this information row by row we see that

    8 x1(1) = 16                                           =>  x1(1) = 2
    5 x2(1) = 3 x1(1) + 4 = 3(2) + 4 = 10                  =>  x2(1) = 2
    4 x3(1) = -2 x1(1) + x2(1) + 12 = -2(2) + 2 + 12 = 10  =>  x3(1) = 2.5

(Notice how the new approximations to x1 and x2 were used immediately after they were found.)

Thus the first Gauss-Seidel iteration gives us X(1) = [2; 2; 2.5] as an approximation to X.

Solution (contd.)

Second iteration.
The second iteration is (D + L)X(2) = -UX(1) + B. Taking this information row by row we see that

    8 x1(2) = 2 x2(1) - 4 x3(1) + 16 = 2(2) - 4(2.5) + 16 = 10        =>  x1(2) = 1.25
    5 x2(2) = 3 x1(2) + x3(1) + 4 = 3(1.25) + 2.5 + 4 = 10.25         =>  x2(2) = 2.05
    4 x3(2) = -2 x1(2) + x2(2) + 12 = -2(1.25) + 2.05 + 12 = 11.55    =>  x3(2) = 2.8875

Therefore the second iterate approximating X is X(2) = [1.25; 2.05; 2.8875].

Third iteration.
The third iteration is (D + L)X(3) = -UX(2) + B. Taking this information row by row we see that

    8 x1(3) = 2 x2(2) - 4 x3(2) + 16 = 2(2.05) - 4(2.8875) + 16 = 8.55     =>  x1(3) = 1.0687
    5 x2(3) = 3 x1(3) + x3(2) + 4 = 3(1.0687) + 2.8875 + 4 = 10.0938       =>  x2(3) = 2.0187
    4 x3(3) = -2 x1(3) + x2(3) + 12 = -2(1.0687) + 2.0187 + 12 = 11.8813   =>  x3(3) = 2.9703

to 4 d.p. Therefore the third iterate approximating X is X(3) = [1.0687; 2.0187; 2.9703].

More iterations...
Again, there is little to be learned from pushing this further by hand. Putting the procedure on a computer and seeing how it progresses is instructive, however, and the iteration continues as follows:

    X(4) = [1.0195; 2.0058; 2.9917],   X(5) = [1.0056; 2.0017; 2.9976],   X(6) = [1.0016; 2.0005; 2.9993],

    X(7) = [1.0005; 2.0001; 2.9998],   X(8) = [1.0001; 2.0000; 2.9999],   X(9) = [1.0000; 2.0000; 3.0000]

(to 4 d.p.). Subsequent iterates are equal to X(9) to this number of decimal places. The Gauss-Seidel iteration has converged to 4 d.p. in 9 iterations. It took the Jacobi method almost 40 iterations to achieve this!

Task
Carry out two iterations of the Gauss-Seidel method to approximate the solution of

    [ 4  -1  -1 ] [ x1 ]   [ 1 ]
    [ -1  4  -1 ] [ x2 ] = [ 2 ]
    [ -1 -1   4 ] [ x3 ]   [ 3 ]

with the initial guess X(0) = [1; 1; 1].

Your solution
First iteration

Answer
The first iteration is (D + L)X(1) = -UX(0) + B, that is,

    [ 4   0  0 ] [ x1(1) ]     [ 0  -1  -1 ] [ x1(0) ]   [ 1 ]
    [ -1  4  0 ] [ x2(1) ] = - [ 0   0  -1 ] [ x2(0) ] + [ 2 ]
    [ -1 -1  4 ] [ x3(1) ]     [ 0   0   0 ] [ x3(0) ]   [ 3 ]

from which it follows that X(1) = [0.75; 0.9375; 1.1719].

Your solution
Second iteration

Answer
The second iteration is (D + L)X(2) = -UX(1) + B, that is,

    [ 4   0  0 ] [ x1(2) ]     [ 0  -1  -1 ] [ x1(1) ]   [ 1 ]
    [ -1  4  0 ] [ x2(2) ] = - [ 0   0  -1 ] [ x2(1) ] + [ 2 ]
    [ -1 -1  4 ] [ x3(2) ]     [ 0   0   0 ] [ x3(1) ]   [ 3 ]

from which it follows that X(2) = [0.7773; 0.9873; 1.1912].

2. Do these iterative methods always work?

No. It is not difficult to invent examples where the iteration fails to approach the solution of AX = B. The key point is related to matrix norms seen in the preceding Section.

The two iterative methods we encountered above are both special cases of the general form

    X(k+1) = M X(k) + N.

1. For the Jacobi method we choose M = -D⁻¹(L + U) and N = D⁻¹B.
2. For the Gauss-Seidel method we choose M = -(D + L)⁻¹U and N = (D + L)⁻¹B.

The following Key Point gives the main result.

Key Point 13
For the iterative process X(k+1) = M X(k) + N, the iteration will converge to a solution if the norm of M is less than 1.
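Before examining Key Point 13 more closely, the Gauss-Seidel sweep of Example 19 can be sketched in code (a hypothetical helper, not HELM code); the defining feature is that each new component of X is used as soon as it has been computed.

```python
import numpy as np

# Gauss-Seidel: update X in place, component by component, within each sweep.
def gauss_seidel(A, B, X0, iterations):
    n = len(B)
    X = X0.astype(float)
    for _ in range(iterations):
        for i in range(n):
            # X[:i] already holds the NEW values; X[i+1:] still holds the old ones.
            s = A[i, :i] @ X[:i] + A[i, i + 1:] @ X[i + 1:]
            X[i] = (B[i] - s) / A[i, i]
    return X

A = np.array([[8.0, -2.0, 4.0],
              [-3.0, 5.0, -1.0],
              [2.0, -1.0, 4.0]])
B = np.array([16.0, 4.0, 12.0])
print(gauss_seidel(A, B, np.zeros(3), 1))  # [2, 2, 2.5], the first iterate of Example 19
print(gauss_seidel(A, B, np.zeros(3), 9))  # ≈ [1, 2, 3] to 4 d.p., as stated above
```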

Care is required in understanding what Key Point 13 says. Remember that there are lots of different ways of defining the norm of a matrix (we saw three of them). If you can find a norm (any norm) such that the norm of M is less than 1, then the iteration will converge. It doesn't matter if there are other norms which give a value greater than 1; all that matters is that there is one norm that is less than 1.

Key Point 13 above makes no reference to the starting "guess" X(0). The convergence of the iteration is independent of where you start! (Of course, if we start with a really bad initial guess then we can expect to need lots of iterations.)

Task
Show that the Jacobi iteration used to approximate the solution of

    [ 4  1  1 ] [ x1 ]   [ 1 ]
    [ 1  5  2 ] [ x2 ] = [ 2 ]
    [ 1  0  2 ] [ x3 ]   [ 3 ]

is certain to converge. (Hint: calculate the norm of -D⁻¹(L + U).)

Your solution

Answer
The Jacobi iteration matrix is

    -D⁻¹(L + U) = - [ 1/4  0    0   ] [ 0  1  1 ]   [ 0     -0.25  -0.25 ]
                    [ 0    1/5  0   ] [ 1  0  2 ] = [ -0.2   0     -0.4  ]
                    [ 0    0    1/2 ] [ 1  0  0 ]   [ -0.5   0      0    ]

and the infinity norm of this matrix is the maximum of 0.25 + 0.25, 0.2 + 0.4 and 0.5, that is

    ‖-D⁻¹(L + U)‖∞ = 0.6

which is less than 1, and therefore the iteration will converge.
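The norm calculation in this Task is a one-liner to verify numerically (a sketch, using the same matrix as the Task):

```python
import numpy as np

# Jacobi iteration matrix M = -D^(-1)(L+U) and its infinity norm.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 0.0, 2.0]])
D = np.diagflat(np.diag(A))
M = -np.linalg.inv(D) @ (A - D)
print(round(np.linalg.norm(M, np.inf), 10))  # 0.6 < 1, so the Jacobi iteration converges
```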

Guaranteed convergence

If the matrix has the property that it is strictly diagonally dominant, which means that in each row the diagonal entry is larger in magnitude than the sum of the absolute values of the other entries in that row, then both Jacobi and Gauss-Seidel are guaranteed to converge. The reason for this is that if A is strictly diagonally dominant then the iteration matrix M will have an infinity norm that is less than 1.

A small system is the subject of Example 20 below. A large system with slow convergence is the subject of Engineering Example 1 at the end of this Section.

Example 20
Show that

    A = [ 4  1  1 ]
        [ 1  5  2 ]
        [ 1  0  2 ]

is strictly diagonally dominant.

Solution
Looking at the diagonal entry of each row in turn we see that

    4 > 1 + 1 = 2
    5 > 1 + 2 = 3
    2 > 1 + 0 = 1

and this means that the matrix is strictly diagonally dominant.

Given that A above is strictly diagonally dominant, it is certain that both the Jacobi and Gauss-Seidel iterations will converge.

What's so special about strict diagonal dominance?
In many applications we can be certain that the coefficient matrix A will be strictly diagonally dominant. We will see examples of this in Workbooks 32 and 33 when we consider approximating solutions of differential equations.
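The row-by-row check of Example 20 generalises to a small test function. The helper below is hypothetical (not from the workbook) but implements exactly the definition given above.

```python
import numpy as np

# Hypothetical helper: is A strictly diagonally dominant?
def strictly_diagonally_dominant(A):
    diag = np.abs(np.diag(A))
    off_diag_sums = np.sum(np.abs(A), axis=1) - diag  # row sums excluding the diagonal
    return bool(np.all(diag > off_diag_sums))

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 0.0, 2.0]])
print(strictly_diagonally_dominant(A))  # True, as shown in Example 20
```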

Exercises

1. Consider the system

    [ 2  -1 ] [ x1 ]   [ 2 ]
    [ -1  2 ] [ x2 ] = [ 5 ]

   (a) Use the starting guess X(0) = [1; 1] in an implementation of the Jacobi method to show that X(1) = [1.5; 3]. Find X(2) and X(3).

   (b) Use the starting guess X(0) = [1; 1] in an implementation of the Gauss-Seidel method to show that X(1) = [1.5; 3.25]. Find X(2) and X(3).

   (Hint: it might help you to know that the exact solution is [3; 4].)

2. (a) Show that the Jacobi iteration applied to the system

        [ 5   1   0   0 ] [ x1 ]   [ 7  ]
        [ 1   5  -1   0 ] [ x2 ]   [ 10 ]
        [ 0  -1   5   1 ] [ x3 ] = [ 6  ]
        [ 0   0   1   5 ] [ x4 ]   [ 16 ]

    can be written

        X(k+1) = [ 0    -0.2   0     0   ]        [ 1.4 ]
                 [ -0.2  0     0.2   0   ] X(k) + [ 2   ]
                 [ 0     0.2   0    -0.2 ]        [ 1.2 ]
                 [ 0     0    -0.2   0   ]        [ 3.2 ]

    (b) Show that the method is certain to converge and calculate the first three iterations using zero starting values.

    (Hint: the exact solution to the stated problem is [1; 2; 1; 3].)

Answers

1. (a) 2 x1(1) = 2 + x2(0) = 3, and therefore x1(1) = 1.5; 2 x2(1) = 5 + x1(0) = 6, which implies that x2(1) = 3. These two values give the required entries in X(1). A second and third iteration follow in a similar way to give

        X(2) = [ 2.5  ]    and    X(3) = [ 2.625 ]
               [ 3.25 ]                  [ 3.75  ]

   (b) 2 x1(1) = 2 + x2(0) = 3, and therefore x1(1) = 1.5. This new approximation to x1 is used straight away when finding a new approximation to x2: 2 x2(1) = 5 + x1(1) = 6.5, which implies that x2(1) = 3.25. These two values give the required entries in X(1). A second and third iteration follow in a similar way to give

        X(2) = [ 2.625  ]    and    X(3) = [ 2.906250 ]
               [ 3.8125 ]                  [ 3.953125 ]

    where X(3) is given to 6 decimal places.

2. (a) In this case D = 5I, so that

        D⁻¹ = [ 0.2  0    0    0   ]
              [ 0    0.2  0    0   ]
              [ 0    0    0.2  0   ]
              [ 0    0    0    0.2 ]

    So the iteration matrix is

        M = -D⁻¹(L + U) = [ 0    -0.2   0     0   ]
                          [ -0.2  0     0.2   0   ]
                          [ 0     0.2   0    -0.2 ]
                          [ 0     0    -0.2   0   ]

    and the Jacobi iteration takes the form

        X(k+1) = M X(k) + D⁻¹ [ 7  ]            [ 1.4 ]
                              [ 10 ]  = M X(k) + [ 2   ]
                              [ 6  ]            [ 1.2 ]
                              [ 16 ]            [ 3.2 ]

    as required.

Answers

2. (b) The infinity norm of the iteration matrix M is 0.4, which is less than 1, so the iteration is certain to converge. Using the starting values x1(0) = x2(0) = x3(0) = x4(0) = 0, the first iteration of the Jacobi method gives

        x1(1) = -0.2 x2(0) + 1.4 = 1.4
        x2(1) = -0.2(x1(0) - x3(0)) + 2 = 2
        x3(1) = 0.2(x2(0) - x4(0)) + 1.2 = 1.2
        x4(1) = -0.2 x3(0) + 3.2 = 3.2

    The second iteration is

        x1(2) = -0.2 x2(1) + 1.4 = 1
        x2(2) = -0.2(x1(1) - x3(1)) + 2 = 1.96
        x3(2) = 0.2(x2(1) - x4(1)) + 1.2 = 0.96
        x4(2) = -0.2 x3(1) + 3.2 = 2.96

    And the third iteration is

        x1(3) = -0.2 x2(2) + 1.4 = 1.008
        x2(3) = -0.2(x1(2) - x3(2)) + 2 = 1.992
        x3(3) = 0.2(x2(2) - x4(2)) + 1.2 = 1
        x4(3) = -0.2 x3(2) + 3.2 = 3.008
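The iterates of Exercise 2 can be reproduced directly from the X(k+1) = M X(k) + N form (a sketch, using the matrices stated in the exercise):

```python
import numpy as np

# Exercise 2: Jacobi in the form X(k+1) = M X(k) + N, starting from zero.
M = np.array([[ 0.0, -0.2,  0.0,  0.0],
              [-0.2,  0.0,  0.2,  0.0],
              [ 0.0,  0.2,  0.0, -0.2],
              [ 0.0,  0.0, -0.2,  0.0]])
N = np.array([1.4, 2.0, 1.2, 3.2])

print(np.linalg.norm(M, np.inf))  # 0.4 < 1, so convergence is certain (Key Point 13)

X = np.zeros(4)
for _ in range(3):
    X = M @ X + N
print(X)  # ≈ [1.008, 1.992, 1, 3.008], heading towards the exact [1, 2, 1, 3]
```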

Engineering Example 1

Detecting a train on a track

Introduction
One means of detecting trains is the 'track circuit' which uses current fed along the rails to detect the presence of a train. A voltage is applied to the rails at one end of a section of track and a relay is attached across the other end, so that the relay is energised if no train is present, whereas the wheels of a train will short circuit the relay, causing it to de-energise. Any failure in the power supply or a breakage in a wire will also cause the relay to de-energise, for the system is fail safe. Unfortunately, there is always leakage between the rails, so this arrangement is slightly complicated to analyse.

Problem in words
A 1000 m track circuit is modelled as ten sections each 100 m long. The resistance of 100 m of one rail may be taken to be 0.017 ohms, and the leakage resistance across a 100 m section taken to be 30 ohms. The detecting relay and the wires to it have a resistance of 10 ohms, and the wires from the supply to the rail connection have a resistance of 5 ohms for the pair. The voltage applied at the supply is 4 V. See diagram below. What is the current in the relay?

Figure 1: the track circuit, showing the 4 volt supply with its 5 ohm feed wires, the 0.017 ohm rail sections carrying the currents i1, i2, ..., i10, the 30 ohm leakage resistances between the rails, and the relay and wires (10 ohms) carrying the current i11.

Mathematical statement of problem
There are many ways to apply Kirchhoff's laws to solve this, but one which gives a simple set of equations in a suitable form to solve is shown below. i1 is the current in the first section of rail (i.e. the one close to the supply), i2, i3, ..., i10 the currents in the successive sections of rail, and i11 the current in the wires to the relay. The leakage current between the first and second sections of rail is i1 - i2, so that the voltage across the rails there is 30(i1 - i2) volts.
The first equation below uses this and the voltage drop in the feed wires, the next nine equations compare the voltage drop across successive sections of track with the drop in the (two) rails, and the last equation compares the voltage drop across the last section with that in the relay wires.

    30(i1 - i2) = 4 - (5.034) i1
    30(i1 - i2) = 0.034 i2 + 30(i2 - i3)
    30(i2 - i3) = 0.034 i3 + 30(i3 - i4)
        ...
    30(i9 - i10) = 0.034 i10 + 30(i10 - i11)
    30(i10 - i11) = 10 i11

These can be reformulated in matrix form as A i = v, where v is the 11 × 1 column vector with first entry 4 and the other entries zero, i is the column vector with entries i1, i2, ..., i11, and A is the tridiagonal 11 × 11 matrix

    A = [ 35.034  -30                                  ]
        [ -30     60.034  -30                          ]
        [         -30     60.034  -30                  ]
        [                  ...     ...     ...         ]
        [                  -30     60.034  -30         ]
        [                          -30     40          ]

Find the current i11 in the relay when the input is 4 V, by Gaussian elimination or by performing an LU decomposition of A.

Mathematical analysis
We solve A i = v as above, although actually we only want to know i11. Letting M be the matrix A with the column v added at the right, as in Section 30.2, then performing Gaussian elimination on M, working to four decimal places, reduces it to upper triangular form (the final pivot being 12.0296), from which we can calculate that the solution i is

    i = [ 0.5356, 0.4921, 0.4492, 0.4068, 0.3649, 0.3234, 0.2822, 0.2414, 0.2008, 0.1605, 0.1204 ]ᵀ

so the current in the relay is 0.1204 amps, or 0.12 A to two decimal places.

You can alternatively solve this problem by an LU decomposition by finding matrices L and U such that M = LU. Here we have

L unit lower triangular, with 1.0000 in each diagonal position and the successive elimination multipliers on its subdiagonal (the last of these being -0.9323), and U upper triangular with final diagonal entry 12.0295. Therefore

    U i = [ 4.0000, 3.4240, 2.9892, 2.6514, 2.3783, 2.1547, 1.9673, 1.8079, 1.6705, 1.5519, 1.4464 ]ᵀ

and hence

    i = [ 0.5352, 0.4917, 0.4487, 0.4064, 0.3644, 0.3230, 0.2819, 0.2411, 0.2006, 0.1603, 0.1202 ]ᵀ

and again the current is found to be 0.12 amps.

Mathematical comment
You can try to solve the equation A i = v by Jacobi or Gauss-Seidel iteration, but in both cases it will take very many iterations (over 200 to get four decimal places). Convergence is very slow because the norms of the relevant matrices in the iteration are only just less than 1. Convergence is nevertheless assured because the matrix A is diagonally dominant.
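The system is small enough to assemble and solve in a few lines; in the sketch below `np.linalg.solve` plays the role of the Gaussian elimination performed above (the tridiagonal entries come from the circuit equations: 35.034 = 30 + 5.034, 60.034 = 30 + 0.034 + 30, 40 = 30 + 10).

```python
import numpy as np

# Assemble the 11x11 track-circuit system A i = v described above.
n = 11
A = np.zeros((n, n))
A[0, 0], A[0, 1] = 35.034, -30.0                   # 30(i1-i2) = 4 - 5.034 i1
for k in range(1, n - 1):                          # -30 i_(k-1) + 60.034 i_k - 30 i_(k+1) = 0
    A[k, k - 1], A[k, k], A[k, k + 1] = -30.0, 60.034, -30.0
A[n - 1, n - 2], A[n - 1, n - 1] = -30.0, 40.0     # 30(i10 - i11) = 10 i11

v = np.zeros(n)
v[0] = 4.0
current = np.linalg.solve(A, v)
print(round(current[-1], 2))  # relay current i11: 0.12 (amps)
```

The two hand computations above gave 0.1204 A and 0.1202 A because of 4 d.p. rounding; the full-precision solve lands between them.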

Index for Workbook 30

    Augmented matrix
    Back substitution
    Condition number
    Conditioning
    Convergence of iterative methods
    Diagonal dominance
    Error bounds
    Forward substitution
    Gaussian elimination
    Gauss-Seidel iteration
    Ill-conditioning
    Iterative methods
    Jacobi iteration
    Lower triangular matrix
    LU decomposition
    Matrix norm (1-norm, Euclidean, infinity)
    Partial pivoting
    Pivoting
    Quadratic equation
    Rounding (down, error, up)
    Simultaneous equations
    Train on track
    Upper triangular matrix
    Well-conditioning

    Engineering Examples:
    1  Detecting a train on a track
