A Least-Square-Distance Curve-Fitting Technique - NASA


NASA TN D-6374

NASA TECHNICAL NOTE

A LEAST-SQUARE-DISTANCE CURVE-FITTING TECHNIQUE

by John Q. Howell
Langley Research Center
Hampton, Va. 23365

NATIONAL AERONAUTICS AND SPACE ADMINISTRATION    WASHINGTON, D.C.    JULY 1971

Report No.: NASA TN D-6374
Title and Subtitle: A LEAST-SQUARE-DISTANCE CURVE-FITTING TECHNIQUE
Report Date: July 1971
Author(s): John Q. Howell
Performing Organization Report No.: L-7675
Work Unit No.: 125-21-21-01
Performing Organization Name and Address: NASA Langley Research Center, Hampton, Va. 23365
Type of Report and Period Covered: Technical Note
Sponsoring Agency Name and Address: National Aeronautics and Space Administration, Washington, D.C. 20546

Abstract: A method is presented for fitting a function with n parameters y = f(α1,α2,...,αn; x) to a set of N data points (x̃i,ỹi) in a manner that minimizes the sum of the squares of the distances from the data points to the curve. A differential-correction scheme is used to solve for the parameters in an iterative manner until the best fit is obtained. Two methods for finding the distances from the data points to the curve and a listing of the curve-fitting computer program are also given.

Key Words (Suggested by Author(s)): Curve fitting; Least-square distance; Least squares
Distribution Statement: Unclassified - Unlimited
Security Classif. (of this report): Unclassified
Security Classif. (of this page): Unclassified
No. of Pages: 24
Price: $3.00

CONTENTS

SUMMARY
INTRODUCTION
SYMBOLS
DERIVATION AND DISCUSSION OF NEW TECHNIQUE
EXAMPLES OF APPLICATION OF NEW TECHNIQUE
CONCLUDING REMARKS
APPENDIX A - TWO NUMERICAL METHODS FOR FINDING THE DISTANCE FROM A POINT TO A CURVE
APPENDIX B - COMPUTER PROGRAM FOR LEAST-SQUARE-DISTANCE TECHNIQUE
REFERENCES

A LEAST-SQUARE-DISTANCE CURVE-FITTING TECHNIQUE

By John Q. Howell
Langley Research Center

SUMMARY

A method is presented for fitting a function with n parameters y = f(α1,α2,...,αn; x) to a set of N data points (x̃i,ỹi) in a manner that minimizes the sum of the squares of the distances from the data points to the curve. A differential-correction scheme is used to solve for the parameters in an iterative manner until the best fit is obtained. Two methods for finding the distances from the data points to the curve and a listing of the curve-fitting computer program are also given.

INTRODUCTION

Most of the generally used methods of fitting a curve to a set of data points minimize a function of the vertical distances from the points to the curve. For example, if (x̃i,ỹi) is a set of N points and y = f(α1,α2,...,αn; x) is a curve with n parameters, then the method of least squares gives values of the parameters that minimize

    E = Σ_{i=1}^{N} [ỹi − f(α1,α2,...,αn; x̃i)]²     (1)

This may be done by taking partial derivatives with respect to the parameters and setting each of the resulting equations equal to zero; that is,

    ∂E/∂αj = −2 Σ_{i=1}^{N} [ỹi − fi] (∂fi/∂αj) = 0     (j = 1, 2, ..., n)     (2)

This set of n equations, sometimes called the normal equations, is then solved for the parameters. As is well known, if f is linear in the parameters, for example, a polynomial in x (ref. 1), a set of simultaneous linear equations merely has to be solved. However, in general, more complicated functions yield simultaneous equations that are

nonlinear. In this case f may be expanded in a truncated Taylor series about a point in parameter space, and in this manner the nonlinear normal equations can be linearized and solved by iteration. The end result is a set of parameters that yields a minimum of equation (1). This is called either the Gauss-Newton method or the method of nonlinear least squares. However, when f(x) has a region where its derivative is large or when both x̃i and ỹi have similar error bounds, it may be more desirable to minimize the distance from each data point to its nearest point on the curve. This minimum distance is the same as the perpendicular distance from the data point to the curve. Scarborough (ref. 2) gives a method for curve fitting that minimizes the sum of the squares of these distances but his method is limited to first-order polynomials. Reed (ref. 3) and Kendall and Stuart (ref. 4) give schemes that are applicable to polynomials of higher order. These same methods are useful for any function in which the parameters enter in a linear fashion. Guest (ref. 5) describes a related technique that minimizes the perpendicular distance from each data point to a straight line tangent to the curve. This tangent is taken at the point on the curve having the same x-coordinate as the data point.

The purpose of the present work is to derive and demonstrate the use of a curve-fitting technique that minimizes the least-square distances from each data point to the curve. The technique described herein works for a general function f and is most useful when the function being fitted contains regions where the slope is small as well as where the slope is large. It is also useful when the data points have error bounds associated with both the x- and y-coordinates. In the latter case, this technique implicitly assumes identical error for both coordinates. Generally the data points and the function can be scaled so that this condition is met.
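For readers following along in a modern language, the Gauss-Newton iteration described above can be sketched as follows. This is a hypothetical Python/NumPy illustration, not part of the original report; the names `gauss_newton`, `f`, and `jac` are invented for this sketch.

```python
import numpy as np

def gauss_newton(f, jac, x, y, a0, iters=50, tol=1e-10):
    """Nonlinear least squares by Gauss-Newton: linearize f about the
    current parameters and solve the normal equations for a correction."""
    a = np.asarray(a0, dtype=float)
    for _ in range(iters):
        r = y - f(x, a)                 # residuals (vertical distances)
        J = jac(x, a)                   # N x n matrix of df/da_j at each x_i
        da = np.linalg.solve(J.T @ J, J.T @ r)   # normal equations
        a = a + da
        if np.max(np.abs(da)) < tol:
            break
    return a

# Example: fit y = a1*exp(a2*x) to exact data; recovers (2, 0.5).
f = lambda x, a: a[0] * np.exp(a[1] * x)
jac = lambda x, a: np.column_stack([np.exp(a[1] * x),
                                    a[0] * x * np.exp(a[1] * x)])
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(0.5 * x)
a = gauss_newton(f, jac, x, y, [1.5, 0.6])
```

As the report notes, convergence of this scheme depends on the starting point in parameter space; the starting guess here is deliberately placed near the solution.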
Other techniques that are less time consuming may also be used in these situations. For example, a judicious choice of weights often makes possible the use of a standard least-squares procedure. However, in these cases a particular choice of weights seldom works for more than a few sets of data. The technique described herein does not have this disadvantage since it provides a fit even when all the weights are set equal to 1.

SYMBOLS

Ajk        defined by equation (10)
Bj         defined by equation (9)
Di         distance from (x̃i,ỹi) to nearest point on curve, equations (4) and (16)
D̂i         value of Di using old parameters, equation (6)

di         distance from (x̃i,ỹi) to some point on curve, equation (17)
E          sum of squares of distances from data points to curve, equations (1) and (3)
f          function to be fitted to data points; y = f(x)
fi         value of function at x̃i, equation (2)
f̂i         value of function at xi with old parameters, equation (8)
n          number of function parameters
N          number of data points
wi         weight associated with data point (x̃i,ỹi)
(x̃i,ỹi)    coordinates of one data point of set to which y = f(x) is being fitted
(xi,yi)    coordinates of point on curve nearest data point (x̃i,ỹi)
α1,...,αn  parameters of y = f(x)
α̂1,...,α̂n  old parameters of y = f(x) during iteration to find least-square-distance fit
σ          root-mean-square deviation of data points from curve, equation (14)

DERIVATION AND DISCUSSION OF NEW TECHNIQUE

There is given a set of N data points (x̃i,ỹi) to which is to be fitted the function y = f(α1,α2,...,αn; x), where α1,α2,...,αn are parameters. To obtain this fit the sum of the least-square distances (sum of the squares of the shortest distances) from the data points to the curve must be minimized. By using one of the techniques given in appendix A, the coordinates (xi,yi) are found for the point on the curve that is nearest each data point (x̃i,ỹi). Then it is desired to minimize

    E = Σ_{i=1}^{N} wi Di²     (3)

where the distance from the ith data point to the curve is

    Di = [(xi − x̃i)² + (yi − ỹi)²]^{1/2}     (4)

The weight to be associated with each point is given by wi. To minimize equation (3) it is necessary to solve the set of n normal equations

    ∂E/∂αj = 0     (j = 1, 2, ..., n)     (5)

In general this is a set of nonlinear simultaneous equations. To solve the set, an iterative procedure sometimes called the method of differential correction can be used. First a Taylor series expansion of Di is made about some point (α̂1,α̂2,...,α̂n) in parameter space. Since Di is a known function of the parameters, the expansion can easily be written

    Di = D̂i + Σ_{k=1}^{n} (∂Di/∂αk) Δαk + higher order terms     (6)

where Δαk = αk − α̂k and the derivatives are evaluated at the old parameters. Now in equation (6) all higher order terms are dropped and only the terms that are linear in Δα are kept. Then substitution of equation (6) into equation (5) gives

    Σ_{i=1}^{N} wi [D̂i + Σ_{k=1}^{n} (∂Di/∂αk) Δαk] (∂Di/∂αj) = 0     (j = 1, 2, ..., n)     (7)

Also, equation (4) yields

    ∂Di/∂αk = [(f̂i − ỹi)/D̂i] (∂f̂i/∂αk)     (8)

where f̂i is the value of the function at xi computed with the old parameters.

Now, by definition,

    Bj = −Σ_{i=1}^{N} wi D̂i (∂Di/∂αj)     (9)

and

    Ajk = Σ_{i=1}^{N} wi (∂Di/∂αj)(∂Di/∂αk)     (10)

and equation (7) then becomes

    Σ_{k=1}^{n} Ajk Δαk = Bj     (j = 1, 2, ..., n)     (11)

The equations needed to fit the curve y = f(α1,α2,...,αn; x) to the set of N points (x̃i,ỹi) have now been derived. To use this procedure, a starting point in parameter space is first chosen and designated (α̂1,α̂2,...,α̂n). Next the distances from each data point to the curve are found by using perhaps one of the techniques outlined in appendix A. Then Bj and Ajk are found from equations (9) and (10) and the simultaneous linear equations in equations (11) are solved for the quantities Δα1, Δα2, ..., Δαn. Lastly, the new parameters are obtained from

    αk = α̂k + Δαk     (k = 1, 2, ..., n)     (12)

This set of new parameters is then used as a new starting point and the cycle repeated. This iteration is carried out until αk converges (Δαk ≪ αk) or until it is obvious that a convergence will not be achieved. In the latter case a better starting point in parameter space generally leads to convergence. It should be pointed out that the point in parameter space to which equations (9) to (11) converge may not be an absolute minimum, that is, the best fit. The end point of the process may be either a relative maximum or a relative minimum. The former case is rather unlikely but at any rate is easily detected by inspection of the value of equation (3) after each iteration. In the latter case a new starting point in parameter space must be chosen to see if convergence is achieved to a point where a smaller value of equation (3) is obtained. Unfortunately, it is in general difficult to tell when the best fit has been found, but once a fit sufficient for the particular need is located, it is not necessary to search farther.
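The complete cycle just described (find the nearest points, form Ajk and Bj, solve for the corrections Δα) can be sketched numerically. This is a hedged Python illustration, not the report's FORTRAN program: the derivatives of Di are taken by finite differences rather than analytically, and the names `nearest_x` and `lsd_fit` are invented for the sketch.

```python
import numpy as np

def nearest_x(f, xt, yt, x0, iters=100):
    """Step toward the abscissa of the nearest point on y = f(x) to
    (xt, yt) by dropping a perpendicular onto the local tangent line."""
    x = x0
    for _ in range(iters):
        h = 1e-6
        fp = (f(x + h) - f(x - h)) / (2 * h)      # tangent slope
        step = (-(x - xt) + (yt - f(x)) * fp) / (1.0 + fp * fp)
        x += step
        if abs(step) < 1e-12:
            break
    return x

def lsd_fit(f, a0, xd, yd, w=None, iters=30):
    """Least-square-distance fit by differential correction on the
    perpendicular distances D_i (equations (9) to (12))."""
    a = np.array(a0, dtype=float)
    n = len(a)
    w = np.ones_like(xd) if w is None else w
    for _ in range(iters):
        def dist(p):
            D = np.empty_like(xd)
            for i, (xt, yt) in enumerate(zip(xd, yd)):
                xc = nearest_x(lambda x: f(x, p), xt, yt, xt)
                D[i] = np.hypot(xc - xt, f(xc, p) - yt)
            return D
        D0 = dist(a)
        # dD_i/da_k by forward differences about the current parameters
        G = np.empty((len(xd), n))
        for k in range(n):
            dp = np.zeros(n)
            dp[k] = 1e-6 * max(1.0, abs(a[k]))
            G[:, k] = (dist(a + dp) - D0) / dp[k]
        A = (w[:, None] * G).T @ G          # A_jk, equation (10)
        B = -(w * D0) @ G                   # B_j,  equation (9)
        da = np.linalg.solve(A, B)          # equation (11)
        a = a + da                          # equation (12)
        if np.max(np.abs(da)) < 1e-10:
            break
    return a

# Fit a straight line y = a1 + a2*x to collinear points: exact recovery.
f = lambda x, a: a[0] + a[1] * x
xd = np.array([0.0, 1.0, 2.0, 3.0])
yd = 1.0 + 2.0 * xd
a = lsd_fit(f, [0.5, 1.5], xd, yd)
```

With collinear data the perpendicular distances can be driven to zero, so the fitted parameters should reproduce the line regardless of the starting guess, provided the guess lies within the basin of convergence.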

EXAMPLES OF APPLICATION OF NEW TECHNIQUE

The two examples given herein are chosen to demonstrate that for some cases the least-square-distance curve-fitting technique gives better results than the standard least-squares method. The first example arose when the author was trying to reduce some experimental plasma-physics data and led ultimately to the least-square-distance curve-fitting technique described in this paper. The second example is chosen since it is commonly known that standard least-squares procedures do not work well on this type of function. For the examples presented here the weights are set equal to one. By properly choosing the weights it may be possible to obtain a fit with the least-squares technique that is as good as that obtained with the least-square-distance method. However, for a different function and often for a different set of data points, a new set of weights would have to be chosen to achieve a good fit again. The least-square-distance technique described in this report does not have this disadvantage.

Because of the extra computations involved in finding the closest point on the curve, the least-square-distance method takes more computer time than the least-squares method. Based on the following two examples it is determined that the least-square-distance method is longer by a factor of approximately 2.5.

Example I

Example I is taken from the field of plasma physics where a common diagnostic tool is the Langmuir probe. The current versus voltage characteristic of this probe is given approximately by

    y = α1 x² + α2 x + α3 + α4 e^{α5 x}     (13)

where x is the voltage, y is the current, and α1, α2, ..., α5 are a set of adjustable parameters. The parameters α4 and α5 are always positive so the exponential term is large for x positive and small for x negative.

When the Langmuir probe is used as a diagnostic tool, the current is typically measured for a large set of voltage points.
Then some curve-fitting technique is used to obtain a fit to the experimental data. The value of α5 is of interest as the electron temperature can be obtained from it. This temperature is then used to calculate a particular voltage in the region where the exponential term is small and the fitted function is used to obtain the corresponding current. The ion density can then be obtained from this current. From this description it is apparent (1) that both x and y may be in error and (2) the fit to the experimental data must be good both in regions where the exponential term is large and where it is small.
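As a concrete illustration, equation (13) and its derivatives can be coded directly. This is a hypothetical Python counterpart of the FUNC subroutine listed in appendix B; the parameter values used in the consistency check below are arbitrary, not the fitted values from the figures.

```python
import numpy as np

def langmuir(x, a):
    """Equation (13): y = a1*x**2 + a2*x + a3 + a4*exp(a5*x)."""
    a1, a2, a3, a4, a5 = a
    return a1 * x**2 + a2 * x + a3 + a4 * np.exp(a5 * x)

def langmuir_xder(x, a):
    """dy/dx, needed by the distance-finding schemes of appendix A."""
    a1, a2, a3, a4, a5 = a
    return 2.0 * a1 * x + a2 + a4 * a5 * np.exp(a5 * x)

def langmuir_pder(x, a):
    """dy/da_k for each parameter (rows), at each x value (columns)."""
    a1, a2, a3, a4, a5 = a
    x = np.asarray(x, dtype=float)
    e = np.exp(a5 * x)
    return np.vstack([x**2, x, np.ones_like(x), e, a4 * x * e])

# Consistency check: analytic slope vs. central difference at x = 1.
a = (0.05, 0.4, 0.57, 2.4, 2.26)
slope = langmuir_xder(1.0, a)
h = 1e-6
fd_slope = (langmuir(1.0 + h, a) - langmuir(1.0 - h, a)) / (2 * h)
```

The rapid growth of the exponential term for x positive is what makes the vertical-distance least-squares fit concentrate on that region, as discussed below.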

Figure 1 shows the fit that is obtained in one particular case by using the nonlinear-least-squares technique. The values of the parameters and the weighted root-mean-square deviation obtained by the curve-fitting schemes are shown in the legend of the figure. The weighted root-mean-square deviation is defined by

    σ = [Σ_{i=1}^{N} wi Di² / Σ_{i=1}^{N} wi]^{1/2}     (14)

Figure 1.- Nonlinear-least-squares fit used with equation (13). α1 = -0.2034; α2 = -1.166; α3 = -1.424; α4 = 3.750; α5 = 2.028; σ = 3.3.

where Di is the vertical distance in the case of the least-squares technique and the shortest distance from the point to the curve for the least-square-distance method. The least-squares fit for x negative is not acceptable (fig. 1) but the fit is good for x positive where the exponential term is large.

Figure 2.- Least-square-distance fit used with equation (13) and same data points as in figure 1. α1 = 0.05013; α2 = 0.3994; α3 = 0.5654; α4 = 2.398; α5 = 2.261; σ = 0.077.

The same set of data points is then used in a program based on the least-square-distance technique derived in the present paper. The result of this fit is shown in figure 2. It is immediately apparent that the fit is much better and is in fact good enough to extract the desired information for the further data analysis as described earlier. The legends of figures 1 and 2 show that only α5 agrees to within 20 percent. As

would be expected from a comparison of figures 1 and 2, the coefficients of the polynomial portion of the function are in violent disagreement. The reason the least-squares technique does not do so well is that it finds a fit that is good in the high-slope and high-magnitude region at the expense of the fit in the small-slope and small-magnitude region.

Example II

Example II is chosen to show that the least-square-distance curve-fitting technique fits functions with a singularity. The function chosen is

    y = α1 x³ + α2 x² + α3 x + α4 + α5/(x − 25)     (15)

In figure 3 the results of the nonlinear-least-squares curve-fitting scheme are shown. The values of the parameters and the weighted root-mean-square deviation of the points from the curve are shown in the legend of the figure. The least-square-distance fit of the same function to the same points is shown in figure 4. As in example I, the fit to the small-magnitude points is better when the least-square-distance technique is used while both methods give similar fits for the large-magnitude points. In example II the initial parameter guess for the least-square-distance method is more critical than usual. With a bad initial guess both distance-finding techniques described in appendix A sometimes achieve convergence to a point on the wrong side of the singularity. This can also happen if the fitting function has a very sharp peak, in which case the distance-finding scheme may achieve convergence to a point on the wrong side of the peak. Of course, if it is desired to fit a function of this type to several sets of data, the program can be designed to alleviate this problem, but the fitting routine has to be different for each particular function.

CONCLUDING REMARKS

In the present paper the least-square-distance curve-fitting method is derived and examples of its use are presented.
This technique fits a function with n parameters y = f(α1,α2,...,αn; x) to a set of N data points (x̃i,ỹi) by minimizing the sum of the squares of the distances from the data points to the curve. A differential-correction scheme is used to solve for the parameters in an iterative manner until the best fit is obtained. Two examples of the use of this technique are presented, both involving functions having large slope variations. In both cases the least-squares fit is found to be lacking when compared to the least-square-distance fit. It is found that the least-squares technique fits the curve to points in the regions of large slope and large magnitude at the expense of the fit in regions of small slope and small magnitude. This does not happen for the least-square-distance method presented in this paper since the sum of the squares

Figure 3.- Nonlinear-least-squares fit used with equation (15). α1 = 0.000819; α2 = -0.06442; α3 = 0.9939; α4 = 3.886; α5 = -12.75; σ = 2.6.

Figure 4.- Least-square-distance fit used with equation (15) and same data points as in figure 3. α1 = 0.000709; α2 = -0.03129; α3 = 0.5789; α4 = 6.147; α5 = -14.04; σ = 0.21.

of the distances from the data points to the curve is minimized. Hence, for functions of this type the least-square-distance technique fits a function to a set of points more accurately than the least-squares method, unless much time is spent in customizing the least-squares weights to the particular function and particular set of data.

Langley Research Center,
National Aeronautics and Space Administration,
Hampton, Va., June 10, 1971.

APPENDIX A

TWO NUMERICAL METHODS FOR FINDING THE DISTANCE FROM A POINT TO A CURVE

In appendix A two methods are presented for finding the distance from the data point (x̃i,ỹi) to the curve y = f(x). The first method minimizes the distance from the curve to the data point, while the second method finds the perpendicular from the curve to the data point. Both these methods locate the point on the curve [xi, f(xi)] nearest the data point. The distance from the data point to the curve is then given by

    Di = {(xi − x̃i)² + [f(xi) − ỹi]²}^{1/2}     (16)

For some very simple cases this point can be found analytically but the assumption is made here that f(x) is of such complexity that this is impossible.

Method I

The distance from some point on the curve y = f(x) to the data point (x̃i,ỹi) is

    di(x) = {(x − x̃i)² + [f(x) − ỹi]²}^{1/2}     (17)

Now an x such that di is minimum may be found by solving

    d(di)/dx = 0     (18)

For the case of di ≠ 0 (for di = 0, the trivial solution is xi = x̃i and yi = ỹi), it is seen from equation (18) that xi must satisfy the equation

    (x − x̃i) + [f(x) − ỹi] f′(x) = 0     (19)

Once xi is found, yi is obtained from yi = f(xi), and equation (16) is used to find Di. Equation (19) can be solved by any convenient method. For cases where the second derivative of f(x) is obtainable, the author has used the Newton-Raphson method with good success. It should be kept in mind that in some cases the solution of equation (19) may yield a Di that is a relative maximum or a relative minimum instead of the absolute minimum that is desired. Fortunately these cases are rare.
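Method I can be sketched as follows. This is hypothetical Python (not the report's code), assuming f′ and f″ are supplied analytically; the parabola example is illustrative and not taken from the report.

```python
import math

def nearest_point_newton(f, fp, fpp, xt, yt, x0, iters=50, tol=1e-12):
    """Solve g(x) = (x - xt) + (f(x) - yt)*f'(x) = 0 (equation (19))
    by Newton-Raphson; returns the abscissa of the nearest point."""
    x = x0
    for _ in range(iters):
        g = (x - xt) + (f(x) - yt) * fp(x)
        gp = 1.0 + fp(x) ** 2 + (f(x) - yt) * fpp(x)
        step = g / gp
        x -= step
        if abs(step) < tol:
            break
    return x

# Nearest point on the parabola y = x**2 to the data point (0, 1):
# minimizing x**2 + (x**2 - 1)**2 gives x = sqrt(1/2), distance sqrt(3)/2.
f = lambda t: t * t
x_near = nearest_point_newton(f, lambda t: 2.0 * t, lambda t: 2.0,
                              0.0, 1.0, 1.0)
dist = math.hypot(x_near - 0.0, f(x_near) - 1.0)
```

Note that starting the Newton iteration at x0 = 0 in this example would land on the stationary point x = 0, which is a relative maximum of the distance; this is exactly the caution raised in the paragraph above.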

APPENDIX A - Concluded

Method II

The second method may be called the method of successive tangents and does not require higher derivatives of f(x). Consequently it is much more useful for complicated functions. To use this method a point on the curve is initially chosen near where the closest point is thought to be. This initial guess may be designated [xi, f(xi)] and a straight line fitted through this point tangent to the curve. This can be done by using either f(xi) and f′(xi) or f(xi) and f(xi + Δx). In the latter case Δx is some small arbitrarily chosen increment. Once the straight line is found, a perpendicular is dropped to it from the data point (x̃i,ỹi). A better estimate of the closest point on the curve is now obtained by letting the new xi be the x-coordinate of the foot of the perpendicular on the straight line. Then a second straight line tangent to the curve may be fitted through the new point [xi, f(xi)]. This process is repeated until two successive xi's agree to within some previously chosen increment. For cases where f′(x) is easily obtained the author has used this scheme with good success. If the Newton-Raphson method is used with the first method and if f″(x) is zero these two methods are equivalent.
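Method II can be sketched the same way. This is a hypothetical Python illustration: the tangent slope is taken from f(x) and f(x + Δx) as described above, and the parabola example is again illustrative rather than from the report.

```python
def successive_tangent(f, xt, yt, x0, dx=1e-6, tol=1e-10, iters=100):
    """Method of successive tangents: repeatedly drop a perpendicular
    from (xt, yt) onto the tangent line at the current point; the slope
    is estimated from f(x) and f(x + dx)."""
    x = x0
    for _ in range(iters):
        m = (f(x + dx) - f(x)) / dx          # two-point tangent slope
        # foot of the perpendicular from (xt, yt) onto the tangent line
        x_new = x + ((xt - x) + m * (yt - f(x))) / (1.0 + m * m)
        if abs(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x

# Same parabola example as for Method I: nearest point on y = x**2
# to (0, 1) has abscissa sqrt(1/2) = 0.7071...
x_near = successive_tangent(lambda t: t * t, 0.0, 1.0, 1.0)
```

Because no second derivative is used, each step only contracts toward the answer linearly, but the method needs nothing beyond function evaluations, which is why the report builds it into the fitting subroutine.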

APPENDIX B

COMPUTER PROGRAM FOR LEAST-SQUARE-DISTANCE TECHNIQUE

Appendix B contains a description and listing of a least-square-distance curve-fitting program written in FORTRAN. The procedure for finding the distance from the data point to the curve is built into the curve-fitting subroutine. The method used is the successive-tangent method described in appendix A. The curve-fitting subroutine also has a damping procedure (ref. 6) included for increased stability. This program has operated satisfactorily for the author with several different functions but has not been tested extensively.

Main Program

It is felt that a description of the main program is not needed since any potential user has to write the main program around his own particular application.

Least-Square-Distance Curve-Fitting Subroutine

This subroutine assumes the existence of a linear-simultaneous-equation solver called SIMSOL. It is called by the statement

    Call SIMSOL(A,B,M)

and solves the equation in M unknowns given by AX = B. The solution vector for X is returned in B. The curve-fitting subroutine also calls the subroutine FUNC described subsequently. A description of the calling procedure for the curve-fitting subroutine follows.

    Call LSD(X,Y,W,N,AL,M,ERR,RMS)

Use:

X,Y   Vectors containing x- and y-coordinates of data points to which function is being fitted.
W     Vector containing weight associated with each point.
N     The number of points being supplied to subroutine by main program.
AL    Vector containing values of function parameters. Initially a trial set must be supplied. The curve-fitting subroutine iterates and returns a better set.
M     The number of parameters in function being fitted.

APPENDIX B - Continued

ERR   An error criterion that must be supplied to subroutine by main program. The subroutine iterates until RMS_old − RMS_new < ERR * RMS_new.
RMS   Weighted root-mean-square deviation of data points from curve, as defined by equation (14), where Di is distance of ith point from nearest point on curve.

Restrictions:

(1) X, Y, and W are all dimensioned 50 and hence N ≤ 50.
(2) AL is dimensioned 10 and hence M ≤ 10.
(3) A linear-simultaneous-equation solver must be provided as described previously.
(4) A subroutine called FUNC containing information about the function must be supplied. An example is described next.

Description of Subroutine FUNC

The subroutine FUNC contains information about the function being fitted. This subroutine is called by the curve-fitting subroutine described previously. The subroutine listing included herein is used to fit equation (13) to a set of data points as shown in figure 2 and is intended to be an example of how this subroutine may be written.

    Call FUNC(X,Y,N,AL,XDER,DER)

Use:

X     Vector containing values of independent variable.
Y     Vector used to return values of dependent variable to curve-fitting subroutine. For example, Y(I) must contain the value of the function evaluated at X(I) for I = 1 to I = N.
N     The number of X values being supplied to subroutine. If N = 1, only X(1) is supplied and the value of the function and its x-derivative must be returned in Y(1) and XDER, respectively. For other values of N, both Y and DER must be filled and XDER need not be calculated.
AL    Current values of function parameters being supplied to FUNC by LSD.

APPENDIX B - Continued

XDER  Variable containing value of x-derivative of function evaluated at X(1). It need be calculated only when N = 1.
DER   Matrix containing derivatives of function with respect to all function parameters, each evaluated at X(I), for I = 1 to I = N. This matrix must be filled by FUNC whenever N ≠ 1. The defining equation is

    DER(K,I) = ∂f/∂αK evaluated at X(I)

where αK is the Kth function parameter.

Restrictions:

(1) X and Y are dimensioned 50 so N ≤ 50.
(2) AL is dimensioned 10.
(3) DER is dimensioned (10,50).
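The RMS quantity returned by LSD can be sketched in a few lines of Python. The normalization sum(w·D²)/sum(w) is an assumption here, since the defining equation is garbled in this transcription of the report.

```python
import math

def weighted_rms(D, w):
    """Weighted root-mean-square deviation of the data points from the
    curve: sqrt(sum(w_i * D_i**2) / sum(w_i)) (assumed normalization)."""
    return math.sqrt(sum(wi * di * di for di, wi in zip(D, w)) / sum(w))

# With unit weights this reduces to the ordinary RMS of the distances.
rms = weighted_rms([3.0, 4.0], [1.0, 1.0])
```

With all weights equal to 1, as in the report's examples, the weighting drops out entirely.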

APPENDIX B - Continued

C     MAIN PROGRAM
C
C     THIS PROGRAM READS THE INITIAL GUESS AT THE PARAMETERS AND THE DATA
C     POINTS AND THEN CALLS THE LEAST-SQUARE-DISTANCE CURVE FITTER.
C
      PROGRAM CFIT(INPUT,OUTPUT)
      DIMENSION X(50),Y(50),W(50),AL(10)
      ERR=1.E-5
    1 READ 1000,M,(AL(I),I=1,M)
 1000 FORMAT(I2,/(E10))
      READ 1001,N,(X(I),Y(I),W(I),I=1,N)
 1001 FORMAT(I2/(3E10))
      CALL LSD(X,Y,W,N,AL,M,ERR,RMS)
      GO TO 1
      END
C
C     LEAST-SQUARE-DISTANCE CURVE-FITTING SUBROUTINE
C
      SUBROUTINE LSD(X,Y,W,N,AL,M,ERR,RMS)
C
C     THIS IS A LEAST-SQUARE-DISTANCE CURVE-FITTING SUBROUTINE.
C     IT HAS BUILT IN THE SUCCESSIVE-TANGENT-LINE SCHEME TO FIND THE
C     DISTANCE FROM A DATA POINT TO THE CURVE.
C     IT CALLS SIMSOL(A,B,M) TO SOLVE THE LINEAR SIMULTANEOUS EQUATION
C     AX = B IN M UNKNOWNS.
C     IT CALLS FUNC TO OBTAIN INFORMATION ABOUT THE FUNCTION BEING FITTED.
C
      DIMENSION X(50),Y(50),W(50),AL(10),DER(10,50),B(10),A(10,10)
      DIMENSION DIS(50),X1(50),W2(50),ID(2),Y1(50),DX(50)
C
C     THERE IS NO PRINTED OUTPUT FROM THIS SUBROUTINE IF IPRINT = 0.
      IPRINT=0
      RMS1=1.E10
      ID(1)=10HCONVERGED
      ID(2)=10H
      DO 2 I=1,N
    2 DX(I)=0.
C
C     START ITERATION
      DO 100 ITER=1,100
C
C     FIND CLOSEST POINTS ON CURVE
    5 DO 30 I=1,N
      W2(I)=W(I)

APPENDIX B - Continued

C     FIND CLOSEST POINT GIVEN AN INDIVIDUAL DATA POINT
      DO 10 K=1,50
      IF(ABS(W(I)).LT.1.E-9)GO TO 20
      XT=X(I)+DX(I)
      CALL FUNC(XT,Y1,1,AL,XDER,DER)
      DY=Y(I)-Y1(1)
      DXT=(-DX(I)+DY*XDER)/(1.+XDER**2)
C     THE NEXT 3 CARDS PREVENT OVERSHOOT BY DECREASING THE INCREMENT.
C     THIS GIVES INCREASED STABILITY AT THE EXPENSE OF INCREASED
C     CONVERGENCE TIME.
      TT=2.
      IF(ABS(DX(I)).GT.1.E-8)TT=ABS(DXT/DX(I))
      DX(I)=DX(I)+DXT/(1.+TT**5)
      T=ABS(DY)+ABS(DX(I))
      IF(ABS(DXT).LT.ABS(1.E-4*DX(I)).OR.T.LT.1.E-20)GO TO 20
   10 CONTINUE
      PRINT 1005
 1005 FORMAT(* SHORTEST DISTANCE NOT FOUND *)
      W2(I)=0.
   20 CONTINUE
C
C     CLOSEST POINT FOUND.  NOW FIND DISTANCE
      X1(I)=X(I)+DX(I)
      DIS(I)=SQRT(DX(I)**2+DY**2)
      IF(DIS(I).LT.1.E-8)DIS(I)=1.E-8
   30 CONTINUE
C
C     COMPLETE SET OF CLOSEST POINTS AND DISTANCES FOUND.
C     NOW FIND A NEW SET OF PARAMETERS.
C     (STATEMENTS 40 THROUGH 60 OF THE ORIGINAL LISTING ARE NOT LEGIBLE
C     IN THIS TRANSCRIPTION.)

APPENDIX B - Continued

C     THE NEXT 6 CARDS ARE DERIVED FROM A DAMPING TECHNIQUE THAT INCREASES
C     STABILITY AS MENTIONED IN THE DESCRIPTION.  IT WILL INCREASE
C     CONVERGENCE TIME TO SOME EXTENT.
      T=0.
      DO 65 I=1,M
   65 T=T+B(I)**2
      WW=.5*DDM/T
      DO 67 I=1,M
   67 A(I,I)=A(I,I)+.5/WW
C
C     THE FOLLOWING SUBROUTINE CALL SOLVES THE LINEAR SIMULTANEOUS
C     EQUATION GIVEN BY AX = B WITH M UNKNOWNS.
      CALL SIMSOL(A,B,M)
      DO 70 I=1,M
   70 AL(I)=AL(I)+B(I)
  100 CONTINUE
      ID(1)=10HNOT CONVER
      ID(2)=10HGED
  110 CONTINUE
      IF(IPRINT.GE.1)PRINT 1004,ID,RMS,ERR,ITER
 1004 FORMAT(/* SUBROUTINE LSD *2A10/* RMS = *E15.7/* CONVERGENCE C*
     1*RITERION = *E15.7/* ITERATION COUNT = *I5/)
      IF(IPRINT.GE.2)PRINT 1000,(X(I),Y(I),X1(I),DIS(I),W(I),I=1,N)
 1000 FORMAT(* X Y X1 DIS W */(5G15.7))
      RETURN
      END
C
C     SUBROUTINE FUNC CALLED BY CURVE-FITTING SUBROUTINE
C
      SUBROUTINE FUNC(X,Y,N,AL,XDER,DER)
C
C     THIS SUBROUTINE IS CALLED BY LSD AND IS FOR THE LANGMUIR PROBE
C     CURRENT VS VOLTAGE FUNCTION.
      DIMENSION X(50),Y(50),DER(10,50),AL(10)
      IF(N.NE.1)GO TO 50
C
C     IF N = 1, CALCULATE THE FUNCTION AND ITS X-DERIVATIVE AT X(1).
      X1=X(1)
      Y(1)=X1*(AL(2)+AL(1)*X1)+AL(3)+AL(4)*EXP(AL(5)*X1)
      XDER=2.*AL(1)*X1+AL(2)+AL(4)*AL(5)*EXP(AL(5)*X1)
      RETURN
   50 CONTINUE

APPENDIX B - Concluded

C     IF N NOT 1, CALCULATE THE FUNCTION AND DERIVATIVES WITH RESPECT
C     TO ALL PARAMETERS AT POINTS X(I), I=1,N.
      DO 100 I=1,N
      X1=X(I)
      T1=EXP(AL(5)*X1)
      T=AL(4)*T1
      X2=X1*X1
      Y(I)=AL(1)*X2+AL(2)*X1+AL(3)+T
      DER(1,I)=X2
      DER(2,I)=X1
      DER(3,I)=1.
      DER(4,I)=T1
      DER(5,I)=X1*T
  100 CONTINUE
      RETURN
      END

REFERENCES

1. Nielsen, Kaj L.: Methods in Numerical Analysis. Macmillan Co., c.1956.
2. Scarborough, James B.: Numerical Mathematical Analysis. Second ed., Johns Hopkins Press, 1950.
3. Reed, Frank C.: A Method of Least Squares Curve Fitting With Error in Both Variables. NAVORD Rep. 3521, U.S. Navy, June 1955.
4. Kendall, Maurice G.; and Stuart, Alan: The Advanced Theory of Statistics. Vol. 2 - Inference and Relationship. Hafner Pub. Co., Inc., c.1961, p. 409.
5. Guest, P. G.: Numerical Methods of Curve Fitting. Cambridge Univ. Press, 1961, p. 366.
6. Levenberg, Kenneth: A Method for the Solution of Certain Non-Linear Problems in Least Squares. Quart. Appl. Math., vol. II, no. 2, July 1944, pp. 164-168.

NASA-Langley, 1971    L-7675


