Least Squares Fitting of Data by Linear or Quadratic Structures


David Eberly, Geometric Tools, Redmond WA 98052
https://www.geometrictools.com/

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Created: July 15, 1999
Last Modified: September 7, 2021

Contents

1 Introduction
2 The General Formulation for Nonlinear Least-Squares Fitting
3 Affine Fitting of Points Using Height Fields
  3.1 Fitting by a Line in 2 Dimensions
      3.1.1 Pseudocode for Fitting by a Line
  3.2 Fitting by a Plane in 3 Dimensions
      3.2.1 Pseudocode for Fitting by a Plane
  3.3 Fitting by a Hyperplane in n + 1 Dimensions
      3.3.1 Pseudocode for Fitting a Hyperplane
4 Affine Fitting of Points Using Orthogonal Regression
  4.1 Fitting by a Line [1 Dimension]
      4.1.1 Pseudocode for the General Case
  4.2 Fitting by a Hyperplane [(n − 1) Dimensions]
      4.2.1 Pseudocode for the General Case
  4.3 Fitting by a Flat [k Dimensions]
      4.3.1 Pseudocode for the General Case
5 Fitting a Hypersphere to Points
  5.1 Fitting Using Differences of Lengths and Radius
      5.1.1 Pseudocode for the General Case
  5.2 Fitting Using Differences of Squared Lengths and Squared Radius
      5.2.1 Pseudocode for the General Case
      5.2.2 Pseudocode for Circles
      5.2.3 Pseudocode for Spheres
  5.3 Fitting the Coefficients of a Quadratic Equation
      5.3.1 Pseudocode for the General Case
      5.3.2 Pseudocode for Circles
      5.3.3 Pseudocode for Spheres
6 Fitting a Hyperellipsoid to Points
  6.1 Updating the Estimate of the Center
  6.2 Updating the Estimate of the Matrix
  6.3 Pseudocode for the Algorithm
7 Fitting a Cylinder to 3D Points
  7.1 Representation of a Cylinder
  7.2 The Least-Squares Error Function
  7.3 An Equation for the Radius
  7.4 An Equation for the Center
  7.5 An Equation for the Direction
  7.6 Fitting for a Specified Direction
  7.7 Pseudocode and Experiments
  7.8 Fitting a Cylinder to a Triangle Mesh
8 Fitting a Cone to 3D Points
  8.1 Initial Choice for the Parameters of the Error Function
      8.1.1 Simple Attempt to Reconstruct Height Extremes
      8.1.2 Attempt to Reconstruct the Cone Axis Direction
      8.1.3 Attempt to Reconstruct the Cone Vertex
9 Fitting a Paraboloid to 3D Points of the Form (x, y, f(x, y))

1 Introduction

This document describes least-squares minimization algorithms for fitting point sets by linear structures or quadratic structures. The organization is somewhat different from that of the previous version of the document. Modifications include the following.

- A section on the general formulation for nonlinear least-squares fitting is now available. The standard approach is to estimate parameters using numerical minimizers (Gauss–Newton or Levenberg–Marquardt).
- A new algorithm for fitting points by a circle, sphere or hypersphere is provided. The algorithm is non-iterative, so the computation time is bounded and small.
- In the previous version, the sections about fitting of points by ellipses or ellipsoids were severely lacking details and not useful for developing algorithms. Several algorithms are now provided for such fitting, including a general approach for fitting points by hyperellipsoids.
- The document for fitting points by a cylinder has been moved to this document. The website hyperlink to the cylinder document has been redirected to this document.
- A section has been added for fitting points by a single-sided cone.
- Pseudocode is now provided for each of the algorithms. Hyperlinks still exist for those algorithms implemented in the GTE source code.

Other documents using least-squares algorithms for fitting points with curve or surface structures are available at the website. The document for fitting points with a torus is new to the website (as of August 2018).

- Least-Squares Fitting of Data with Polynomials
- Least-Squares Fitting of Data with B-Spline Curves
- Least-Squares Reduction of B-Spline Curves
- Fitting 3D Data with a Helix
- Least-Squares Fitting of Data with B-Spline Surfaces
- Fitting 3D Data with a Torus

The document Least-Squares Fitting of Segments by Line or Plane describes a least-squares algorithm where the input is a set of line segments rather than a set of points. The output is a line (segments in n dimensions) or a plane (segments in 3 dimensions) or a hyperplane (segments in n dimensions).

2 The General Formulation for Nonlinear Least-Squares Fitting

Let F(p) = (F_0(p), F_1(p), ..., F_{n-1}(p)) be a vector-valued function of the parameters p = (p_0, p_1, ..., p_{m-1}). The nonlinear least-squares problem is to minimize the real-valued error function E(p) = |F(p)|^2.

Let J = dF/dp = [∂F_r/∂p_c] denote the Jacobian matrix, which is the matrix of first-order partial derivatives of the components of F. The matrix has n rows and m columns, and the indexing (r, c) refers to row r and column c. A first-order approximation is

    F(p + d) \approx F(p) + J(p) d    (1)

where d is an m×1 vector with small length. Consequently, an approximation to the error function is

    E(p + d) = |F(p + d)|^2 \approx |F(p) + J(p) d|^2    (2)

The goal is to choose d to minimize |F(p) + J(p)d|^2 and, hopefully, with E(p + d) < E(p). Choosing an initial p_0, the hope is that the algorithm generates a sequence p_i for which E(p_{i+1}) < E(p_i) and, in the limit, E(p_j) approaches the global minimum of E. The algorithm is referred to as Gauss–Newton iteration.

For a single Gauss–Newton iteration, we need to choose d to minimize |F(p) + J(p)d|^2 where p is fixed. This is a linear least-squares problem which can be formulated using the normal equations

    J^T(p) J(p) d = -J^T(p) F(p)    (3)

The matrix J^T J is positive semidefinite. If it is invertible, then

    d = -(J^T(p) J(p))^{-1} J^T(p) F(p)    (4)

If it is not invertible, some other algorithm must be used to choose d; one option is to use gradient descent for the step. A Cholesky decomposition can be used to solve the linear system.

During Gauss–Newton iteration, if E does not decrease for a step of the algorithm, one can modify the algorithm to Levenberg–Marquardt iteration. The idea is to smooth the linear system to

    (J^T(p) J(p) + \lambda I) d = -J^T(p) F(p)    (5)

where I is the identity matrix of appropriate size and λ > 0 is the smoothing factor. The strategy for choosing the initial λ and how to adjust it as you compute iterations depends on the problem at hand.

For a more detailed discussion, see Gauss–Newton algorithm and Levenberg–Marquardt algorithm. Implementations of the Cholesky decomposition, Gauss–Newton method and Levenberg–Marquardt method in GTE can be found in CholeskyDecomposition.h, GaussNewtonMinimizer.h and LevenbergMarquardtMinimizer.h.
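To make equations (3) and (5) concrete, the following compilable sketch runs smoothed Gauss–Newton steps for a small example problem, fitting a circle center (a, b) and radius r to 2D samples with residuals F_i = |(x_i, y_i) − (a, b)| − r. The residual choice, the fixed smoothing factor and the Cramer's-rule solver are illustrative assumptions for this sketch only; it is not the GTE implementation (see GaussNewtonMinimizer.h and LevenbergMarquardtMinimizer.h for the production code).

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Circle { double a, b, r; };

// Solve the 3x3 system M * d = rhs by Cramer's rule. Returns false when
// det(M) is nearly zero, in which case some other step (for example,
// gradient descent) must be used.
static bool Solve3x3(double M[3][3], double rhs[3], double d[3])
{
    double det =
        M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1]) -
        M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0]) +
        M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]);
    if (std::fabs(det) < 1e-12) { return false; }
    for (int c = 0; c < 3; ++c)
    {
        double A[3][3];
        for (int r = 0; r < 3; ++r)
        {
            for (int k = 0; k < 3; ++k)
            {
                A[r][k] = (k == c ? rhs[r] : M[r][k]);
            }
        }
        d[c] =
            (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1]) -
             A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0]) +
             A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0])) / det;
    }
    return true;
}

// One smoothed Gauss-Newton step, equation (5): solve
// (J^T J + lambda * I) d = -J^T F and update p = (a, b, r). The residuals
// are F_i = sqrt((x_i - a)^2 + (y_i - b)^2) - r.
static bool DampedStep(std::vector<double> const& x,
    std::vector<double> const& y, double lambda, Circle& p)
{
    double JTJ[3][3] = {}, negJTF[3] = {};
    for (std::size_t i = 0; i < x.size(); ++i)
    {
        double dx = x[i] - p.a, dy = y[i] - p.b;
        double len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.0) { continue; }  // sample at the current center
        double F = len - p.r;
        double J[3] = { -dx / len, -dy / len, -1.0 };  // dF/da, dF/db, dF/dr
        for (int r = 0; r < 3; ++r)
        {
            negJTF[r] -= J[r] * F;
            for (int c = 0; c < 3; ++c) { JTJ[r][c] += J[r] * J[c]; }
        }
    }
    for (int r = 0; r < 3; ++r) { JTJ[r][r] += lambda; }  // smoothing, eq. (5)
    double d[3];
    if (!Solve3x3(JTJ, negJTF, d)) { return false; }
    p.a += d[0];  p.b += d[1];  p.r += d[2];
    return true;
}

int main()
{
    // Hypothetical noisy samples of the circle with center (1,2), radius 3.
    std::vector<double> x = { 4.01, 0.98, -1.99, 1.03, 3.12 };
    std::vector<double> y = { 2.02, 5.01, 1.97, -1.02, 4.05 };
    Circle p = { 0.0, 0.0, 1.0 };  // crude initial guess
    for (int i = 0; i < 32; ++i)
    {
        if (!DampedStep(x, y, 0.001, p)) { break; }
    }
    std::printf("a = %g, b = %g, r = %g\n", p.a, p.b, p.r);
    return 0;
}

In practice λ would be adapted per iteration as described above; a fixed λ is used here only to keep the sketch short.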

3 Affine Fitting of Points Using Height Fields

We have a set of measurements {(X_i, h_i)}_{i=1}^{m} for which X_i ∈ R^n are sampled independent variables and h_i ∈ R is a sampled dependent variable. The hypothesis is that h is related to X via an affine transformation h = A · X + b, where A is an n×1 vector of constants and b is a scalar constant. The goal is to estimate A and b from the samples. The choice of name h stresses that the measurement errors are in the direction of height above the plane containing the X measurements.

3.1 Fitting by a Line in 2 Dimensions

The measurements are {(x_i, h_i)}_{i=1}^{m} where x is an independent variable and h is a dependent variable. The affine transformation we want to estimate is h = ax + b, where a and b are scalars. This defines a line that best fits the samples in the sense that the sum of the squared errors between the h_i and the line values ax_i + b is minimized. Note that the error is measured only in the h-direction.

Define the error function for the least-squares minimization to be

    E(a, b) = \sum_{i=1}^{m} [(a x_i + b) - h_i]^2    (6)

This function is nonnegative and its graph is a paraboloid whose vertex occurs when the gradient satisfies ∇E(a, b) = (∂E/∂a, ∂E/∂b) = (0, 0). This leads to a system of two linear equations in a and b which can be easily solved. Precisely,

    0 = \partial E / \partial a = 2 \sum_{i=1}^{m} [(a x_i + b) - h_i] x_i
    0 = \partial E / \partial b = 2 \sum_{i=1}^{m} [(a x_i + b) - h_i]    (7)

and so

    \begin{pmatrix} \sum_{i=1}^{m} x_i^2 & \sum_{i=1}^{m} x_i \\ \sum_{i=1}^{m} x_i & m \end{pmatrix}
    \begin{pmatrix} a \\ b \end{pmatrix} =
    \begin{pmatrix} \sum_{i=1}^{m} x_i h_i \\ \sum_{i=1}^{m} h_i \end{pmatrix}    (8)

The system is solved by standard numerical algorithms. If implemented directly, this formulation can lead to an ill-conditioned linear system. To avoid this, you should first compute the averages x̄ = (Σ_{i=1}^{m} x_i)/m and h̄ = (Σ_{i=1}^{m} h_i)/m and subtract them from the data. The fitted line is of the form h = h̄ + ā(x − x̄) + b̄. The linear system of equations that determines the coefficients is

    \begin{pmatrix} \sum_{i=1}^{m} (x_i - \bar{x})^2 & 0 \\ 0 & m \end{pmatrix}
    \begin{pmatrix} \bar{a} \\ \bar{b} \end{pmatrix} =
    \begin{pmatrix} \sum_{i=1}^{m} (x_i - \bar{x})(h_i - \bar{h}) \\ 0 \end{pmatrix}    (9)

and has solution

    \bar{a} = \frac{\sum_{i=1}^{m} (x_i - \bar{x})(h_i - \bar{h})}{\sum_{i=1}^{m} (x_i - \bar{x})^2}, \qquad \bar{b} = 0    (10)

In terms of the original inputs, a = ā and b = h̄ − ā x̄.

3.1.1 Pseudocode for Fitting by a Line

Listing 1 contains pseudocode for fitting a height line to points in 2 dimensions.

Listing 1. Pseudocode for fitting a height line to points in 2 dimensions. The number of input points must be at least 2. The returned Boolean value is true as long as the denominator of equation (10) is positive; that is, when the points are not all the same point. An implementation in a slightly more general framework is ApprHeightLine2.h.

bool FitHeightLine(int numPoints, Vector2<Real> points[],
    Real& barX, Real& barH, Real& barA)
{
    // Compute the mean of the points.
    Vector2<Real> mean = { 0, 0 };
    for (int i = 0; i < numPoints; ++i)
    {
        mean += points[i];
    }
    mean /= numPoints;

    // Compute the linear system matrix and vector elements.
    Real xxSum = 0, xhSum = 0;
    for (int i = 0; i < numPoints; ++i)
    {
        Vector2<Real> diff = points[i] - mean;
        xxSum += diff[0] * diff[0];
        xhSum += diff[0] * diff[1];
    }

    // Solve the linear system.
    if (xxSum > 0)
    {
        // Compute the fitted line h(x) = barH + barA * (x - barX).
        barX = mean[0];
        barH = mean[1];
        barA = xhSum / xxSum;
        return true;
    }
    else
    {
        // The output is invalid. The points are all the same.
        barX = 0;
        barH = 0;
        barA = 0;
        return false;
    }
}
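The following pseudocode-style fragment, with hypothetical sample data, shows how Listing 1 would be called; the points lie near the line h = 2x + 1.

Vector2<Real> points[4] = { { 0, 1.1 }, { 1, 2.9 }, { 2, 5.2 }, { 3, 6.8 } };
Real barX, barH, barA;
if (FitHeightLine(4, points, barX, barH, barA))
{
    // The fitted line is h(x) = barH + barA * (x - barX). For these samples,
    // (barX, barH) = (1.5, 4.0) is the mean and barA = 1.94 is the slope.
}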

3.2 Fitting by a Plane in 3 Dimensions

The measurements are {(x_i, y_i, h_i)}_{i=1}^{m} where x and y are independent variables and h is a dependent variable. The affine transformation we want to estimate is h = a_0 x + a_1 y + b, where a_0, a_1 and b are scalars. This defines a plane that best fits the samples in the sense that the sum of the squared errors between the h_i and the plane values a_0 x_i + a_1 y_i + b is minimized. Note that the error is measured only in the h-direction.

Define the error function for the least-squares minimization to be

    E(a_0, a_1, b) = \sum_{i=1}^{m} [(a_0 x_i + a_1 y_i + b) - h_i]^2    (11)

This function is nonnegative and its graph is a hyperparaboloid whose vertex occurs when the gradient satisfies ∇E(a_0, a_1, b) = (∂E/∂a_0, ∂E/∂a_1, ∂E/∂b) = (0, 0, 0). This leads to a system of three linear equations in a_0, a_1 and b which can be easily solved. Precisely,

    0 = \partial E / \partial a_0 = 2 \sum_{i=1}^{m} [(a_0 x_i + a_1 y_i + b) - h_i] x_i
    0 = \partial E / \partial a_1 = 2 \sum_{i=1}^{m} [(a_0 x_i + a_1 y_i + b) - h_i] y_i
    0 = \partial E / \partial b = 2 \sum_{i=1}^{m} [(a_0 x_i + a_1 y_i + b) - h_i]    (12)

and so

    \begin{pmatrix}
    \sum_{i=1}^{m} x_i^2 & \sum_{i=1}^{m} x_i y_i & \sum_{i=1}^{m} x_i \\
    \sum_{i=1}^{m} x_i y_i & \sum_{i=1}^{m} y_i^2 & \sum_{i=1}^{m} y_i \\
    \sum_{i=1}^{m} x_i & \sum_{i=1}^{m} y_i & m
    \end{pmatrix}
    \begin{pmatrix} a_0 \\ a_1 \\ b \end{pmatrix} =
    \begin{pmatrix} \sum_{i=1}^{m} x_i h_i \\ \sum_{i=1}^{m} y_i h_i \\ \sum_{i=1}^{m} h_i \end{pmatrix}    (13)

The system is solved by standard numerical algorithms. If implemented directly, this formulation can lead to an ill-conditioned linear system. To avoid this, you should first compute the averages x̄ = (Σ_{i=1}^{m} x_i)/m, ȳ = (Σ_{i=1}^{m} y_i)/m and h̄ = (Σ_{i=1}^{m} h_i)/m and subtract them from the data. The fitted plane is of the form h = h̄ + ā_0(x − x̄) + ā_1(y − ȳ) + b̄. The linear system of equations that determines the coefficients is

    \begin{pmatrix}
    \sum_{i=1}^{m} (x_i - \bar{x})^2 & \sum_{i=1}^{m} (x_i - \bar{x})(y_i - \bar{y}) & 0 \\
    \sum_{i=1}^{m} (x_i - \bar{x})(y_i - \bar{y}) & \sum_{i=1}^{m} (y_i - \bar{y})^2 & 0 \\
    0 & 0 & m
    \end{pmatrix}
    \begin{pmatrix} \bar{a}_0 \\ \bar{a}_1 \\ \bar{b} \end{pmatrix} =
    \begin{pmatrix} \sum_{i=1}^{m} (h_i - \bar{h})(x_i - \bar{x}) \\ \sum_{i=1}^{m} (h_i - \bar{h})(y_i - \bar{y}) \\ 0 \end{pmatrix}    (14)

and has solution

    \bar{a}_0 = \frac{\ell_{11} r_0 - \ell_{01} r_1}{\ell_{00} \ell_{11} - \ell_{01}^2}, \qquad
    \bar{a}_1 = \frac{\ell_{00} r_1 - \ell_{01} r_0}{\ell_{00} \ell_{11} - \ell_{01}^2}, \qquad
    \bar{b} = 0    (15)

where \ell_{00}, \ell_{01}, \ell_{11} are the upper-left matrix entries of equation (14) and r_0, r_1 are the first two right-hand-side entries. In terms of the original inputs, a_0 = ā_0, a_1 = ā_1 and b = h̄ − ā_0 x̄ − ā_1 ȳ.

3.2.1 Pseudocode for Fitting by a Plane

Listing 2 contains pseudocode for fitting a height plane to points in 3 dimensions.

Listing 2. Pseudocode for fitting a height plane to points in 3 dimensions. The number of input points must be at least 3. The returned Boolean value is true as long as the matrix of the linear system has nonzero determinant. An implementation in a slightly more general framework is ApprHeightPlane3.h.

bool FitHeightPlane(int numPoints, Vector3<Real> points[],
    Real& barX, Real& barY, Real& barH, Real& barA0, Real& barA1)
{
    // Compute the mean of the points.
    Vector3<Real> mean = { 0, 0, 0 };
    for (int i = 0; i < numPoints; ++i)
    {
        mean += points[i];
    }
    mean /= numPoints;

    // Compute the linear system matrix and vector elements.
    Real xxSum = 0, xySum = 0, xhSum = 0, yySum = 0, yhSum = 0;
    for (int i = 0; i < numPoints; ++i)
    {
        Vector3<Real> diff = points[i] - mean;
        xxSum += diff[0] * diff[0];
        xySum += diff[0] * diff[1];
        xhSum += diff[0] * diff[2];
        yySum += diff[1] * diff[1];
        yhSum += diff[1] * diff[2];
    }

    // Solve the linear system.
    Real det = xxSum * yySum - xySum * xySum;
    if (det != 0)
    {
        // Compute the fitted plane h(x,y) = barH + barA0 * (x - barX) + barA1 * (y - barY).
        barX = mean[0];
        barY = mean[1];
        barH = mean[2];
        barA0 = (yySum * xhSum - xySum * yhSum) / det;
        barA1 = (xxSum * yhSum - xySum * xhSum) / det;
        return true;
    }
    else
    {
        // The output is invalid. The points are all the same or they are collinear.
        barX = 0;
        barY = 0;
        barH = 0;
        barA0 = 0;
        barA1 = 0;
        return false;
    }
}
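As with Listing 1, a short pseudocode-style fragment with hypothetical data shows the calling convention; the points lie near the plane h = 2x − y + 1.

Vector3<Real> points[4] = { { 0, 0, 1.0 }, { 1, 0, 3.1 }, { 0, 1, 0.1 }, { 1, 1, 2.0 } };
Real barX, barY, barH, barA0, barA1;
if (FitHeightPlane(4, points, barX, barY, barH, barA0, barA1))
{
    // The fitted plane is h(x,y) = barH + barA0 * (x - barX) + barA1 * (y - barY),
    // with barA0 near 2 and barA1 near -1 for these samples.
}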

3.3 Fitting by a Hyperplane in n + 1 Dimensions

The measurements are {(X_i, h_i)}_{i=1}^{m} where the n components of X are independent variables and h is a dependent variable. The affine transformation we want to estimate is h = A · X + b, where A is an n×1 vector of constants and b is a scalar constant. This defines a hyperplane that best fits the samples in the sense that the sum of the squared errors between the h_i and the hyperplane values A · X_i + b is minimized. Note that the error is measured only in the h-direction.

Define the error function for the least-squares minimization to be

    E(A, b) = \sum_{i=1}^{m} [(A \cdot X_i + b) - h_i]^2    (16)

This function is nonnegative and its graph is a hyperparaboloid whose vertex occurs when the gradient satisfies ∇E(A, b) = (∂E/∂A, ∂E/∂b) = (0, 0). This leads to a system of n + 1 linear equations in A and b which can be easily solved. Precisely,

    0 = \partial E / \partial A = 2 \sum_{i=1}^{m} [(A \cdot X_i + b) - h_i] X_i
    0 = \partial E / \partial b = 2 \sum_{i=1}^{m} [(A \cdot X_i + b) - h_i]    (17)

and so

    \begin{pmatrix} \sum_{i=1}^{m} X_i X_i^T & \sum_{i=1}^{m} X_i \\ \sum_{i=1}^{m} X_i^T & m \end{pmatrix}
    \begin{pmatrix} A \\ b \end{pmatrix} =
    \begin{pmatrix} \sum_{i=1}^{m} h_i X_i \\ \sum_{i=1}^{m} h_i \end{pmatrix}    (18)

The system is solved by standard numerical algorithms. If implemented directly, this formulation can lead to an ill-conditioned linear system. To avoid this, you should first compute the averages X̄ = (Σ_{i=1}^{m} X_i)/m and h̄ = (Σ_{i=1}^{m} h_i)/m and subtract them from the data. The fitted hyperplane is of the form h = h̄ + Ā · (X − X̄) + b̄. The linear system of equations that determines the coefficients is

    \begin{pmatrix} \sum_{i=1}^{m} (X_i - \bar{X})(X_i - \bar{X})^T & 0 \\ 0^T & m \end{pmatrix}
    \begin{pmatrix} \bar{A} \\ \bar{b} \end{pmatrix} =
    \begin{pmatrix} \sum_{i=1}^{m} (h_i - \bar{h})(X_i - \bar{X}) \\ 0 \end{pmatrix}    (19)

and has solution

    \bar{A} = \left( \sum_{i=1}^{m} (X_i - \bar{X})(X_i - \bar{X})^T \right)^{-1}
              \left( \sum_{i=1}^{m} (h_i - \bar{h})(X_i - \bar{X}) \right), \qquad \bar{b} = 0    (20)

In terms of the original inputs, A = Ā and b = h̄ − Ā · X̄.

3.3.1 Pseudocode for Fitting a Hyperplane

Listing 3 contains pseudocode for fitting a height hyperplane to points in n + 1 dimensions.

Listing 3. Pseudocode for fitting a height hyperplane to points in n + 1 dimensions. The number of input points must be at least n + 1. The returned Boolean value is true as long as the matrix of the linear system has nonzero determinant.

bool FitHeightHyperplane(int numPoints, Vector<n + 1, Real> points[],
    Vector<n, Real>& barX, Real& barH, Vector<n, Real>& barA)
{
    // Compute the mean of the points.
    Vector<n + 1, Real> mean = Vector<n + 1, Real>::ZERO;
    for (int i = 0; i < numPoints; ++i)
    {
        mean += points[i];
    }
    mean /= numPoints;

    // Compute the linear system matrix and vector elements. The function
    // Vector<n, Real> Head<n>(Vector<n + 1, Real> V) returns (V[0], ..., V[n-1]).
    Matrix<n, n, Real> L = Matrix<n, n, Real>::ZERO;
    Vector<n, Real> R = Vector<n, Real>::ZERO;
    for (int i = 0; i < numPoints; ++i)
    {
        Vector<n + 1, Real> diff = points[i] - mean;
        Vector<n, Real> XminusBarX = Head<n>(diff);
        Real HminusBarH = diff[n];
        L += OuterProduct(XminusBarX, XminusBarX);  // (X[i] - barX) * (X[i] - barX)^T
        R += HminusBarH * XminusBarX;
    }

    // Solve the linear system.
    Real det = Determinant(L);
    if (det != 0)
    {
        // Compute the fitted hyperplane h(X) = barH + Dot(barA, X - barX).
        barX = Head<n>(mean);
        barH = mean[n];
        barA = SolveLinearSystem(L, R);  // solve L * A = R
        return true;
    }
    else
    {
        // The output is invalid. The points appear not to live on a
        // hyperplane; they might live in an affine subspace of dimension
        // smaller than n.
        barX = Vector<n, Real>::ZERO;
        barH = 0;
        barA = Vector<n, Real>::ZERO;
        return false;
    }
}
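The matrix L in Listing 3 is symmetric and, for a well-posed fit, positive definite, so the call SolveLinearSystem(L, R) can be implemented with the Cholesky decomposition mentioned in Section 2. The following compilable sketch shows one way to do this using plain double-precision storage; it is an illustrative stand-in, not the GTE CholeskyDecomposition.h implementation.

#include <cmath>
#include <vector>

// Solve L * a = r for a symmetric positive definite matrix L (row-major,
// n-by-n) using a Cholesky factorization L = G * G^T. Returns false when L
// is not positive definite to working precision.
bool CholeskySolve(int n, std::vector<double> L, std::vector<double> r,
    std::vector<double>& a)
{
    // Factor: the lower-triangular G is computed in place of L.
    for (int j = 0; j < n; ++j)
    {
        for (int k = 0; k < j; ++k)
        {
            L[j * n + j] -= L[j * n + k] * L[j * n + k];
        }
        if (L[j * n + j] <= 0.0) { return false; }
        L[j * n + j] = std::sqrt(L[j * n + j]);
        for (int i = j + 1; i < n; ++i)
        {
            for (int k = 0; k < j; ++k)
            {
                L[i * n + j] -= L[i * n + k] * L[j * n + k];
            }
            L[i * n + j] /= L[j * n + j];
        }
    }

    // Forward substitution: solve G * y = r (y is stored in r).
    for (int i = 0; i < n; ++i)
    {
        for (int k = 0; k < i; ++k) { r[i] -= L[i * n + k] * r[k]; }
        r[i] /= L[i * n + i];
    }

    // Back substitution: solve G^T * a = y.
    a.assign(n, 0.0);
    for (int i = n - 1; i >= 0; --i)
    {
        double s = r[i];
        for (int k = i + 1; k < n; ++k) { s -= L[k * n + i] * a[k]; }
        a[i] = s / L[i * n + i];
    }
    return true;
}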

4 Affine Fitting of Points Using Orthogonal Regression

We have a set of measurements {X_i}_{i=1}^{m} for which X_i ∈ R^n are sampled independent variables. The hypothesis is that the points are sampled from a k-dimensional affine subspace in n-dimensional space. Such a space is referred to as a k-dimensional flat. The classic cases include fitting a line to points in n dimensions and fitting a plane to points in 3 dimensions. The latter is a special case of fitting a hyperplane, an (n − 1)-dimensional flat, to points in n dimensions.

In the height-field fitting algorithms, the least-squares errors were measured in a specified direction (the height direction). An alternative is to measure the errors in the direction perpendicular to the purported affine subspace. This approach is referred to as orthogonal regression.

4.1 Fitting by a Line [1 Dimension]

The algorithm may be applied to sample points {X_i}_{i=1}^{m} in any dimension n. Let the line have origin A and unit-length direction D, both n×1 vectors. Define Y_i = X_i − A, which can be written as Y_i = (D · Y_i) D + D_i^⊥, where D_i^⊥ is the perpendicular vector from X_i to its projection on the line. The squared length of this vector is |D_i^⊥|^2 = |Y_i − (D · Y_i) D|^2. The error function for the least-squares minimization is E(A, D) = Σ_{i=1}^{m} |D_i^⊥|^2. Two alternate forms for this function are

    E(A, D) = \sum_{i=1}^{m} Y_i^T (I - D D^T) Y_i    (21)

and

    E(A, D) = D^T \left( \sum_{i=1}^{m} \left[ (Y_i \cdot Y_i) I - Y_i Y_i^T \right] \right) D = D^T M D    (22)

where M is a positive semidefinite symmetric matrix that depends on A and the Y_i but not on D.

Compute the derivative of equation (21) with respect to A to obtain

    \frac{\partial E}{\partial A} = -2 (I - D D^T) \sum_{i=1}^{m} Y_i    (23)

At a minimum value of E, it is necessary that this derivative is zero, which it is when Σ_{i=1}^{m} Y_i = 0, implying A = (1/m) Σ_{i=1}^{m} X_i, the average of the sample points. In fact there are infinitely many solutions, A + sD for any scalar s. This is simply a statement that A is a point on the best-fit line, but any other point on the line may serve as the origin for that line.

Equation (22) is a quadratic form D^T M D whose minimum is the smallest eigenvalue of M, computed using standard eigensystem solvers. A corresponding unit-length eigenvector D completes our construction of the least-squares line. The covariance matrix of the input points is C = Σ_{i=1}^{m} Y_i Y_i^T. Defining δ = Σ_{i=1}^{m} Y_i · Y_i, we see that M = δI − C, where I is the identity matrix. Therefore, M and C have the same eigenspaces: if C V = λ V, then M V = (δ − λ) V. The eigenspace corresponding to the minimum eigenvalue of M is the same as the eigenspace corresponding to the maximum eigenvalue of C. In an implementation, it is sufficient to process C and avoid the additional cost to compute M.

4.1.1 Pseudocode for the General Case

Listing 4 contains pseudocode for fitting a line to points in n dimensions with n ≥ 2.

Listing 4. Pseudocode for fitting a line to points in n dimensions using orthogonal regression. The number of input points must be at least 2. The returned Boolean value is true as long as the covariance matrix has a 1-dimensional eigenspace for its maximum eigenvalue.

bool FitOrthogonalLine(int numPoints, Vector<n, Real> points[],
    Vector<n, Real>& origin, Vector<n, Real>& direction)
{
    // Compute the mean of the points.
    Vector<n, Real> mean = Vector<n, Real>::ZERO;
    for (int i = 0; i < numPoints; ++i)
    {
        mean += points[i];
    }
    mean /= numPoints;

    // Compute the covariance matrix of the points.
    Matrix<n, n, Real> C = Matrix<n, n, Real>::ZERO;
    for (int i = 0; i < numPoints; ++i)
    {
        Vector<n, Real> diff = points[i] - mean;
        C += OuterProduct(diff, diff);  // diff * diff^T
    }

    // Compute the eigenvalues and eigenvectors of C, where the eigenvalues are sorted
    // in nondecreasing order (eigenvalues[0] <= eigenvalues[1] <= ...).
    Real eigenvalues[n];
    Vector<n, Real> eigenvectors[n];
    SolveEigensystem(C, eigenvalues, eigenvectors);

    // Set the output information.
    origin = mean;
    direction = eigenvectors[n - 1];

    // The fitted line is unique when the maximum eigenvalue has multiplicity 1.
    return eigenvalues[n - 2] < eigenvalues[n - 1];
}

Specializations for 2 and 3 dimensions are simple, computing only the upper-triangular elements of C and passing them to specialized eigensolvers for 2 and 3 dimensions. Implementations are ApprOrthogonalLine2.h and ApprOrthogonalLine3.h.
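For the 2-dimensional specialization, the eigensolver can be replaced by closed-form formulas for a symmetric 2×2 matrix. The following compilable sketch, an illustration rather than the ApprOrthogonalLine2.h implementation, fits an orthogonal line to 2D points by computing an eigenvector for the maximum eigenvalue of C directly.

#include <cmath>

// Fit a line to 2D points using orthogonal regression. Outputs a point on
// the line (the mean) and a unit-length direction. Returns false when the
// direction is not unique (the two eigenvalues of C are equal).
bool FitOrthogonalLine2(int numPoints, double const x[], double const y[],
    double& meanX, double& meanY, double& dirX, double& dirY)
{
    meanX = 0.0;  meanY = 0.0;
    for (int i = 0; i < numPoints; ++i) { meanX += x[i];  meanY += y[i]; }
    meanX /= numPoints;  meanY /= numPoints;

    // Upper-triangular elements of the 2x2 covariance matrix C.
    double cxx = 0.0, cxy = 0.0, cyy = 0.0;
    for (int i = 0; i < numPoints; ++i)
    {
        double dx = x[i] - meanX, dy = y[i] - meanY;
        cxx += dx * dx;  cxy += dx * dy;  cyy += dy * dy;
    }

    // Closed-form maximum eigenvalue of C.
    double halfTrace = 0.5 * (cxx + cyy);
    double discr = std::sqrt(0.25 * (cxx - cyy) * (cxx - cyy) + cxy * cxy);
    double lambdaMax = halfTrace + discr;

    // (cxy, lambdaMax - cxx) is an eigenvector when cxy != 0; otherwise C is
    // already diagonal and the direction is a coordinate axis.
    if (cxy != 0.0) { dirX = cxy;  dirY = lambdaMax - cxx; }
    else { dirX = (cxx >= cyy ? 1.0 : 0.0);  dirY = (cxx >= cyy ? 0.0 : 1.0); }
    double length = std::sqrt(dirX * dirX + dirY * dirY);
    dirX /= length;  dirY /= length;

    // The fit is unique when the maximum eigenvalue has multiplicity 1.
    return discr > 0.0;
}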

4.2 Fitting by a Hyperplane [(n − 1) Dimensions]

The algorithm may be applied to sample points {X_i}_{i=1}^{m} in any dimension n. Let the hyperplane be defined implicitly by N · (X − A) = 0, where N is a unit-length normal to the hyperplane and A is a point on the hyperplane. Define Y_i = X_i − A, which can be written as Y_i = (N · Y_i) N + N_i^⊥, where N_i^⊥ is a vector that is perpendicular to N. The squared length of the projection of Y_i onto the normal line for the hyperplane is (N · Y_i)^2. The error function for the least-squares minimization is E(A, N) = Σ_{i=1}^{m} (N · Y_i)^2. Two alternate forms for this function are

    E(A, N) = \sum_{i=1}^{m} Y_i^T (N N^T) Y_i    (24)

and

    E(A, N) = N^T \left( \sum_{i=1}^{m} Y_i Y_i^T \right) N = N^T C N    (25)

where C = Σ_{i=1}^{m} Y_i Y_i^T is the covariance matrix of the Y_i.

Compute the derivative of equation (24) with respect to A to obtain

    \frac{\partial E}{\partial A} = -2 (N N^T) \sum_{i=1}^{m} Y_i    (26)

At a minimum value of E, it is necessary that this derivative is zero, which it is when Σ_{i=1}^{m} Y_i = 0, implying A = (1/m) Σ_{i=1}^{m} X_i, the average of the sample points. In fact there are infinitely many solutions, A + W, where W is any vector perpendicular to N. This is simply a statement that the average is on the best-fit hyperplane, but any other point on the hyperplane may serve as the origin for that hyperplane.

Equation (25) is a quadratic form N^T C N whose minimum is the smallest eigenvalue of C, computed using standard eigensystem solvers. A corresponding unit-length eigenvector N completes our construction of the least-squares hyperplane.

4.2.1 Pseudocode for the General Case

Listing 5 contains pseudocode for fitting a hyperplane to points in n dimensions with n ≥ 3.

Listing 5. Pseudocode for fitting a hyperplane to points in n dimensions using orthogonal regression. The number of input points must be at least n. The returned Boolean value is true as long as the covariance matrix has a 1-dimensional eigenspace for its minimum eigenvalue.

bool FitOrthogonalHyperplane(int numPoints, Vector<n, Real> points[],
    Vector<n, Real>& origin, Vector<n, Real>& normal)
{
    // Compute the mean of the points.
    Vector<n, Real> mean = Vector<n, Real>::ZERO;
    for (int i = 0; i < numPoints; ++i)
    {
        mean += points[i];
    }
    mean /= numPoints;

    // Compute the covariance matrix of the points.
    Matrix<n, n, Real> C = Matrix<n, n, Real>::ZERO;
    for (int i = 0; i < numPoints; ++i)
    {
        Vector<n, Real> diff = points[i] - mean;
        C += OuterProduct(diff, diff);  // diff * diff^T
    }

    // Compute the eigenvalues and eigenvectors of C, where the eigenvalues are sorted
    // in nondecreasing order (eigenvalues[0] <= eigenvalues[1] <= ...).
    Real eigenvalues[n];
    Vector<n, Real> eigenvectors[n];
    SolveEigensystem(C, eigenvalues, eigenvectors);

    // Set the output information.
    origin = mean;
    normal = eigenvectors[0];

    // The fitted hyperplane is unique when the minimum eigenvalue has multiplicity 1.
    return eigenvalues[0] < eigenvalues[1];
}
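A short pseudocode-style fragment with hypothetical data shows the calling convention for the common case n = 3; the points lie on the plane z = 2x − y, whose unit normal is parallel to (2, −1, −1).

Vector<3, Real> points[4] = { { 0, 0, 0 }, { 1, 0, 2 }, { 0, 1, -1 }, { 1, 1, 1 } };
Vector<3, Real> origin, normal;
if (FitOrthogonalHyperplane(4, points, origin, normal))
{
    // The fitted plane contains origin (the mean of the points) and has a
    // unit-length normal; for these samples, normal is parallel to (2, -1, -1).
}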

A specialization for 3 dimensions is simple, computing only the upper-triangular elements of C and passing them to a specialized eigensolver for 3 dimensions. An implementation is ApprOrthogonalPlane3.h.

4.3 Fitting by a Flat [k Dimensions]

Orthogonal regression to fit n-dimensional points by a line or by a hyperplane can be generalized to fitting by an affine subspace called a k-dimensional flat, where you may choose k such that 1 ≤ k ≤ n − 1. A line is a 1-flat and a hyperplane is an (n − 1)-flat.

For dimension n = 3, we fit with flats that are either lines (k = 1) or planes (k = 2). For dimensions n ≥ 4 and flat dimensions 1 ≤ k ≤ n − 1, the generalization of orthogonal regression is the following. The bases mentioned here are for the linear portion of the affine subspace; that is, the basis vectors are relative to an origin at a point A. The flat has an orthonormal basis {F_j}_{j=1}^{k} and the orthogonal complement has an orthonormal basis {P_j}_{j=1}^{n−k}. The union of the two bases is an orthonormal basis for R^n. Any input point X_i, 1 ≤ i ≤ m, can be represented by

    X_i = A + \sum_{j=1}^{k} f_{ij} F_j + \sum_{j=1}^{n-k} p_{ij} P_j
        = \left( A + \sum_{j=1}^{k} f_{ij} F_j \right) + \left( \sum_{j=1}^{n-k} p_{ij} P_j \right)    (27)

The left-parenthesized term is the portion of X_i that lives in the flat and the right-parenthesized term is the portion that is the deviation of X_i from the flat. The least-squares problem is about choosing the two bases so that the sum of squared lengths of the deviations is as small as possible.

Define Y_i = X_i − A. The basis coefficients are f_{ij} = F_j · Y_i and p_{ij} = P_j · Y_i. The squared length of the deviation is Σ_{j=1}^{n−k} p_{ij}^2. The error function for the least-squares minimization is the sum of the squared lengths for all inputs, E = Σ_{i=1}^{m} Σ_{j=1}^{n−k} p_{ij}^2. Two alternate forms for this function are

    E(A, P_1, \ldots, P_{n-k}) = \sum_{i=1}^{m} Y_i^T \left( \sum_{j=1}^{n-k} P_j P_j^T \right) Y_i    (28)

and

    E(A, P_1, \ldots, P_{n-k}) = \sum_{j=1}^{n-k} P_j^T \left( \sum_{i=1}^{m} Y_i Y_i^T \right) P_j
                              = \sum_{j=1}^{n-k} P_j^T C P_j    (29)

where C = Σ_{i=1}^{m} Y_i Y_i^T.

Compute the derivative of equation (28) with respect to A to obtain

    \frac{\partial E}{\partial A} = -2 \left( \sum_{j=1}^{n-k} P_j P_j^T \right) \sum_{i=1}^{m} Y_i    (30)

At a minimum value of E, it is necessary that this derivative is zero, which it is when Σ_{i=1}^{m} Y_i = 0, implying A = (1/m) Σ_{i=1}^{m} X_i, the average of the sample points.
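The minimization of equation (29) follows the same eigensystem argument as in Sections 4.1 and 4.2: a sum of quadratic forms over an orthonormal set is minimized by choosing P_1, ..., P_{n−k} to be eigenvectors of C associated with its n − k smallest eigenvalues, in which case F_1, ..., F_k are eigenvectors associated with the k largest eigenvalues. The following pseudocode, written in the style of Listings 4 and 5 as an illustration (not copied from the GTE source), assembles the fitted flat.

bool FitOrthogonalFlat(int numPoints, Vector<n, Real> points[], int k,
    Vector<n, Real>& origin, Vector<n, Real> flatBasis[])  // flatBasis has k elements
{
    // Compute the mean of the points.
    Vector<n, Real> mean = Vector<n, Real>::ZERO;
    for (int i = 0; i < numPoints; ++i)
    {
        mean += points[i];
    }
    mean /= numPoints;

    // Compute the covariance matrix of the points.
    Matrix<n, n, Real> C = Matrix<n, n, Real>::ZERO;
    for (int i = 0; i < numPoints; ++i)
    {
        Vector<n, Real> diff = points[i] - mean;
        C += OuterProduct(diff, diff);  // diff * diff^T
    }

    // Compute the eigenvalues and eigenvectors of C, where the eigenvalues are sorted
    // in nondecreasing order (eigenvalues[0] <= eigenvalues[1] <= ...).
    Real eigenvalues[n];
    Vector<n, Real> eigenvectors[n];
    SolveEigensystem(C, eigenvalues, eigenvectors);

    // The flat contains the mean. Its basis consists of the eigenvectors
    // associated with the k largest eigenvalues; the remaining n - k
    // eigenvectors span the orthogonal complement.
    origin = mean;
    for (int j = 0; j < k; ++j)
    {
        flatBasis[j] = eigenvectors[n - 1 - j];
    }

    // The fitted flat is unique when the (n-k)th and (n-k+1)st smallest
    // eigenvalues are distinct.
    return eigenvalues[n - k - 1] < eigenvalues[n - k];
}

For k = 1 and k = n − 1 the uniqueness test reduces to the ones in Listings 4 and 5.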

