Total Least Squares Fitting of Bézier and B-Spline Curves to Ordered Data


C.F. Borges and T.A. Pastva, Total Least Squares Fitting of Bézier and B-Spline Curves to Ordered Data. Computer Aided Geometric Design. Calhoun: The NPS Institutional Archive, Faculty and Researcher Publications, 2001. http://hdl.handle.net/10945/38066

Computer Aided Geometric Design 19 (2002) 275–289
www.elsevier.com/locate/comaid

Total least squares fitting of Bézier and B-spline curves to ordered data

Carlos F. Borges*, Tim Pastva
Naval Postgraduate School, Code NA/BC, 93943 Monterey, CA, USA

Received 1 October 2000; received in revised form 1 November 2001

Abstract

We begin by considering the problem of fitting a single Bézier curve segment to a set of ordered data so that the error is minimized in the total least squares sense. We develop an algorithm for applying the Gauss–Newton method to this problem with a direct method for evaluating the Jacobian based on implicitly differentiating a pseudo-inverse. We then demonstrate the simple extension of this algorithm to B-spline curves. We present some experimental results for both cases. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Data fitting; Total least-squares; B-splines

1. Introduction

A degree $n$ Bézier curve segment is a parametric polynomial curve whose shape is determined by an ordered set of control points $(x_j, y_j)$ for $j = 0, 1, 2, \ldots, n$. In particular, for $0 \le t \le 1$, the components of the curve are given by

$$x(t) = \sum_{j=0}^{n} B_j^n(t)\,x_j \qquad \text{and} \qquad y(t) = \sum_{j=0}^{n} B_j^n(t)\,y_j,$$

where $B_j^n(t)$ is the $j$th Bernstein polynomial of degree $n$, that is,

$$B_j^n(t) = \binom{n}{j}\, t^j (1-t)^{n-j}.$$

* Corresponding author. E-mail address: borges@nps.navy.mil (C.F. Borges).
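The paper itself contains no code, but the machinery is compact enough to sketch numerically. The short Python/NumPy pieces interspersed below are illustrative sketches under stated assumptions, not the authors' implementation, and all function names are ours. A first piece evaluates the full degree-$n$ Bernstein basis at a vector of parameter values, which is exactly the Bernstein matrix $B_n(t)$ of Definition 1 below:

```python
import numpy as np
from scipy.special import comb

def bernstein_matrix(t, n):
    """Bernstein matrix of degree n: element (i, j) is B_j^n(t_i)."""
    t = np.asarray(t, dtype=float).reshape(-1, 1)  # nodes as a column vector
    j = np.arange(n + 1)                           # basis indices 0, ..., n
    return comb(n, j) * t**j * (1.0 - t)**(n - j)
```

As a quick sanity check, the rows of `bernstein_matrix(np.linspace(0, 1, 5), 2)` each sum to 1, reflecting the partition-of-unity property of the Bernstein basis.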

The problem we are interested in solving can be stated as follows. Given an ordered set of data points $\{(u_i, v_i)\}_{i=1}^{m}$ and a non-negative integer $n \le m$, find nodes $\{t_i\}_{i=1}^{m}$ and control points $\{(x_j, y_j)\}_{j=0}^{n}$ that minimize

$$\sum_{i=1}^{m}\left[\left(u_i - \sum_{j=0}^{n} B_j^n(t_i)\,x_j\right)^{2} + \left(v_i - \sum_{j=0}^{n} B_j^n(t_i)\,y_j\right)^{2}\right]. \tag{1}$$

In short, we are looking for the best total least-squares fit of a Bézier curve segment to a set of ordered data points in the plane.

We will find it much easier to proceed if we recast the problem in a more convenient linear algebra setting. To this end we introduce the following definition:

Definition 1. Given a vector $t \in \mathbb{R}^m$ and a non-negative integer $n$ we define the Bernstein matrix of degree $n$, which we denote by $B_n(t)$, to be the real $m \times (n+1)$ matrix whose $i,j$ element is $B_j^n(t_i)$. In particular,

$$B_n(t) = \begin{bmatrix} B_0^n(t_1) & B_1^n(t_1) & \cdots & B_n^n(t_1) \\ B_0^n(t_2) & B_1^n(t_2) & \cdots & B_n^n(t_2) \\ \vdots & & & \vdots \\ B_0^n(t_m) & B_1^n(t_m) & \cdots & B_n^n(t_m) \end{bmatrix}.$$

Be warned that for ease of notation we will generally omit the arguments of matrix functions when no confusion is possible.

This definition allows us to rewrite the objective function in expression (1) as follows:

$$\left\| \begin{bmatrix} u \\ v \end{bmatrix} - \begin{bmatrix} B_n & \\ & B_n \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \right\|_2^2, \tag{2}$$

where $x$, $y$, $u$, and $v$ are vectors whose elements are the $x_i$, $y_i$, $u_i$, and $v_i$ respectively.

It is of particular interest that the variables in this problem are of two distinct types: those that appear linearly (the control points) and those that appear non-linearly (the nodes). Note that for a fixed $t$ the optimal associated control points are found by solving a linear least squares problem. Formally we may write

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} B_n^- & \\ & B_n^- \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}, \tag{3}$$

where $B_n^-$ is a generalized inverse (see (Rao and Mitra, 1971)) of $B_n$. Specifically, $B_n^-$ is any $(n+1) \times m$ matrix function satisfying the following equations:

$$B_n B_n^- B_n = B_n, \tag{4}$$

$$(B_n B_n^-)^T = B_n B_n^-. \tag{5}$$

We note that the Moore–Penrose generalized inverse, $B_n^{\dagger}$, is one such matrix although there can be others.¹

¹ This is a formal derivation and it is important to note that one should not form generalized inverses as a method for computing solutions to linear least-squares problems.
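For a fixed $t$, (3) says the optimal control points solve two ordinary linear least-squares problems sharing the coefficient matrix $B_n(t)$. In the spirit of the footnote, a sketch should not form a generalized inverse explicitly; here `np.linalg.lstsq` (built on an orthogonal factorization) stands in for it:

```python
def control_points(t, u, v, n):
    """Optimal control points for fixed nodes t, per Eq. (3)."""
    B = bernstein_matrix(t, n)
    # One factorization serves both coordinates: stack u and v as columns.
    xy, *_ = np.linalg.lstsq(B, np.column_stack([u, v]), rcond=None)
    return xy[:, 0], xy[:, 1]  # x and y control-point vectors
```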

This structure leads to a useful approach to solving the problem. In particular, one can substitute (3) into the objective function (2) and formally uncouple the variables. It is then possible to solve for the optimal $t$ independently of the control points by minimizing the variable projection functional

$$r(t) = \left\| \begin{bmatrix} u \\ v \end{bmatrix} - \begin{bmatrix} B_n B_n^- & \\ & B_n B_n^- \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} \right\|_2^2. \tag{6}$$

It is important to note that $r(t) = \mathbf{r}^T \mathbf{r}$, where the residual vector, $\mathbf{r}$, is also a function of $t$ and is given by

$$\mathbf{r}(t) = \begin{bmatrix} P_{B_n}^{\perp} u \\ P_{B_n}^{\perp} v \end{bmatrix}, \tag{7}$$

where $P_{B_n} = B_n B_n^-$ is the orthogonal projector onto the range of $B_n$ and $P_{B_n}^{\perp} = I - P_{B_n}$ is the orthogonal projector onto the null-space of $B_n^T$. Note that both of these are also matrix functions although we have suppressed their arguments.

1.1. Numerical evaluation of r(t)

From a computational standpoint, we may reliably evaluate the variable projection functional by exploiting the QR factorization (see (Golub and Van Loan, 1996)) of the Bernstein matrix. In particular, there exists a permutation matrix $\Pi$ such that

$$B_n \Pi = [Q_1 \;\; Q_2] \begin{bmatrix} R_{1,1} & R_{1,2} \\ 0 & 0 \end{bmatrix},$$

where $[Q_1 \;\; Q_2]$ is an $m \times m$ orthogonal matrix and $R_{1,1}$ is $r \times r$, upper-triangular, and invertible, where $r = \operatorname{rank}(B_n)$. Note that $R_{1,2}$ vanishes whenever $B_n$ is full rank. It is easily verified that

$$B_n^- = \Pi \begin{bmatrix} R_{1,1}^{-1} Q_1^T \\ 0 \end{bmatrix} \tag{8}$$

satisfies Eqs. (4) and (5), from which it follows directly that

$$P_{B_n} = Q_1 Q_1^T, \qquad P_{B_n}^{\perp} = Q_2 Q_2^T,$$

whence

$$r(t) = \left\| Q_2^T u \right\|_2^2 + \left\| Q_2^T v \right\|_2^2.$$
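The QR-based evaluation translates directly into code. The sketch below (again our own names, building on `bernstein_matrix` above) uses `scipy.linalg.qr` with column pivoting for the factorization $B_n\Pi = QR$, reads the numerical rank off the diagonal of $R$, and returns both the residual vector (7) and the functional $r(t)$; the rank tolerance is a conventional choice of ours, not taken from the paper:

```python
from scipy.linalg import qr

def _null_space_basis(B):
    """Q2 from the pivoted QR of B: an orthonormal basis for null(B^T)."""
    Q, R, _ = qr(B, pivoting=True)
    tol = np.finfo(float).eps * max(B.shape) * abs(R[0, 0])
    rank = int(np.sum(np.abs(np.diag(R)) > tol))
    return Q[:, rank:]

def residual_vector(t, u, v, n):
    """Residual vector (7): projections of u and v onto null(B_n^T)."""
    Q2 = _null_space_basis(bernstein_matrix(t, n))
    return np.concatenate([Q2 @ (Q2.T @ u), Q2 @ (Q2.T @ v)])

def vp_residual(t, u, v, n):
    """Variable projection functional r(t) = ||Q2^T u||^2 + ||Q2^T v||^2."""
    Q2 = _null_space_basis(bernstein_matrix(t, n))
    return np.sum((Q2.T @ u) ** 2) + np.sum((Q2.T @ v) ** 2)
```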

2. Minimizing the variable projection functional

In this section we will concern ourselves with minimizing the variable projection functional, which amounts to solving a non-linear least-squares problem. Although a variety of methods exist for such problems, we will explore the Gauss–Newton approach.

The Gauss–Newton approach involves using an affine model at the current point $t_k$ as follows:

$$m_k(t) = \mathbf{r}(t_k) + J(t_k)(t - t_k),$$

where the $i,j$ element of the Jacobian matrix is

$$J(t)_{i,j} = \frac{\partial r_i(t)}{\partial t_j}.$$

Minimizing $m_k(t)$ yields

$$t_{k+1} = t_k - J^{\dagger}(t_k)\,\mathbf{r}(t_k).$$

For a variety of reasons it is prudent to use some form of stepsize control, which leads to the following algorithm:

$$t_{k+1} = t_k - \alpha_k J^{\dagger}(t_k)\,\mathbf{r}(t_k),$$

where $-J^{\dagger}(t_k)\,\mathbf{r}(t_k)$ gives a descent direction and $\alpha_k$ is usually chosen to guarantee that the step does in fact decrease the residual.

The critical step for us is computing the Jacobian since this involves differentiating an expression involving a generalized inverse of the Bernstein matrix. In particular, the $j$th column of the Jacobian matrix is given by

$$\frac{\partial \mathbf{r}(t)}{\partial t_j} = \begin{bmatrix} \dfrac{\partial P_{B_n}^{\perp}}{\partial t_j}\, u \\[2mm] \dfrac{\partial P_{B_n}^{\perp}}{\partial t_j}\, v \end{bmatrix}.$$

Notice that the Jacobian has an upper and a lower half, and furthermore, that to evaluate them we need to be able to evaluate

$$\frac{\partial P_{B_n}^{\perp}}{\partial t_j}\, x,$$

where $x$ can be either $u$ or $v$. It is essential therefore that we are able to differentiate a generalized inverse, and we will find the following theorem quite useful in this regard (see also (Hanson and Lawson, 1969) or (Golub and Pereyra, 1973)).

Theorem 1. Let $A$ be an $m \times n$ matrix function and let $A^-$ be an $n \times m$ matrix function such that $A A^- A = A$ and $(A A^-)^T = A A^-$. Then

$$\frac{\partial P_A^{\perp}}{\partial t_k} = -\left[ P_A^{\perp}\, \frac{\partial A}{\partial t_k}\, A^- + \left( P_A^{\perp}\, \frac{\partial A}{\partial t_k}\, A^- \right)^{T} \right]. \tag{9}$$

Proof. First we note that by the product rule

$$\frac{\partial}{\partial t_k}(P_A A) = \frac{\partial P_A}{\partial t_k}\, A + P_A\, \frac{\partial A}{\partial t_k},$$

and since $P_A A = A$ we have

$$\frac{\partial}{\partial t_k}(P_A A) = \frac{\partial A}{\partial t_k},$$

whence

$$\frac{\partial P_A}{\partial t_k}\, A = \frac{\partial A}{\partial t_k} - P_A\, \frac{\partial A}{\partial t_k} = P_A^{\perp}\, \frac{\partial A}{\partial t_k}.$$

Multiplying both sides on the right by $A^-$ and using the fact that $P_A = A A^-$ yields

$$\frac{\partial P_A}{\partial t_k}\, P_A = P_A^{\perp}\, \frac{\partial A}{\partial t_k}\, A^-.$$

Transposing both sides and invoking the symmetry of $P_A$ we also have

$$P_A\, \frac{\partial P_A}{\partial t_k} = \left( P_A^{\perp}\, \frac{\partial A}{\partial t_k}\, A^- \right)^{T}.$$

Now, since $P_A$ is idempotent,

$$\frac{\partial P_A}{\partial t_k} = \frac{\partial P_A^2}{\partial t_k} = \frac{\partial P_A}{\partial t_k}\, P_A + P_A\, \frac{\partial P_A}{\partial t_k} = P_A^{\perp}\, \frac{\partial A}{\partial t_k}\, A^- + \left( P_A^{\perp}\, \frac{\partial A}{\partial t_k}\, A^- \right)^{T}.$$

Finally, we note that $P_A^{\perp} = I - P_A$ and hence

$$\frac{\partial P_A^{\perp}}{\partial t_k} = -\frac{\partial P_A}{\partial t_k},$$

from which the result follows. □

Applying this to our case yields

$$\frac{\partial P_{B_n}^{\perp}}{\partial t_j} = -\left[ Q_2 Q_2^T\, \frac{\partial B_n}{\partial t_j}\, \Pi \begin{bmatrix} R_{1,1}^{-1} Q_1^T \\ 0 \end{bmatrix} + \left( Q_2 Q_2^T\, \frac{\partial B_n}{\partial t_j}\, \Pi \begin{bmatrix} R_{1,1}^{-1} Q_1^T \\ 0 \end{bmatrix} \right)^{T} \right].$$

It is clear that $\partial B_n / \partial t_j$ is all zeros with the exception of its $j$th row, which is

$$\frac{\partial}{\partial t_j}\left[ B_0^n(t_j) \;\; B_1^n(t_j) \;\; \ldots \;\; B_n^n(t_j) \right],$$

and this structure may be exploited to reduce the total number of operations required to evaluate the Jacobian. In particular, we can simultaneously compute all of the columns in the following way. Let $\dot{B}_n(t)$ be the derivative of the Bernstein matrix, which can be easily computed with

$$\dot{B}_n(t) = n\left( \left[\, \mathbf{0} \;\; B_{n-1}(t) \,\right] - \left[\, B_{n-1}(t) \;\; \mathbf{0} \,\right] \right),$$

where $\mathbf{0}$ is a single column of zeros. Then the Jacobian is given by

$$J = -\begin{bmatrix} Q_2 Q_2^T \operatorname{diag}(\dot{B}_n P u) + P^T \dot{B}_n^T \operatorname{diag}(Q_2 Q_2^T u) \\ Q_2 Q_2^T \operatorname{diag}(\dot{B}_n P v) + P^T \dot{B}_n^T \operatorname{diag}(Q_2 Q_2^T v) \end{bmatrix}, \tag{10}$$

where

$$P = B_n^- = \Pi \begin{bmatrix} R_{1,1}^{-1} Q_1^T \\ 0 \end{bmatrix}.$$

Careful implementation of this formula is important to maximize economies.
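Formula (10) translates into a few lines once the derivative matrix $\dot{B}_n$ is available. The sketch below assumes the full-rank case (so $\Pi = I$ and $P = R_{1,1}^{-1} Q_1^T$) and exploits the fact that multiplying a matrix elementwise against a broadcast vector $w$ realizes a right-product with $\operatorname{diag}(w)$; as before, the names are ours:

```python
def bernstein_derivative_matrix(t, n):
    """Derivative matrix: element (i, j) is (d/dt) B_j^n(t) at t = t_i."""
    t = np.asarray(t, dtype=float)
    Bm1 = bernstein_matrix(t, n - 1)          # degree n-1 Bernstein matrix
    z = np.zeros((t.size, 1))
    return n * (np.hstack([z, Bm1]) - np.hstack([Bm1, z]))

def vp_jacobian(t, u, v, n):
    """Jacobian of the residual vector (7) via formula (10), full-rank case."""
    B = bernstein_matrix(t, n)
    Q, R = np.linalg.qr(B, mode='complete')   # unpivoted QR; assumes full rank
    Q1, Q2 = Q[:, :n + 1], Q[:, n + 1:]
    P = np.linalg.solve(R[:n + 1], Q1.T)      # P = R11^{-1} Q1^T
    PP = Q2 @ Q2.T                            # projector onto null(B^T)
    Bdot = bernstein_derivative_matrix(t, n)
    Ju = PP * (Bdot @ (P @ u)) + (P.T @ Bdot.T) * (PP @ u)
    Jv = PP * (Bdot @ (P @ v)) + (P.T @ Bdot.T) * (PP @ v)
    return -np.vstack([Ju, Jv])
```

A finite-difference check of a single column of this Jacobian against $\partial \mathbf{r}/\partial t_j$ is a cheap way to validate an implementation.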

3. Implementation details

As we are fitting a single Bézier segment, the problem may be simplified, without loss of generality, by fixing the nodes associated with the first and last data points to occur at the ends of the curve segment; in particular, we set $t_1 = 0$ and $t_m = 1$ respectively. This has the effect of preventing the curve segment from growing excessively outside the region of the data set. Although it does not force the endpoints of the curve segment to coincide with the first and last points of the data set, it does restrict them to be tangents to circles centered at these points.

Indeed, experience shows that failure to enforce this restriction generally leads to serious problems, as the curve segment tends to expand without bound. With this restriction it is useful to reduce the dimension of the search vector to $m-2$ by deleting the first and last elements ($t_1$ and $t_m$). The Jacobian matrix becomes $(2m) \times (m-2)$ and may be computed by replacing the first and last rows of $\dot{B}_n$ with zeros, then evaluating formula (10), and finally deleting the first and last columns from the resulting matrix.

In our implementation we have used the stepsize control parameter $\alpha_k$ to enforce the condition that the nodal values satisfy $0 \le t_i \le 1$ for all $i$. If taking a full step would violate this constraint, then we set $\alpha_k$ to a value that will preserve it. Having done this, we then use a primitive line search before taking the step. In particular, we check to see if the current step will in fact decrease the residual. If not, then we repeatedly halve $\alpha_k$ until it does.

As with any iterative method, a good initial guess can be most valuable. The simplest is the chord-length parameterization. In particular, we let $d_i = [u_i \;\; v_i]^T$, set $t_1 = 0$, and make our initial guess for the nodal values

$$t_i = \frac{\sum_{j=2}^{i} \|d_j - d_{j-1}\|_2}{\sum_{j=2}^{m} \|d_j - d_{j-1}\|_2} \qquad \text{for } i = 2, 3, \ldots, m.$$

This works reasonably well, but we have noted better results using an affine invariant norm (see (Nielson, 1987)) in place of the 2-norm in the formula given above. In particular, we use the norm defined by

$$\|x\|_V^2 = x^T V x,$$

where $V$ is the inverse of the data covariance matrix, which is given by

$$V^{-1} = \frac{1}{m} \sum_{i=1}^{m} d_i d_i^T.$$

Another excellent method is the affine invariant angle metric which is developed in the very elegant paper of Foley and Nielson (1989). It is somewhat more involved in its computation but worked very well in our experiments.

Finally, we use a traditional test for convergence and stop when

$$\frac{|r_{\text{current}} - r_{\text{last}}|}{r_{\text{last}}} \le \text{tol},$$

where $r_{\text{current}}$ is the squared residual at the current step, $r_{\text{last}}$ is the squared residual at the last step, and tol is an arbitrary tolerance. In order to push the algorithm to the extreme we set tol $= 10^{-12}$ for the examples presented in the next section.
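Putting the pieces together, a minimal Gauss–Newton driver along the lines described in this section might look as follows. It reuses the hypothetical helpers from the earlier sketches, pins $t_1 = 0$ and $t_m = 1$, searches only over interior nodes (dropping the first and last Jacobian columns, which is equivalent to zeroing the first and last rows of $\dot{B}_n$), and uses repeated halving for both the feasibility cap on $\alpha_k$ and the primitive line search. This is a sketch of the procedure as described above, not the authors' code:

```python
def fit_bezier(u, v, n, tol=1e-12, max_iter=500):
    """Gauss-Newton fit of a degree-n Bezier segment to ordered data (u, v)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    # Chord-length initial nodes, with t_1 = 0 and t_m = 1 pinned.
    chords = np.linalg.norm(np.diff(np.column_stack([u, v]), axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(chords) / chords.sum()])
    r_last = vp_residual(t, u, v, n)
    for _ in range(max_iter):
        r = residual_vector(t, u, v, n)
        # Dropping the first and last columns of J is equivalent to zeroing
        # the first and last rows of Bdot: search over interior nodes only.
        J = vp_jacobian(t, u, v, n)[:, 1:-1]
        dt, *_ = np.linalg.lstsq(J, r, rcond=None)   # Gauss-Newton direction
        step = np.concatenate([[0.0], -dt, [0.0]])   # t <- t - alpha * J^+ r
        alpha, r_new, tn = 1.0, r_last, t
        for _ in range(60):   # halve alpha until feasible and decreasing
            tn = t + alpha * step
            if tn.min() >= 0.0 and tn.max() <= 1.0:
                r_new = vp_residual(tn, u, v, n)
                if r_new < r_last:
                    break
            alpha /= 2.0
        else:
            break             # no acceptable step was found; stop
        t = tn
        if (r_last - r_new) / r_last <= tol:
            break
        r_last = r_new
    x, y = control_points(t, u, v, n)
    return t, x, y
```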

4. Bézier curve examples

In our first example we consider fitting a data set of 23 points taken from an experiment on a reacting chemical system which may have multiple steady states (see (Marin and Smith, 1994)). The experiment samples the steady-state oxidation rate $R$ achieved by a catalytic system for an input concentration of carbon monoxide $C_{\text{CO}}$. The data is in log-log format and we show the results of fitting with segments of different degree in Figs. 1–6. The sixth-degree fitting (Fig. 6) is excellent and shows the proper ordering of the data from the first data point, $d_1$ at the left, to the final data point, $d_{23}$ at the lower right. Note that even with this very tight tolerance the convergence is fast in all but the fifth-degree fitting (Fig. 5). If tol is relaxed to $10^{-6}$ then the fifth-degree fit converges in only 65 iterations with a final squared residual of $0.331 \times 10^{-2}$.

Of course, there may be local minima, as is evidenced in the fitting of the fourth-degree curve. In this case the algorithm converges to a local minimum that does not preserve the nodal ordering of the points (see Fig. 3). If, however, we use the affine invariant angle metric (see (Foley and Nielson, 1989)) to generate our initial guess, then we converge to a different fit (Fig. 4) which does preserve the nodal ordering and has a squared residual that is about half as large.

The second set of fittings are to a Dillner 20-32-C low Reynolds number airfoil (all of the airfoil data used in this paper can be found at the UIUC Airfoil Data Site, accessible at http://amber.aae.uiuc.edu/~m-selig/ads.html). The results of this experiment appear in Figs. 7 and 8. The airfoil is represented by 35 points ordered counterclockwise starting from the top of the trailing edge. This airfoil has a relatively simple geometry and is easily fitted by the algorithm.

Fig. 1. Reaction rate data fitted with a third degree Bézier curve. The final squared residual was $0.567 \times 10^{-1}$, which took 22 iterations. Note that a loop has appeared and the nodal points do not preserve the ordering of the data.

Fig. 2. Reaction rate data fitted with a fourth degree Bézier curve. The final squared residual was $0.136 \times 10^{-1}$, which took 40 iterations. Note that the nodal points do not preserve the ordering of the data.

Fig. 3. A close-up of the fourth degree fitting. You can see that we have come to a local minimum, as the nodal points associated with $d_{10}$ and $d_{12}$ are not the closest points on the curve.

Fig. 4. Reaction rate data fitted with a fourth degree Bézier curve using a different starting guess yields a final squared residual of $0.602 \times 10^{-2}$ and does preserve the nodal ordering.

Fig. 5. Reaction rate data fitted with a fifth degree Bézier curve. The final squared residual was $0.330 \times 10^{-2}$, which took 2371 iterations. Note the appearance of a cusp.

Fig. 6. Reaction rate data fitted with a sixth degree Bézier curve. The final squared residual was $0.234 \times 10^{-2}$, which took 35 iterations.

Fig. 7. Dillner 20-32-C low Reynolds number airfoil fitted with a single fifth degree Bézier curve segment. The final squared residual was $0.210 \times 10^{-4}$, which took 22 iterations.

Fig. 8. Dillner 20-32-C low Reynolds number airfoil fitted with a single sixth degree Bézier curve segment. The final squared residual was $0.113 \times 10^{-4}$, which took 44 iterations.

The final set of fittings are to a NACA/Munk M-27 airfoil. This airfoil is represented by 33 points, also ordered counterclockwise starting from the top of the trailing edge. This airfoil has a more involved geometry than the first, with more changes in concavity. Although the fifth-degree fitting has a small residual, it is not a good representation of the airfoil (Fig. 9). We can also see that convergence is relatively slow. However, when we use a sixth-degree segment (which does have sufficient flexibility to model the changes in concavity) we get a far more appealing fit (Fig. 10) and much faster convergence.

The speed of convergence can be improved by using a more sophisticated line search approach before taking each step. In some cases the higher per-step costs are offset by a substantial reduction in the number of iterations. We tested this by changing the step strategy to the following: if the Newton step decreases the residual, take it as is; if not, then perform a line search to find the optimal stepsize in the current direction and take that step. For the majority of the experiments presented here, this change in strategy causes little or no change in the speed of convergence because it seldom comes into play (the full Newton step usually gives a decrease). However, when applied to the fifth-degree fit of the reaction rate data it reduces the number of iterations from 2371 to 180. For the NACA/Munk M-27 airfoil fifth-degree fit, this approach reduces the number of iterations from 689 to 201.
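This alternative step strategy is easy to retrofit onto the driver sketched in Section 3: keep the full Gauss–Newton step whenever it already decreases the residual, and otherwise minimize along the step direction. A possible sketch, with `scipy.optimize.minimize_scalar` as our stand-in for the one-dimensional search (the paper does not specify one) and clipping as a crude feasibility guard of our own:

```python
from scipy.optimize import minimize_scalar

def improved_step(t, step, u, v, n, r_last):
    """Full Gauss-Newton step if it decreases the residual; else line search."""
    if vp_residual(t + step, u, v, n) < r_last:
        return t + step
    # Minimize the residual over the stepsize alpha in (0, 1].
    phi = lambda a: vp_residual(np.clip(t + a * step, 0.0, 1.0), u, v, n)
    best = minimize_scalar(phi, bounds=(0.0, 1.0), method='bounded')
    return np.clip(t + best.x * step, 0.0, 1.0)
```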

Fig. 9. NACA/Munk M-27 airfoil fitted with a single fifth degree Bézier curve segment. The final squared residual was $0.114 \times 10^{-2}$, which took 689 iterations. This is a rather unappealing fit, as a cusp has appeared at the leading edge.

Fig. 10. NACA/Munk M-27 airfoil fitted with a single sixth degree Bézier curve segment. The final squared residual was $0.745 \times 10^{-6}$, which took 13 iterations.

Fig. 11. Reaction rate data fitted with a third order B-spline curve with knot sequence 0, 0, 0, 1, 2, 2, 2. The final squared residual was $0.361 \times 10^{-1}$, which took 28 iterations.

Fig. 12. Reaction rate data fitted with a third order B-spline curve with knot sequence 0, 0, 0, 1.95, 2, 2, 2. The final squared residual was $0.102 \times 10^{-1}$, which took 19 iterations.

Fig. 13. NACA/Munk M-27 airfoil fitted with a third order B-spline curve with knot sequence 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. The final squared residual was $0.649 \times 10^{-3}$, which took 22 iterations.

Fig. 14. NACA/Munk M-27 airfoil fitted with a third order B-spline curve with knot sequence 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 9. The final squared residual was $0.719 \times 10^{-5}$, which took 43 iterations (the same result is achieved in 31 iterations using a warm restart with the nodes from the fitting in Fig. 13).

5. Extension to B-spline curves

It is very straightforward to extend the preceding algorithm to the more general class of B-spline curves. Given a positive integer $n$ and a non-decreasing knot sequence $k_0, k_1, \ldots, k_L$, where $L \ge 2n-1$, we let $N_i^n(t)$ for $i = 0, 1, 2, \ldots, L-n$ be the associated $n$th order B-spline basis functions (see (Farin, 1996)). The objective function becomes

$$\sum_{i=1}^{m}\left[\left(u_i - \sum_{j=0}^{L-n} N_j^n(t_i)\,x_j\right)^{2} + \left(v_i - \sum_{j=0}^{L-n} N_j^n(t_i)\,y_j\right)^{2}\right]. \tag{11}$$

Another matrix function will simplify the notation.

Definition 2. Given a vector $t \in \mathbb{R}^m$, a positive integer $n$, and a non-decreasing knot sequence $k_0, k_1, \ldots, k_L$, where $L \ge 2n-1$, we define the B-spline matrix, which we denote by $N(t)$, to be the real $m \times (L-n+1)$ matrix whose $i,j$ element is $N_j^n(t_i)$. In particular,

$$N(t) = \begin{bmatrix} N_0^n(t_1) & N_1^n(t_1) & \cdots & N_{L-n}^n(t_1) \\ N_0^n(t_2) & N_1^n(t_2) & \cdots & N_{L-n}^n(t_2) \\ \vdots & & & \vdots \\ N_0^n(t_m) & N_1^n(t_m) & \cdots & N_{L-n}^n(t_m) \end{bmatrix}.$$

The remainder of the derivation is the same as that for the Bézier curve segment (indeed the Bézier curve segment is a special case). One need only replace all occurrences of the Bernstein matrix with the B-spline matrix; a sketch of this substitution appears after the references. The greater generality of the B-spline curve is quite useful in fitting, as one may use multi-segment curves. For our experiments we have used the same starting guesses as were used with the Bézier curve algorithm, except that the values are affinely mapped to the domain of the given B-spline curve. The extremal data points are tied to the endpoints of the curve for the same reasons as before. We note that if a fitting is not acceptable it is reasonable to simply modify the knot sequence and try again. This brings up the possibility of using a warm restart, that is, using the nodal values from the previous fit as starting values for the new fit. Experience shows mixed results when doing this (see the example in Fig. 14 for an instance of this approach). Although the warm restart does sometimes lead to faster convergence, other times it actually leads to slower convergence.

References

Farin, G., 1996. Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide, 4th ed. Academic Press, San Diego.

Foley, T., Nielson, G., 1989. A knot selection method for parametric splines. In: Lyche, T., Schumaker, L. (Eds.), Mathematical Methods in Computer Aided Geometric Design. Academic Press, New York, pp. 261–271.

Golub, G., Pereyra, V., 1973. The differentiation of pseudo-inverses and nonlinear least-squares problems whose variables separate. SIAM J. Numer. Anal. 10, 413–432.

Golub, G., Van Loan, C., 1996. Matrix Computations, 3rd ed. The Johns Hopkins University Press, Baltimore.

Hanson, R., Lawson, C., 1969. Extensions and applications of the Householder algorithm for solving linear least squares problems. Math. Comput. 23, 787–812.

Marin, S., Smith, P., 1994. Parametric approximation of data using ODR splines. Computer Aided Geometric Design 11, 247–267.

Nielson, G., 1987. Coordinate free scattered data interpolation. In: Schumaker, L., Chui, C., Utreras, F. (Eds.), Topics in Multivariate Approximation. Academic Press, New York, pp. 175–184.

Rao, C., Mitra, S., 1971. Generalized Inverse of Matrices and its Applications. John Wiley, New York.
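As a final sketch, the B-spline matrix $N(t)$ of Definition 2 can be assembled from a library basis evaluator rather than the Bernstein broadcast used earlier. The version below leans on `scipy.interpolate.BSpline.design_matrix` (available in SciPy 1.8 and later); note that SciPy indexes splines by degree, one less than the order convention used in this paper, and it expects the nodes to lie in the base interval of the knot sequence:

```python
from scipy.interpolate import BSpline

def bspline_matrix(t, knots, order):
    """B-spline matrix of Definition 2: element (i, j) is N_j^n(t_i)."""
    t = np.asarray(t, dtype=float)
    knots = np.asarray(knots, dtype=float)
    # SciPy's degree k = order - 1; the matrix has L - n + 1 columns.
    return BSpline.design_matrix(t, knots, order - 1).toarray()

# Example: the third order (quadratic) basis on the Fig. 11 knot sequence.
N = bspline_matrix(np.linspace(0.0, 2.0, 23), [0, 0, 0, 1, 2, 2, 2], order=3)
print(N.shape)  # (23, 4): m x (L - n + 1) with L = 6, n = 3
```

Swapping `bspline_matrix` for `bernstein_matrix` (and supplying the knot sequence) in the earlier sketches is, per the text above, all the Gauss–Newton machinery requires.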

