

IEEE ROBOTICS AND AUTOMATION LETTERS. PREPRINT VERSION. ACCEPTED APRIL, 2018

On the Comparison of Gauge Freedom Handling in Optimization-based Visual-Inertial State Estimation

Zichao Zhang, Guillermo Gallego, Davide Scaramuzza

Abstract—It is well known that visual-inertial state estimation is possible up to a four degrees-of-freedom (DoF) transformation (rotation around gravity and translation), and the extra DoFs ("gauge freedom") have to be handled properly. While different approaches for handling the gauge freedom have been used in practice, no previous study has been carried out to systematically analyze their differences. In this paper, we present the first comparative analysis of different methods for handling the gauge freedom in optimization-based visual-inertial state estimation. We experimentally compare three commonly used approaches: fixing the unobservable states to some given values, setting a prior on such states, or letting the states evolve freely during optimization. Specifically, we show that (i) the accuracy and computational time of the three methods are similar, with the free gauge approach being slightly faster; (ii) the covariance estimation from the free gauge approach appears dramatically different, but is actually tightly related to the other approaches. Our findings are validated both in simulation and on real-world datasets and can be useful for designing optimization-based visual-inertial state estimation algorithms.

Index Terms—Sensor Fusion, SLAM, Optimization and Optimal Control

Manuscript received: February 24, 2018; revised April 16, 2018; accepted April 16, 2018. This paper was recommended for publication by Editor Francois Chaumette upon evaluation of the Associate Editor and Reviewers' comments. This work was supported by the DARPA FLA program, the Swiss National Center of Competence Research Robotics, through the Swiss National Science Foundation, and by the SNSF-ERC starting grant.

The authors are with the Robotics and Perception Group, Dept. of Informatics, University of Zurich, and Dept. of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland—http://rpg.ifi.uzh.ch. Digital Object Identifier (DOI): see top of this page.

Fig. 1: Different pose uncertainties of the keyframes on the Machine Hall sequence of the EuRoC MAV Dataset [15] (MAV moving toward the negative x direction). (a) Free gauge approach; (b) gauge fixation approach. The left plot shows the uncertainties from the free gauge approach, where no reference frame is selected. On the right we set the reference frame to be the first frame, and, consequently, the uncertainties grow as the VI system moves. For visualization purposes, the uncertainties have been enlarged. We can clearly identify the difference in the parameter uncertainties from free gauge and gauge fixation approaches. However, by using the covariance transformation in Section VI-B, we show that the free gauge covariance can be transformed to satisfy the gauge fixation condition. The transformed uncertainties agree well with the gauge fixation ones.

I. INTRODUCTION

VISUAL-INERTIAL (VI) sensor fusion is an active research field in robotics. Cameras and inertial sensors are complementary [1], and a combination of both provides reliable and accurate state estimation. While the majority of the research on VI fusion focuses on filter-based methods [2], [3], [4], nonlinear optimization has become increasingly popular within the last few years. Compared with filter-based methods, nonlinear optimization based methods suffer less from the accumulation of linearization errors. Their main drawback, high computational cost, has been mitigated by the advance of both hardware and theory [5], [6]. Recent work [5], [7], [8], [9] has shown impressive real-time VI state estimation results in challenging environments using nonlinear optimization.

Although these works share the same underlying principle, i.e., solving the state estimation as a nonlinear least squares optimization problem, they use different methods to handle the unobservable DoF in VI systems. It is well known that for a VI system, global position and yaw are not observable [3], [10], which in this paper we call gauge freedom, following the convention from the field of bundle adjustment [11]. Given this gauge freedom, a natural way to get a unique solution is to fix the corresponding states (i.e., parameters) in the optimization [12]. Another possibility is to set a prior on the unobservable states, and the prior essentially acts as a virtual measurement in the optimization [5], [8], [13], [7]. Finally, one may instead allow the optimization algorithm to change the unobservable states freely during the iterations. While these three methods all prove to work in the existing literature, there is no comparison study of their differences in VI state estimation: they are often presented as implementation details and therefore not well studied and understood. Moreover, although the similar problem for vision-only bundle adjustment has already been studied (e.g., [11], [14], with 7 unobservable DoFs in the monocular case), to the best of our knowledge, such a study has not been done for VI systems (which have 4 unobservable DoFs).

In this work, we present the first comparative analysis of the different approaches for handling the gauge freedom in optimization-based visual-inertial state estimation. We compare these approaches, namely the gauge fixation approach, the gauge prior approach and the free gauge approach, on simulated and real-world data in terms of their accuracy, computational cost and estimated covariance (which is of

interest for, e.g., active SLAM [16]). While all these methods have similar performance in terms of estimation error, the free gauge approach is slightly faster, due to the fewer iterations required for convergence. We also find that, as mentioned by [7], in the free gauge approach, the resulting covariance from the optimization is not associated to any particular reference frame (as opposed to the one from the gauge fixation approach), which makes it difficult to interpret the uncertainties in a meaningful way. However, in this work we further show that by applying a covariance transformation, the free gauge covariance is actually closely related to the other approaches (see Fig. 1).

The rest of the paper is organized as follows. In Section II, we introduce the optimization-based VI state estimation problem and its non-unique solution. In Section III we present different approaches for handling gauge freedom. Then we describe the simulation setup for our comparison study in Section IV. The detailed comparison in terms of accuracy/timing and covariance is presented in Sections V and VI, respectively. Finally, we show experimental results on real-world datasets in Section VII.

II. PROBLEM FORMULATION AND INDETERMINACIES

The problem of visual-inertial state estimation consists of inferring the motion of a combined camera-inertial (IMU) sensor and the locations of the 3D landmarks seen by the camera as the sensor moves through the scene. By collecting the equations of the visual measurements (image points) and the inertial measurements (accelerometer and gyroscope), the problem can be written as a non-linear least squares (NLLS) optimization one, where the goal is to minimize the objective function (e.g., assuming Gaussian errors)

    J(θ) = ‖r_V(θ)‖²_{Σ_V} + ‖r_I(θ)‖²_{Σ_I},    (1)

where the first term collects the visual residuals, the second the inertial ones, and ‖r‖²_Σ ≐ r^⊤ Σ^{−1} r is the squared Mahalanobis norm of the residual vector r, weighted using the covariance matrix Σ of the measurements. The cost (1) can be used in full smoothing [5] or fixed-lag smoothing [7] approaches.

The visual term in (1) consists of the reprojection error between the measured image points x_ij and the predicted ones x̂_ij by a metric reconstruction. Assuming a pinhole camera model,

    x̂_ij(θ) = K_i (R_i^⊤ | −R_i^⊤ p_i) (X_j^⊤, 1)^⊤,

where (R_i, p_i) are the extrinsic parameters of the i-th camera (i = 0, …, N−1) and X_j are the 3D Euclidean coordinates of the j-th landmark point (j = 0, …, K−1). We assume that the intrinsic calibrations K_i are noise-free. The inertial term in (1) consists of the error between the inertial measurements and the predicted ones by a model of the trajectory of the IMU. For example, [17] considers the error in the raw acceleration and angular velocity measurements, whereas [5] considers errors in equivalent, lower rate measurements (inertial preintegration terms at the rate of the visual data). In this work, we consider the latter formulation, although most of the results do not depend on the choice of formulation.

The parameters of the problem (also known as state),

    θ ≐ {p_i, R_i, v_i, X_j},    (2)

comprise the camera motion parameters¹ (extrinsics and linear velocity) and the 3D scene (landmarks).

The accelerometer and gyroscope biases are usually expressed in the IMU frame and thus not affected by a fixation of the coordinate frame. Therefore, we exclude the biases from the state and assume that the IMU measurements are already corrected. A full description of the inertial and visual measurement models is out of the scope of this work, and we refer the reader to [5] for details.

A. Solution Ambiguities and Geometrical Equivalence

When addressing the VI state estimation problem, it is essential to note that the objective function (1) is invariant to certain transformations of the parameters θ′ = g(θ), i.e.,

    J(θ) = J(g(θ)).    (3)

Specifically, g, defined by homogeneous matrices of the form

    g ≐ [R_z  t; 0  1],    (4)

is a 4-DoF transformation consisting of an arbitrary translation t ∈ ℝ³ and a rotation R_z = Exp(α e_z) by an arbitrary angle (yaw) α ∈ (−π, π) around the gravity axis e_z = (0, 0, 1)^⊤. For notation simplicity, we define the mapping Exp(θ) ≐ exp(θ^∧), where exp is the exponential map of the Special Orthogonal group SO(3), and θ^∧ is the skew-symmetric matrix associated with the cross product, i.e., a^∧ b = a × b, ∀b. This is the well-known Rodrigues formula.

Applying a transformation (4) to the reconstruction (2) gives another reconstruction g(θ) = θ′ = {p′_i, R′_i, v′_i, X′_j},

    p′_i = R_z p_i + t,   R′_i = R_z R_i,   v′_i = R_z v_i,   X′_j = R_z X_j + t.    (5)

Both parameters θ and θ′ represent the same underlying scene geometry (camera trajectory and 3D points), i.e., they are geometrically equivalent. They generate the same predicted measurements and, therefore, the same error (1).

As a consequence of the invariance (3), the parameter space M can be partitioned into disjoint sets of geometrically equivalent reconstructions. Each of these sets is called an orbit [11] or a leaf [14]. Formally, the orbit associated to θ is the 4D manifold

    M_θ ≐ {g(θ) | g ∈ G},    (6)

where G is the group of transformations of the form (4). Note that the objective function (1) is constant on each orbit.

The main consequence of the invariance (3) is that (1) does not have a unique minimizer because there are infinitely many reconstructions that achieve the same minimum error: all the reconstructions on the orbit (6) of minimal cost (see Fig. 2), differing only by 4-DoF transformations (4). Hence, the VI estimation problem has some indeterminacies or unobservable states: there are not enough equations to completely specify a unique solution.

¹For simplicity, we assume that the coordinate frames of the camera and the IMU coincide, e.g., by compensating the camera-IMU calibration [18].
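To make the invariance (3) tangible, here is a small numerical sketch (our illustration with made-up values, not code from the paper): it applies the 4-DoF transformation (5) to a toy camera pose and landmark and checks that the pinhole prediction x̂_ij, and hence the cost (1), is unchanged.

```python
import numpy as np

def exp_so3(phi):
    """Rodrigues formula: Exp(phi) = exp(phi^) for a rotation vector phi."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    A = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])          # a^, the skew-symmetric matrix
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def predict(K, R, p, X):
    """Pinhole prediction K (R^T | -R^T p)(X^T, 1)^T, dehomogenized."""
    x = K @ (R.T @ (X - p))
    return x[:2] / x[2]

K = np.eye(3)                                   # intrinsics (assumed noise-free)
R = exp_so3(np.array([0.1, -0.2, 0.3]))         # extrinsic rotation R_i
p = np.array([0.5, -1.0, 0.2])                  # extrinsic position p_i
X = np.array([2.0, 1.0, 5.0])                   # landmark X_j

# A 4-DoF gauge transformation g: yaw rotation R_z = Exp(alpha e_z), translation t
alpha = 0.7
t = np.array([3.0, -2.0, 1.5])
Rz = exp_so3(np.array([0.0, 0.0, alpha]))

# Transformed reconstruction, Eq. (5) (a velocity v_i would map to R_z v_i)
p2, R2, X2 = Rz @ p + t, Rz @ R, Rz @ X + t

x_hat = predict(K, R, p, X)
x_hat_t = predict(K, R2, p2, X2)
print(np.allclose(x_hat, x_hat_t))              # True
```

Because R2^⊤(X2 − p2) = R^⊤ Rz^⊤ Rz (X − p) = R^⊤(X − p), every residual in (1) is unchanged by g, which is exactly why the Hessian acquires a rank deficiency of four.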

TABLE I: Three gauge handling approaches considered (n = 9N + 3K is the number of parameters in (2)).

Approach     | Size of parameter vec. | Hessian (normal eqs.)
Fixed gauge  | n − 4                  | inverse, (n−4) × (n−4)
Gauge prior  | n                      | inverse, n × n
Free gauge   | n                      | pseudoinverse, n × n

B. Additional Constraints: Specifying a Gauge

The process of completing (1) with additional constraints

    c(θ) = 0    (7)

that yield a unique solution is called specifying a gauge C [14], [11]. In other words, equations (7) select a representative of the orbit (6), i.e., remove the indeterminacy within the equivalence class. In VI, this is achieved by specifying a reference coordinate frame for the 3D reconstruction. For example, the standard gauge in camera-motion estimation consists of selecting the reconstruction that has the reference coordinate frame located at the first (i = 0) camera position and with zero yaw. These constraints specify a unique transformation (4), and therefore, a unique solution θ_C = C ∩ M_θ among all equivalent ones. By construction, gauges C are transversal to orbits M_θ, so that C ∩ M_θ ≠ ∅ [14].

III. OPTIMIZATION AND GAUGE HANDLING

From an optimization point of view, the minimization of the NLLS function (1) using the Gauss-Newton algorithm presents some difficulties. Even if we use a minimal parametrization for all elements of the state (parameter vector) θ, the Hessian matrix of (1), which drives the parameter updates, is singular due to the unobservable DoFs. More specifically, it has a rank deficiency of four, corresponding to the 4 DoFs in (4).

There are several ways to mitigate this issue, as summarized in Table I. One of them is to optimize in a smaller parameter space where there are no unobservable states, and therefore the Hessian is invertible. This essentially enforces hard constraints on the solution (gauge fixation approach). Another one is to augment the objective function with an additional penalty (which yields an invertible Hessian) to favor that the solution satisfies certain constraints, in a soft manner (gauge prior approach). Lastly, one can use the pseudoinverse of the singular Hessian to implicitly provide additional constraints (parameter updates with smallest norm) for a unique solution (free gauge approach). The first two strategies require VI problem-specific knowledge (which state to constrain), whereas the last one is generic.

Fig. 2: Illustration of the optimization paths taken by different gauge handling approaches. The gauge fixation approach always moves on the gauge C, thus satisfying the gauge constraints. The free gauge approach uses the pseudoinverse to select parameter steps of minimal size for a given cost decrease, and therefore, moves perpendicular to the isocontours of the cost (1). The gauge prior approach follows a path in between the gauge fixation and free gauge approaches. It minimizes a cost augmented by (11), so it may not exactly end up on the orbit of minimum visual-inertial cost (1).

A. Rotation Parametrization for Gauge Fixation or Prior

One problem with the gauge fixation and gauge prior approaches is that fixing the 1-DoF yaw rotation angle of a camera pose is not straightforward, as we discuss next.

The standard method to update orientation variables (i.e., rotations) during the iterations of the NLLS solver (Gauss-Newton or Levenberg-Marquardt, LM) of (1) is to use local coordinates, where, at the q-th iteration, the update is

    R^{q+1} = Exp(δφ^q) R^q.    (8)

Setting the z component of δφ^q to 0 allows fixating the yaw with respect to R^q. However, concatenating several such updates (Q iterations), R^Q = (∏_{q=0}^{Q−1} Exp(δφ^q)) R^0, does not fixate the yaw with respect to the initial rotation R^0, and therefore, this parametrization cannot be used to fix the yaw value of R^Q to that of the initial value R^0.

Although yaw fixation or prior can be applied to any camera pose, it is a common practice to use the first camera. Thus, for the rotations of the other camera poses, we use the standard iterative update (8), and, for the first camera rotation, R_0, we use a more convenient parametrization. Instead of directly using R_0, we use a left-multiplicative increment:

    R_0 = Exp(Δφ_0) R_0^0,    (9)

where the rotation vector Δφ_0 is initialized to zero and updated. Indeed, the rotation vector formulation has a singularity at ‖Δφ_0‖ = π, but it is applicable when the initial rotation is close to the optimal value (‖Δφ_0‖ ≪ π), which is often the case in real systems (e.g., initial values are provided by a front-end, such as [5]).

B. Different Approaches for Handling Gauge Freedom

Based on the previous discussion, gauge fixation consists of fixing the position and yaw angle of the first camera pose throughout the optimization. This is achieved by setting

    p_0 = p_0^0,   Δφ_0z ≐ e_z^⊤ Δφ_0 = 0,    (10)

where p_0^0 is the initial position of the first camera. Fixing these values of the parameter vector is equivalent to setting the corresponding columns of the Jacobian of the residual vector in (1) to zero, namely J_{p_0} = 0, J_{Δφ_0z} = 0.

The gauge prior approach adds to (1) a penalty

    ‖r_0^P‖²_{Σ_0^P},   where r_0^P(θ) ≐ (p_0 − p_0^0, Δφ_0z).    (11)

The choice of Σ_0^P in (11) will be discussed in Section V.
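As a sketch of the constraints (10) and the prior residual (11) (helper names are ours, not from the paper's implementation): the yaw of the first camera is fixed by zeroing the z component of the left-multiplicative increment (9), which plays the role of zeroing the corresponding Jacobian columns, while the gauge prior forms a 4-dimensional virtual residual.

```python
import numpy as np

def exp_so3(phi):
    """Rodrigues formula Exp(phi) = exp(phi^)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    A = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def gauge_fixed_update(p0_init, R0_init, dp0, dphi0):
    """Gauge fixation (10): p0 stays at p0^0 (the position update dp0 is
    discarded) and the yaw (z) component of the increment (9) is zeroed;
    roll and pitch of R0 are still free to change."""
    dphi0 = np.array([dphi0[0], dphi0[1], 0.0])   # Delta phi_0z = 0
    return p0_init, exp_so3(dphi0) @ R0_init

def gauge_prior_residual(p0, p0_init, dphi0):
    """Gauge prior residual (11): r_0^P = (p0 - p0^0, Delta phi_0z),
    a virtual measurement weighted by w^P = 1/sigma_0^2 (cf. Section V)."""
    return np.append(p0 - p0_init, dphi0[2])

# Toy usage with hypothetical solver updates
p0_init, R0_init = np.zeros(3), np.eye(3)
dp0 = np.array([0.10, 0.00, 0.00])
dphi0 = np.array([0.01, 0.02, 0.50])
p0_new, R0_new = gauge_fixed_update(p0_init, R0_init, dp0, dphi0)

r = gauge_prior_residual(p0_init + dp0, p0_init, dphi0)
w_prior = 1e5                                     # a proper weight, cf. Section V
cost_prior = w_prior * (r @ r)                    # penalty added to the cost (1)
```

Note that only the first pose is treated this way; all other rotations keep the standard update (8).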

Finally, the free gauge approach lets the parameter vector evolve freely during the optimization. To deal with the singular Hessian, we may use the pseudoinverse or add some damping (Levenberg-Marquardt algorithm) so that the NLLS problem has a well-defined parameter update.

A comparison of the paths followed in parameter space during the optimization iterations of the three approaches is illustrated in Fig. 2.

Next, we show an experimental comparison of the three gauge handling approaches.

IV. COMPARISON STUDY: SIMULATION SETUP

A. Data Generation

We use three 6-DoF trajectories for our experiments, namely a sine-like one, an arc-like one and a rectangular one. We denote them as sine, arc and rec, respectively. We consider two landmark configurations: plane, where the 3D points are roughly distributed on several planes, and random, where the 3D points are generated randomly along the trajectory. Fig. 3 shows some simulation setup examples.

To generate the inertial measurements, we fit the trajectories using B-splines and then sample the accelerations and angular velocities. The sampled values are corrupted with biases and additive Gaussian noise, and then are used as inertial measurements. For the visual measurements, we project the 3D points through a pinhole camera model to get the corresponding image coordinates and then corrupt them with additive Gaussian noise.

Fig. 3: Sample simulation scenarios. The left one shows a sine trajectory with randomly generated 3D points, and the right one shows an arc trajectory with the 3D points distributed on two planes.

B. Optimization Solver

To solve the VI state estimation problem (1), we use the LM algorithm in the Ceres solver [19]. We implement the different approaches for handling the gauge freedom described in Section III. For each trajectory, we sample several keyframes along the trajectory. Our parameter space contains the states (i.e., position, rotation and velocity) at these keyframes and the positions of the 3D points. The initial states are disturbed randomly from the groundtruth.

C. Evaluation

1) Accuracy: To evaluate the accuracy of an estimated state, we first calculate a transformation to align the estimation and the groundtruth. The transformation is calculated from the first poses of both trajectories. Note that the transformation has four DoFs, i.e., a translation and a rotation around the gravity vector. After alignment, we calculate the root mean squared error (RMSE) of all the keyframes. Specifically, we use the Euclidean distance for position and velocity errors. For rotation estimation, we first calculate the relative rotation (in angle-axis representation) between the aligned rotation and the groundtruth, and then use the angle of the relative rotation as the rotation error.

2) Computational Efficiency: To evaluate the computational cost, we record the convergence time and number of iterations of the solver. We run each configuration (i.e., the combination of trajectory and points) for 50 trials and calculate the average time and accuracy metrics.

3) Covariance: We also compare the covariances produced by the optimization algorithm, which are of interest for applications such as active SLAM [20]. The covariance matrix of the estimated parameters is given by the inverse of the Hessian. For the free gauge approach, the Moore-Penrose pseudoinverse is used, since the Hessian is singular [11].

V. COMPARISON STUDY: TIMING AND ACCURACY

A. Gauge Prior: Choosing the Appropriate Prior Weight

Before comparing the three approaches from Section III, we need to choose the prior covariance Σ_0^P in the gauge prior approach. A common choice is Σ_0^P = σ_0² I, for which the prior (11) becomes ‖r_0^P‖²_{Σ_0^P} = w^P ‖r_0^P‖², with w^P = 1/σ_0². We tested a wide range of prior weights w^P on different configurations and the results were similar. Therefore, we will look at one configuration in detail. Note that w^P = 0 is essentially the free gauge approach, whereas w^P → ∞ is the gauge fixation approach.

1) Accuracy: Fig. 4 shows how the RMSE changes with the prior weight. It can be seen that the estimation errors of different prior weights are very similar (note the numbers on the vertical axis). While there is no clear optimal prior weight for different configurations of trajectories and 3D points, the RMSE stabilizes at one value after the weight increases above a certain threshold (e.g., 500 in Fig. 4).

2) Computational Cost: Fig. 5 illustrates the computational cost for different prior weights. Similarly to Fig. 4, the number of iterations and the convergence time stabilize when the prior weight is above a certain value. Interestingly, there is a peak in the computational time when the prior weight increases from zero to the threshold where it stabilizes. The same behavior is observed for all configurations. To investigate this behavior in detail, we plot in Fig. 6 the prior error with respect to the average reprojection error at each iteration for several prior weight values. The position prior error is the Euclidean distance between the current estimate of the first position and its initial value, the yaw prior error is the z component of the relative rotation of the current estimate of the first rotation with respect to its initial value, and the average reprojection error is the total visual residual averaged by the number of observed 3D points in all keyframes. For very large prior weights (10^8 in the plot), the algorithm decreases the reprojection error while keeping the prior error almost equal to

zero. In contrast, for smaller prior weights (e.g., 50–500), the optimization algorithm reduces the reprojection error during the first two iterations at the expense of increasing the prior error. Then the optimization algorithm spends many iterations fine-tuning the prior error while keeping the reprojection error small (moving along the orbit), hence the computational time increases.

Fig. 4: RMSE in position, orientation and velocity for different prior weights.

Fig. 5: Number of iterations and computing time for different prior weights.

3) Discussion: While the accuracy of the solution does not significantly change for different prior weights (Fig. 4), a proper choice of the prior weight is required in the gauge prior approach to keep the computational cost small (Fig. 5). Extremely large weights are discarded since they sometimes make the optimization unstable. We observe similar behavior for different configurations (trajectory and points combination). Therefore, in the rest of the section we use a proper prior weight (e.g., 10^5) for the gauge prior approach.

B. Accuracy and Computational Effort

We compare the performance of the three approaches on the six combinations of simulated trajectories (sine, arc and rec) and 3D points (plane and random). We optimize the objective function for differently perturbed initializations and observe that the results are similar. For the results presented in this section, we perturb the groundtruth positions by a random vector of 5 cm (with respect to a trajectory of 5 m), the orientations by a random rotation of 6 degrees, the velocities by a uniformly distributed variable in [−0.05, 0.05] m/s (with respect to a mean velocity of 2 m/s) and the 3D point positions by a uniform random variable in [−7.5, 7.5] cm.

Fig. 6: Prior error vs. average reprojection error for some representative prior weights. Each dot in the plot stands for an iteration with the corresponding prior weight. The optimization starts from the bottom-right corner, where the reprojection errors are the same and the prior errors are zero. As the optimization proceeds, the reprojection error decreases and there are different behaviors for different prior weights regarding the prior error. Note that the free gauge case behaves as the zero prior weight.

The average RMSEs of 50 trials are listed in Table II. We omit the results for the gauge prior approach because they are identical to the ones from the gauge fixation approach up to around 8 digits after the decimal. It can be seen that there are only small differences between the free gauge approach and the gauge fixation approach, and neither of them has a better accuracy in all simulated configurations.

The convergence time and number of iterations are plotted in Fig. 7. The computational cost of the gauge prior approach and the gauge fixation approach are almost identical. The free gauge approach is slightly faster than the other two. Specifically, except for the sine trajectory with random 3D points, the free gauge approach takes fewer iterations and less time to converge. Note that the gauge fixation approach takes the least time per iteration due to the smaller number of variables in the optimization (see Table I).

C. Discussion

Based on the results in this section, we conclude that:
• The three approaches have almost the same accuracy.
• In the gauge prior approach, one needs to select the proper prior weight to avoid increasing the computational cost. With a proper weight, the gauge prior approach has almost the same performance (accuracy and computational cost) as the gauge fixation approach.
• The free gauge approach is slightly faster than the others, because it takes fewer iterations to converge (cf. [14]).

While it may be possible to fix the unobservable DoFs (recall that we use a tailored parametrization (9) to fix the yaw DoF), the free gauge approach has the additional advantage that it is generic, i.e., not specific to VI, and therefore it does not require any special treatment of the rotation parametrization.

VI. COMPARISON STUDY: COVARIANCE

A. Covariance Comparison

Given a high prior weight, as discussed in the previous section, the covariance matrix from the gauge prior approach
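The behavior summarized in Table I and Fig. 2 can be reproduced on a toy rank-deficient linear least-squares problem min_x ‖Ax − b‖² (a sketch of ours, not the VI cost itself): gauge fixation drops the columns of the frozen parameters, the gauge prior appends weighted penalty rows, and the free gauge takes the minimum-norm pseudoinverse solution. All three attain the same data cost and differ only in which point of the solution set ("orbit") they return.

```python
import numpy as np

rng = np.random.default_rng(0)
# 6 unknowns but rank 4: a 2-dimensional unobservable subspace, standing in
# for the 4-DoF gauge freedom of the VI problem
A = rng.standard_normal((10, 4)) @ rng.standard_normal((4, 6))
b = rng.standard_normal(10)

# 1) Gauge fixation: freeze the last two unknowns (optimize in a smaller space)
x_fix = np.zeros(6)
x_fix[:4] = np.linalg.lstsq(A[:, :4], b, rcond=None)[0]

# 2) Gauge prior: penalize the last two unknowns with a large weight w^P
w = 1e6
A_prior = np.vstack([A, np.sqrt(w) * np.eye(6)[4:]])
b_prior = np.concatenate([b, np.zeros(2)])
x_prior = np.linalg.lstsq(A_prior, b_prior, rcond=None)[0]

# 3) Free gauge: minimum-norm solution via the Moore-Penrose pseudoinverse
x_free = np.linalg.pinv(A) @ b

# All three reach the same data cost; the free gauge solution has minimal norm
costs = [np.linalg.norm(A @ x - b) for x in (x_fix, x_prior, x_free)]
```

In this linear toy, the prior solution coincides with the fixed one up to numerical precision (echoing the paper's observation that the two agree to about 8 decimal digits), while the free gauge picks the smallest-norm representative on the solution set.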

is similar to the gauge fixation approach and therefore omitted here. We only compare the covariances of the free gauge approach and the gauge fixation approach in this section.

TABLE II: RMSE on different trajectories and 3D points configurations (sine, arc, rec × plane, random; gauge fixation vs. free gauge, smallest errors highlighted). Position, rotation and velocity RMSE are measured in m, deg and m/s, respectively.

Fig. 7: Number of iterations, total convergence time and time per iteration for all configurations. The time per iteration is the ratio with respect to the gauge fixation approach (in blue), which takes the least time per iteration.

An example of the covariance matrices of the free gauge and gauge fixation approaches is visualized in Fig. 9. If we look at the top-left block of the covariance matrix, which corresponds to the position components of the states: (i) for the gauge fixation approach (Fig. 9c), the uncertainty of the first position is zero due to the fixation, and the position uncertainty increases afterwards (cf. Fig. 1b); (ii) in contrast, the uncertainty in the free gauge case (Fig. 9a) is "distributed" over all the positions (cf. Fig. 1a). This is due to the fact that the free gauge approach is not fixed to any reference frame. Therefore, the uncertainties directly read from the free gauge covariance matrix are not interpretable in a geometrically meaningful way. However, this does not mean the covariance estimation from the free gauge approach is useless: it can be transformed to a geometrically-meaningful form by enforcing a gauge fixation condition, as we show next.

B. Covariance Transformation

Covariances are averages of squared perturbations of the estimated parameter. A perturbation δθ of a reconstruction θ can be decomposed into two components: one parallel to the orbit M_θ (6) and one parallel to the gauge C (7). The component of δθ parallel to the orbit M_θ is not geometrically meaningful, since the perturbed reconstruction is also in the orbit (thus, arbitrarily large perturbations produce no change in the cost (1)).
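The excerpt ends before the paper's transformation formula, but the idea can be sketched in a linear-Gaussian analogue (our construction, with an assumed rank deficiency of two): the free gauge covariance is the pseudoinverse of the singular Hessian, and an oblique projection that removes the perturbation component along the orbit (null space) turns it into the gauge-fixed covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy singular Hessian H = A^T A with a 2-dimensional null space (the "orbit"
# directions), standing in for the 4-DoF gauge freedom
A = rng.standard_normal((12, 4)) @ rng.standard_normal((4, 6))
H = A.T @ A

# Null-space ("orbit") basis N: singular vectors of the two zero singular values
_, _, Vt = np.linalg.svd(H)
N = Vt[-2:].T                                       # 6 x 2

# Free gauge covariance: Moore-Penrose pseudoinverse of the singular Hessian
Sigma_free = np.linalg.pinv(H)

# Gauge fixation: freeze the last two coordinates (E selects them, Z the rest)
E = np.eye(6)[:, 4:]
Z = np.eye(6)[:, :4]
Sigma_fix = Z @ np.linalg.inv(Z.T @ H @ Z) @ Z.T    # zero rows/cols on fixed states

# Transformation: oblique projection onto the gauge along the orbit directions.
# Q removes the component of a perturbation parallel to the orbit (Q @ N = 0)
# while enforcing the gauge condition E^T (Q dtheta) = 0.
Q = np.eye(6) - N @ np.linalg.inv(E.T @ N) @ E.T
Sigma_transformed = Q @ Sigma_free @ Q.T

print(np.allclose(Sigma_transformed, Sigma_fix))    # True in this linear sketch
```

This mirrors Fig. 1: the "distributed" free gauge uncertainties, once projected onto a chosen gauge, agree with the gauge fixation ones.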
