Sparse Gaussian Processes on Matrix Lie Groups: A Unified Framework for Optimizing Continuous-Time Trajectories


Sparse Gaussian Processes on Matrix Lie Groups: A Unified Framework for Optimizing Continuous-Time Trajectories

Jing Dong, Mustafa Mukadam, Byron Boots, and Frank Dellaert

J. Dong, M. Mukadam, B. Boots, and F. Dellaert are with the Institute for Robotics & Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, USA. This work was supported by National Institute of Food and Agriculture, USDA award 2014-67021-22556 and NSF NRI award 1637908.

Abstract— Continuous-time trajectories are useful for reasoning about robot motion in a wide range of tasks. Sparse Gaussian processes (GPs) can be used as a non-parametric representation for trajectory distributions that enables fast trajectory optimization by sparse GP regression. However, most previous approaches that utilize sparse GPs for trajectory optimization are limited by the fact that the robot state is represented in vector space. In this paper, we first extend previous works to consider the state on general matrix Lie groups by applying a constant-velocity prior and defining locally linear GPs. Next, we discuss how sparse GPs on Lie groups provide a unified continuous-time framework for trajectory optimization that solves a number of robotics problems, including state estimation and motion planning. Finally, we demonstrate and evaluate our approach on several different estimation and motion planning tasks with both synthetic and real-world experiments.

Fig. 1: Some experiments on which we evaluated the proposed GPs on Lie groups, from top to bottom and left to right: 2D planar localization using the CMU range-only dataset [13]; 3D monocular visual-inertial SLAM; full-body planning of a PR2 robot in simulation; full-body planning of a Vector robot in the real world.

I. INTRODUCTION

Consider the problem of simultaneous localization and mapping (SLAM) [1], [2]. Solving this problem is a fundamental capability for autonomous mobile robots that must explore and plan in previously unseen environments. While SLAM is a well-studied area of robotics, the majority of existing SLAM algorithms rely on discrete-time representations of robot trajectories. Although discrete-time representations are sufficient for many tasks, they are often difficult to use in several important scenarios, including: (1) when sensors measure the environment continuously, for example spinning LIDAR or rolling-shutter cameras that produce measurements distorted by the robot's self-motion; and (2) when sensor measurements arrive asynchronously.

While heuristics are often employed to overcome these challenges, a theoretically sound framework for handling continuous motion and/or measurements can result in better accuracy. Unlike trajectory representations that are densely parameterized with discrete states at frequent, regular time intervals, continuous-time representations are often sparsely parameterized but contain mechanisms that allow the trajectory to be queried to recover the robot state at any time of interest. Several popular continuous-time trajectory representations include linear interpolation [3], [4], [5], splines [6], [7], [9], [10], [11], and hierarchical wavelets [12].

This paper focuses on an alternative probabilistic, non-parametric representation for trajectories based on Gaussian processes (GPs). Tong et al. [14], [15] showed that simultaneous trajectory estimation and mapping (STEAM), the continuous-time extension of SLAM, can be reduced to GP regression. By placing various GP priors on robot trajectories, this approach can solve different types of trajectory estimation problems.
However, if standard kernels, such as the squared exponential kernel, are used, the method is prohibitively expensive, with time and space complexity that scales polynomially with the number of parameters.

Maintaining sparsity in SLAM problems has been well studied [2], [17], and it is the key to scalable optimization in many modern SLAM algorithms. In the context of continuous-time SLAM, Barfoot et al. [18] show that by applying a linear time-varying stochastic differential equation (LTV-SDE) prior on trajectories, the inverse kernel matrices used during optimization are exactly sparse, leading to efficient GP regression that can solve SLAM problems. This approach has been further extended to nonlinear SDEs [19] and incremental settings [20].

Other areas of robotics have recently benefited from these techniques as well. For example, recent work in motion planning [21], [22], [23] takes advantage of sparse and incremental GP regression to speed up trajectory optimization, leading to very efficient motion planning algorithms.

A major drawback of all of these GP-based approaches is that they only reason about trajectories whose system states evolve in a vector space, which may not be a valid assumption for many systems. For example, typical vector-valued representations for the rotation of a 3D rigid body either exhibit singularities (Euler angles) or impose extra nonlinear constraints (quaternions).

Trajectories in many robotics applications can be defined on more general Lie groups, rather than simple vector spaces. For example, the rotation of a 3D rigid body is the special orthogonal group SO(3), the transformation of a 3D rigid body is the special Euclidean group SE(3), and the state estimated from a monocular camera with scale drift information is the similarity group Sim(3) [24], [25].

Sparse GP regression for STEAM [18] has been extended to SE(3) by Anderson et al. [26], who select a constant body-frame velocity prior as the GP prior; this prior is physically motivated by inertia and is generated by local LTV-SDEs. In this paper, we further extend sparse GP regression to general Lie groups, enabling more applications of sparse GPs: rotation estimation on SO(3), monocular visual SLAM on Sim(3), and applications other than state estimation, like motion planning on Lie groups.

Along with our technical contributions, we provide, for the first time, the insight that Gaussian processes on Lie groups can be viewed as a unified tool for reasoning about continuous-time trajectories in various robotics applications, including state estimation and motion planning, since these applications share the same GP continuous-time trajectory representation, the same factor graph problem structure, and the same maximum a posteriori probabilistic inference framework.

Our specific contributions include:
- Extending sparse GPs [18], [26] to general matrix Lie groups.
- A new perspective that views sparse GP regression on Lie groups as a unified trajectory optimization tool that can be used to reason about, and sometimes simultaneously solve, a variety of robotics tasks.
- Extensive experimental evaluations on different types of real-world robotics problems, showcasing the effectiveness of reasoning about trajectories with sparse GPs on Lie groups.

II. PRELIMINARIES

We begin by formulating continuous-time trajectory estimation as GP regression, review how a sparse GP prior can be defined on vector spaces, and finally give a very brief review of Lie group fundamentals. For a full treatment, readers are encouraged to refer to [18] for GP regression as trajectory estimation and to [27] for Lie group theory.

A. Problem Definition

We consider the problem of continuous-time trajectory estimation, in which a continuous-time system state x(t) is estimated from observations [14]. The system model is described as

x(t) \sim \mathcal{GP}(\mu(t), \mathcal{K}(t, t')),    (1)
z_i = h_i(x(t_i)) + n_i,  n_i \sim \mathcal{N}(0, \Sigma_i),    (2)

where x(t) is represented by a GP with mean function \mu(t) and covariance function \mathcal{K}(t, t'). Measurements z_i at time t_i are obtained by the (generally nonlinear) discrete-time measurement function h_i in Eq. (2) and are assumed to be corrupted by zero-mean Gaussian noise with covariance \Sigma_i.

B. Maximum a Posteriori Estimation

GP regression is performed by maximum a posteriori (MAP) inference, where the most likely trajectory is the mode (and also the mean) of the posterior distribution, i.e. the GP in Eq. (1) conditioned on the measurements. The MAP estimate of the trajectory can be computed through Gaussian process Gauss-Newton (GPGN) [14]. To define the objective function, we assume that there are M observations and stack

x \doteq \begin{bmatrix} x(t_1) \\ \vdots \\ x(t_M) \end{bmatrix},  \mu \doteq \begin{bmatrix} \mu(t_1) \\ \vdots \\ \mu(t_M) \end{bmatrix},  \mathcal{K} \doteq [\mathcal{K}(t_i, t_j)]_{1 \le i,j \le M},
z \doteq \begin{bmatrix} z_1 \\ \vdots \\ z_M \end{bmatrix},  h(x) \doteq \begin{bmatrix} h_1(x_1) \\ \vdots \\ h_M(x_M) \end{bmatrix},  \Sigma \doteq \mathrm{diag}(\Sigma_1, \ldots, \Sigma_M).
The MAP estimation objective can then be written as

x^* = \arg\min_x \Big\{ \tfrac{1}{2} \| x - \mu \|^2_{\mathcal{K}} + \tfrac{1}{2} \| h(x) - z \|^2_{\Sigma} \Big\},    (3)

where \| \cdot \|_{\Sigma} is the Mahalanobis norm defined by \| x \|^2_{\Sigma} \doteq x^\top \Sigma^{-1} x. MAP estimation is thus formulated as a nonlinear least squares optimization problem.

We use a Gauss-Newton approach to solve this nonlinear least squares problem. By linearizing the measurement function h_i around a linearization point x_i, we obtain

h_i(x_i + \delta x_i) \approx h_i(x_i) + H_i \delta x_i,  H_i \doteq \frac{\partial h_i}{\partial x} \Big|_{x_i},    (4)

in which H_i is the Jacobian matrix of the measurement function (2) at the linearization point x_i. By defining H \doteq \mathrm{diag}(H_1, \ldots, H_M), we can generate a linearized least squares problem around the linearization point x:

\delta x^* = \arg\min_{\delta x} \Big\{ \tfrac{1}{2} \| x + \delta x - \mu \|^2_{\mathcal{K}} + \tfrac{1}{2} \| h(x) + H \delta x - z \|^2_{\Sigma} \Big\}.    (5)

The GPGN algorithm starts from some initial guess of x, and then at each iteration the optimal perturbation \delta x^* is found by solving the linear system

(\mathcal{K}^{-1} + H^\top \Sigma^{-1} H)\, \delta x^* = \mathcal{K}^{-1}(\mu - x) + H^\top \Sigma^{-1}(z - h),    (6)

and updating the solution x \leftarrow x + \delta x^* until convergence. The information matrix \mathcal{K}^{-1} in Eq. (6) encodes the GP prior information, and H^\top \Sigma^{-1} H represents the information from measurements. H^\top \Sigma^{-1} H is block-wise sparse in most SLAM problems [2], but \mathcal{K}^{-1} is not usually sparse for most commonly used kernels. Next, we define GP priors with sparse structure that can be exploited to efficiently solve the nonlinear least squares problem above.
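To make the GPGN update in Eqs. (4)–(6) concrete, here is a minimal NumPy sketch of the iteration on a toy problem with two scalar states; the prior, the measurement model h, and all numerical values are invented for illustration and are not taken from the authors' implementation.

```python
import numpy as np

# Toy setup: two scalar states stacked into x, a GP prior (mu, K), and
# nonlinear measurements z = h(x) + n with noise covariance Sigma.
mu = np.array([0.0, 1.0])                  # prior mean of the stacked states
K = np.array([[1.0, 0.5], [0.5, 1.0]])     # prior covariance (dense here, for illustration)
z = np.array([0.9, 1.8])                   # measurements
Sigma = np.diag([0.1, 0.1])                # measurement noise covariance

def h(x):                                  # toy nonlinear measurement model
    return np.array([x[0] + 0.1 * x[0]**2, x[1]])

def H_jacobian(x):                         # Jacobian of h at the linearization point
    return np.diag([1.0 + 0.2 * x[0], 1.0])

x = mu.copy()                              # initial guess
K_inv, Sigma_inv = np.linalg.inv(K), np.linalg.inv(Sigma)
for _ in range(10):
    H = H_jacobian(x)
    # Gauss-Newton normal equations, cf. Eq. (6):
    # (K^-1 + H^T Sigma^-1 H) dx = K^-1 (mu - x) + H^T Sigma^-1 (z - h(x))
    A = K_inv + H.T @ Sigma_inv @ H
    b = K_inv @ (mu - x) + H.T @ Sigma_inv @ (z - h(x))
    dx = np.linalg.solve(A, b)
    x = x + dx
    if np.linalg.norm(dx) < 1e-10:
        break

print("MAP estimate:", x)
```

In a real SLAM problem the prior information matrix K^-1 and H^T Sigma^-1 H are block-sparse, which is exactly the structure the sparse priors of the next subsection preserve.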

C. Sparse GP Priors on Vector Space

We now describe a class of exactly sparse GP priors on vector space for trajectory estimation, proposed in Barfoot et al. [18]. Here GP priors are considered on vector-valued system states x(t) \in \mathbb{R}^N generated by linear time-varying stochastic differential equations (LTV-SDEs)

\dot{x}(t) = A(t) x(t) + u(t) + F(t) w(t),    (7)

where u(t) is the known system control input, w(t) is white process noise, and both A(t) and F(t) are time-varying system matrices. The white process noise is represented by

w(t) \sim \mathcal{GP}(0, Q_C\, \delta(t - t')),    (8)

where Q_C is the power-spectral density matrix, which is a hyperparameter [19], and \delta(t - t') is the Dirac delta function. The mean and covariance of the GP generated by the LTV-SDE are

\mu(t) = \Phi(t, t_0)\, \mu_0 + \int_{t_0}^{t} \Phi(t, s)\, u(s)\, ds,    (9)
\mathcal{K}(t, t') = \Phi(t, t_0)\, \mathcal{K}_0\, \Phi(t', t_0)^\top + \int_{t_0}^{\min(t, t')} \Phi(t, s) F(s) Q_C F(s)^\top \Phi(t', s)^\top ds,    (10)

where \mu_0 is the mean of the first state, \mathcal{K}_0 is the covariance of the first state, and \Phi(t, s) is the state transition matrix. If the system is generated by the LTV-SDE in Eq. (7), the inverse covariance matrix \mathcal{K}^{-1} is block-tridiagonal [18].

A commonly used GP prior is the constant-velocity prior, which is derived from our understanding of the physical properties of robotic systems. In most robots, large acceleration implies extreme force, which may be harmful if directly applied to the system. Therefore, acceleration should be kept to a minimum, to the extent possible.

The constant-velocity GP prior is generated by an LTV-SDE with white noise on the acceleration and has been used in trajectory estimation [14], [18]:

\ddot{p}(t) = w(t),    (11)

where p(t) is the N-dimensional vector-valued position (or pose) variable of the trajectory. To convert this prior into the LTV-SDE form of Eq. (7), a Markov system state variable is declared:

x(t) \doteq \begin{bmatrix} p(t) \\ \dot{p}(t) \end{bmatrix}.    (12)

The prior in Eq. (11) can then easily be converted into the LTV-SDE of Eq. (7) by defining

A(t) = \begin{bmatrix} 0 & I \\ 0 & 0 \end{bmatrix},  u(t) = 0,  F(t) = \begin{bmatrix} 0 \\ I \end{bmatrix}.    (13)

D. Lie Group Basics

A group is an algebraic structure \{G, \circ\} consisting of a set of elements G and an operator \circ. We limit our discussion to matrix Lie groups, whose elements are square, invertible matrices and whose operator \circ is matrix multiplication. Every N-dimensional matrix Lie group G has an associated Lie algebra \mathfrak{g} [27, p.16]. The Lie algebra \mathfrak{g} coincides with the local tangent space to the manifold of G. The exponential map \exp : \mathfrak{g} \to G and logarithm map \log : G \to \mathfrak{g} define the mapping between the Lie group and the Lie algebra, respectively [27, p.18]. G also has an associated hat operator \wedge : \mathbb{R}^N \to \mathfrak{g} and vee operator \vee : \mathfrak{g} \to \mathbb{R}^N that convert elements in local coordinates \mathbb{R}^N to the Lie algebra \mathfrak{g} and vice versa [27, p.20]. The vector space \mathbb{R}^N is just a trivial Lie group, where \exp, \log, \wedge, and \vee are all identities.

III. SPARSE GP PRIORS ON LIE GROUPS

This section summarizes how GP priors are defined on Lie groups. For more details, readers are encouraged to read the authors' technical report [28].

A. Constant Body-Frame Velocity Prior

We use T \in G to represent an object in the Lie group G, so the continuous-time trajectory is written as T(t); trajectory measurements are observed at times t_1, \ldots, t_M, with associated states T_1, \ldots, T_M. To estimate T(t), we first define the Markov system state by appending the state with a velocity:

x(t) \doteq \{ T(t), \varpi(t) \},    (14)

where \varpi(t) is the 'body-frame velocity' variable (in SO(3) and SE(3), \varpi(t) is exactly the body-frame velocity, see [29, p.52] and [29, p.55], so we keep this name for general Lie groups), defined as

\varpi(t) \doteq (T(t)^{-1} \dot{T}(t))^\vee.    (15)

The \vee operator can always be applied to T(t)^{-1} \dot{T}(t), since for T \in G we have T^{-1} \dot{T} \in \mathfrak{g} [27, p.20].
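As a concrete illustration of the hat, vee, exponential, and logarithm maps of Section II-D, the following self-contained NumPy sketch implements them for SO(3), the rotation group used as an example throughout the paper; it is illustrative only and not part of the released library.

```python
import numpy as np

def hat(w):
    """Hat operator R^3 -> so(3): maps a vector to a skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(W):
    """Vee operator so(3) -> R^3: inverse of hat."""
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def expmap(w):
    """Exponential map from local coordinates R^3 to SO(3) (Rodrigues formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3) + hat(w)          # first-order approximation near identity
    W = hat(w / theta)
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

def logmap(R):
    """Logarithm map from SO(3) back to local coordinates (valid away from theta = pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-10:
        return vee(R - np.eye(3))          # first-order approximation near identity
    return theta / (2.0 * np.sin(theta)) * vee(R - R.T)

# Round trip: local coordinates -> group element -> local coordinates.
xi = np.array([0.1, -0.2, 0.3])
R = expmap(xi)
print(np.allclose(logmap(R), xi))          # True
```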
Similar to Eq. (11), we can define the constant 'body-frame velocity' prior on Lie groups as

\dot{\varpi}(t) = w(t),  w(t) \sim \mathcal{GP}(0, Q_C\, \delta(t - t')),    (16)

but this is a nonlinear SDE, which does not match the LTV-SDE defined by Eq. (7).

B. Locally Linear Constant-Velocity GP Priors

To define an LTV-SDE which can leverage the constant-velocity GP prior, we linearize the Lie group manifold around each T_i and define both a local GP and an LTV-SDE on the linear tangent space. We first define a local GP for any time t on the trajectory satisfying t_i \le t < t_{i+1},

T(t) = T_i \exp(\xi_i(t)^\wedge),  \xi_i(t) \sim \mathcal{N}(0, \mathcal{K}(t_i, t)),    (17)

where the local pose variable \xi_i(t) \in \mathbb{R}^N around T_i is defined by

\xi_i(t) \doteq \log(T_i^{-1} T(t))^\vee.    (18)

The local LTV-SDE that represents constant-velocity information is

\ddot{\xi}_i(t) = w(t),  w(t) \sim \mathcal{GP}(0, Q_C\, \delta(t - t')).    (19)

We further define the local Markov system state

\gamma_i(t) \doteq \begin{bmatrix} \xi_i(t) \\ \dot{\xi}_i(t) \end{bmatrix}.    (20)
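The local parameterization in Eqs. (17)–(18) can be checked numerically. The sketch below is illustrative only: it uses SciPy's rotation class, whose rotation-vector conversions play the role of exp and log on SO(3), together with made-up rotations, to express a state T(t) in the tangent space at T_i and map it back.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Anchor state T_i and a nearby state T(t) on SO(3) (made-up values).
T_i = Rotation.from_rotvec([0.2, -0.1, 0.4])
T_t = Rotation.from_rotvec([0.25, -0.05, 0.38])

# Eq. (18): local coordinates of T(t) around T_i,  xi_i(t) = log(T_i^-1 T(t))^vee.
xi = (T_i.inv() * T_t).as_rotvec()

# Eq. (17): recover T(t) from the local coordinates,  T(t) = T_i exp(xi^hat).
T_rec = T_i * Rotation.from_rotvec(xi)

print(np.allclose(T_rec.as_matrix(), T_t.as_matrix()))   # True
```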

With this local state, the local LTV-SDE of Eq. (19) can be rewritten as

\dot{\gamma}_i(t) \doteq \frac{d}{dt} \begin{bmatrix} \xi_i(t) \\ \dot{\xi}_i(t) \end{bmatrix} = \begin{bmatrix} \dot{\xi}_i(t) \\ w(t) \end{bmatrix}.    (21)

To prove the equivalence between the nonlinear SDE in Eq. (16) and the local LTV-SDE in Eq. (19), we first look at the identity [27, p.26]

(T(t)^{-1} \dot{T}(t))^\vee = \mathcal{J}_r(\xi_i(t))\, \dot{\xi}_i(t),    (22)

where \mathcal{J}_r is the right Jacobian of G. With Eq. (15) we have

\dot{\xi}_i(t) = \mathcal{J}_r(\xi_i(t))^{-1} \varpi(t).    (23)

If the small-time-interval assumption between any t_i and t_{i+1} is satisfied, \xi_i(t) is small enough that the right Jacobian is well approximated by \mathcal{J}_r(\xi_i(t)) \approx I and

\dot{\xi}_i(t) \approx \varpi(t).    (24)

So we have shown that the LTV-SDE in Eqs. (19) and (21) is a good approximation of the constant 'body-frame velocity' prior defined by Eq. (16).

Both the local GP and the LTV-SDE are defined on the tangent space, so they should only be applied around the current linearization point T_i. But if all t_i and t_{i+1} pairs have a small enough interval, the GP and LTV-SDE can be defined in a piecewise manner, and every point on the trajectory can be converted to a local variable \xi_i(t) based on its nearby estimated state T_i.

Note that, although we assume that the time intervals between all t_i and t_{i+1} are small in order to prove that the LTV-SDE in Eq. (19) is a good approximation of the constant 'body-frame velocity' prior in Eq. (16), this assumption does not need to be satisfied for the above LTV-SDE to be a valid GP prior. So we can still use the proposed sparse GP when trajectory time intervals are large, although the actual GP prior applied is then less accurate compared to the true constant 'body-frame velocity' prior. Note also that a special case of the above theory, on SE(3), was proposed in previous work [26], from which our extension to general matrix Lie groups is inspired.

C. A Factor Graph Perspective

Once the local GP and constant-velocity LTV-SDE are defined, we can write down the cost function J_{gp} used to incorporate information about the GP prior into the nonlinear least squares optimization in Eq. (3). As discussed, the GP prior cost function has the generic form

J_{gp} = \tfrac{1}{2} \| \mu - x \|^2_{\mathcal{K}},    (25)

but if the trajectory is generated by the constant-velocity LTV-SDE in Eq. (21), the GP prior cost can be specified as [18]

J_{gp} = J_0 + \sum_i J_i,  J_0 \doteq \tfrac{1}{2} e_0^\top \mathcal{K}_0^{-1} e_0,  J_i \doteq \tfrac{1}{2} e_i^\top Q_i^{-1} e_i,    (26)
e_0 \doteq x_0 - \mu_0,  e_i \doteq \Phi(t_{i+1}, t_i)\, \gamma_i(t_i) - \gamma_i(t_{i+1}),    (27)

where \Phi and the covariance matrix Q_i are [18]

\Phi(t, s) = \begin{bmatrix} I & (t - s) I \\ 0 & I \end{bmatrix},  Q_i = \begin{bmatrix} \tfrac{1}{3} \Delta t_i^3 Q_C & \tfrac{1}{2} \Delta t_i^2 Q_C \\ \tfrac{1}{2} \Delta t_i^2 Q_C & \Delta t_i Q_C \end{bmatrix},    (28)

with \Delta t_i \doteq t_{i+1} - t_i.

Since the GP prior cost J_{gp} has been written as a sum of squared cost terms, and each cost term is only related to nearby (local) Markov states, we can represent the least squares problem by factor graph models. In factor graphs, the system states are represented by variable nodes and the cost terms by cost factors. An example factor graph is shown in Fig. 2. By converting nonlinear least squares problems into factor graphs, we can take advantage of factor graph inference tools to solve the problems efficiently. Additional information about the relationship between factor graphs, sparse GPs, and SLAM problems can be found in [2], [18], [19], [20].

Fig. 2: An example factor graph, showing states x_0, x_1, ..., x_i, x_{i+1} (triangles) and factors (black boxes). GP prior factors connect consecutive states and define the prior information on the first state.
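To illustrate Eqs. (26)–(28), the following NumPy sketch computes the constant-velocity GP prior error e_i between two consecutive support states and its whitened cost J_i; the state dimension, Q_C, and the state values are made up, and this is not the released GTSAM-based implementation.

```python
import numpy as np

N = 3                          # dimension of the local pose coordinates
Q_C = 0.1 * np.eye(N)          # power-spectral density (hyperparameter)
dt = 0.5                       # time between consecutive support states

# Eq. (28): constant-velocity transition matrix and integrated noise covariance.
Phi = np.block([[np.eye(N), dt * np.eye(N)],
                [np.zeros((N, N)), np.eye(N)]])
Q_i = np.block([[dt**3 / 3.0 * Q_C, dt**2 / 2.0 * Q_C],
                [dt**2 / 2.0 * Q_C, dt * Q_C]])

# Two consecutive local Markov states gamma = [xi; xi_dot] (made-up values);
# xi is zero at t_i because the local frame is anchored at T_i.
gamma_i = np.concatenate([np.zeros(N), np.array([0.2, 0.0, -0.1])])
gamma_ip1 = np.concatenate([np.array([0.11, 0.0, -0.05]), np.array([0.2, 0.0, -0.1])])

# Eq. (27): prior error, and Eq. (26): its contribution to the GP prior cost.
e_i = Phi @ gamma_i - gamma_ip1
J_i = 0.5 * e_i @ np.linalg.solve(Q_i, e_i)
print(J_i)
```

Each such term couples only two consecutive states, which is exactly the binary GP prior factor drawn between neighbouring states in Fig. 2.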
D. Querying the Trajectory

One of the advantages of representing the continuous-time trajectory as a GP is that we have the ability to query the state of the robot at any time along the trajectory. For constant-velocity GP priors, the system state x(\tau), t_i < \tau < t_{i+1}, can be estimated from the two nearby states x(t_i) and x(t_{i+1}) [18], which allows efficient O(1) interpolation. We first calculate the mean value of the local state \hat{\gamma}_i(\tau),

\hat{\gamma}_i(\tau) = \Lambda(\tau)\, \hat{\gamma}_i(t_i) + \Psi(\tau)\, \hat{\gamma}_i(t_{i+1}),    (29)

where

\Lambda(\tau) = \Phi(\tau, t_i) - \Psi(\tau)\, \Phi(t_{i+1}, t_i),    (30)
\Psi(\tau) = Q_\tau\, \Phi(t_{i+1}, \tau)^\top Q_i^{-1},    (31)

and Q_\tau and Q_i are obtained from Eq. (28) with time intervals \tau - t_i and \Delta t_i, respectively. Once we have the mean value of the local state \hat{\gamma}_i(\tau), the mean value of the full state \hat{x}(\tau) = \{ \hat{T}(\tau), \hat{\varpi}(\tau) \} is

\hat{T}(\tau) = \hat{T}_i \exp\big( (\Lambda_1(\tau)\, \hat{\gamma}_i(t_i) + \Psi_1(\tau)\, \hat{\gamma}_i(t_{i+1}))^\wedge \big),    (32)
\hat{\varpi}(\tau) = \mathcal{J}_r(\hat{\xi}_i(\tau)) \big( \Lambda_2(\tau)\, \hat{\gamma}_i(t_i) + \Psi_2(\tau)\, \hat{\gamma}_i(t_{i+1}) \big),    (33)

where

\Lambda(\tau) = \begin{bmatrix} \Lambda_1(\tau) \\ \Lambda_2(\tau) \end{bmatrix},  \Psi(\tau) = \begin{bmatrix} \Psi_1(\tau) \\ \Psi_2(\tau) \end{bmatrix},
\hat{\gamma}_i(t_i) = \begin{bmatrix} 0 \\ \hat{\varpi}(t_i) \end{bmatrix},  \hat{\gamma}_i(t_{i+1}) = \begin{bmatrix} \hat{\xi}_i(t_{i+1}) \\ \mathcal{J}_r(\hat{\xi}_i(t_{i+1}))^{-1} \hat{\varpi}(t_{i+1}) \end{bmatrix},
\hat{\xi}_i(\tau) \doteq \log(\hat{T}_i^{-1} \hat{T}(\tau))^\vee,  \hat{\xi}_i(t_{i+1}) \doteq \log(\hat{T}_i^{-1} \hat{T}_{i+1})^\vee.
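Eqs. (28)–(31) are simple closed-form matrix expressions for the constant-velocity prior. Below is a small NumPy sketch, for a vector-valued state (the trivial Lie group case) with made-up values of Q_C and the support states, that builds Phi, Q, Lambda(tau), and Psi(tau) and interpolates the local state as in Eq. (29); the Phi and Q expressions repeat the previous sketch so this one stays self-contained.

```python
import numpy as np

N = 3                      # dimension of the (local) pose coordinates
Q_C = 0.1 * np.eye(N)      # power-spectral density of the white noise (hyperparameter)

def Phi(dt):
    """Constant-velocity state transition matrix over an interval dt, cf. Eq. (28)."""
    return np.block([[np.eye(N), dt * np.eye(N)],
                     [np.zeros((N, N)), np.eye(N)]])

def Q(dt):
    """Integrated process noise covariance over an interval dt, cf. Eq. (28)."""
    return np.block([[dt**3 / 3.0 * Q_C, dt**2 / 2.0 * Q_C],
                     [dt**2 / 2.0 * Q_C, dt * Q_C]])

def interpolation_matrices(t_i, t_ip1, tau):
    """Lambda(tau) and Psi(tau) from Eqs. (30)-(31)."""
    Q_i_inv = np.linalg.inv(Q(t_ip1 - t_i))
    Psi = Q(tau - t_i) @ Phi(t_ip1 - tau).T @ Q_i_inv
    Lam = Phi(tau - t_i) - Psi @ Phi(t_ip1 - t_i)
    return Lam, Psi

# Interpolate the local Markov state at tau between two support states, Eq. (29).
t_i, t_ip1, tau = 0.0, 1.0, 0.3
gamma_i = np.concatenate([np.zeros(N), np.array([0.1, 0.0, -0.2])])      # [xi; xi_dot] at t_i
gamma_ip1 = np.concatenate([np.array([0.12, -0.01, -0.15]), np.array([0.1, 0.0, -0.2])])
Lam, Psi = interpolation_matrices(t_i, t_ip1, tau)
gamma_tau = Lam @ gamma_i + Psi @ gamma_ip1
print(gamma_tau)
```

On a general Lie group, the interpolated local vector gamma_tau would then be pushed back onto the manifold with the exponential map and the right Jacobian as in Eqs. (32)–(33).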

E. Fusion of Asynchronous Measurements

Continuous-time trajectory interpolation affords GP-based trajectory estimation methods several advantages over discrete-time localization algorithms. In addition to providing a method for querying the trajectory at any time of interest, GP interpolation can be used to reduce the number of states needed to represent the robot's trajectory and to elegantly handle asynchronous measurements.

Assume there is a measurement z_\tau of state x(\tau) available at an arbitrary time \tau, t_i < \tau < t_{i+1}, with measurement function h_\tau and corresponding covariance \Sigma_\tau. The measurement cost can be incorporated as a 'virtual unary factor' in the graph and is given by

J_\tau(x(\tau)) = \tfrac{1}{2} \| z_\tau - h_\tau(x(\tau)) \|^2_{\Sigma_\tau}.    (34)

Since the system state x(\tau) is not explicitly available for optimization, we perform trajectory interpolation between x_i and x_{i+1} by Eqs. (32)–(33), and rewrite the cost in terms of the interpolated mean value \hat{x}(\tau):

J_\tau(x_i, x_{i+1}) = \tfrac{1}{2} \| z_\tau - h_\tau(\hat{x}(\tau)) \|^2_{\Sigma_\tau}.    (35)

Because the measurement cost is expressed in terms of x_i and x_{i+1}, a binary 'interpolated' factor can be added to the factor graph, and x_i and x_{i+1} are optimized without explicitly adding an additional state. The factor graph illustration is shown in Fig. 3.

Fig. 3: (a) Measurement at time \tau between states x(t_i) and x(t_{i+1}); the dashed line indicates it is not an actual factor. (b) The interpolated factor encodes the measurement at time \tau.
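A minimal sketch of the idea behind Eqs. (34)–(35): the residual of a measurement taken at an off-support time tau is written in terms of the two neighbouring support states through the interpolation of Section III-D. The interpolation matrices, the measurement function, and all values below are hypothetical placeholders, and the state is kept vector-valued for simplicity.

```python
import numpy as np

def interpolate_state(x_i, x_ip1, Lam, Psi):
    """Mean of the state at time tau from the two support states, cf. Eq. (29)."""
    return Lam @ x_i + Psi @ x_ip1

def interpolated_factor_error(x_i, x_ip1, z_tau, h_tau, Sigma_tau, Lam, Psi):
    """Whitened residual of the binary 'interpolated' factor, cf. Eq. (35)."""
    x_tau = interpolate_state(x_i, x_ip1, Lam, Psi)
    r = z_tau - h_tau(x_tau)
    L = np.linalg.cholesky(np.linalg.inv(Sigma_tau))
    return L.T @ r            # cost J_tau = 0.5 * ||r||^2_Sigma = 0.5 * ||L^T r||^2

# Hypothetical 2-dimensional example with placeholder interpolation matrices.
Lam = 0.7 * np.eye(2)
Psi = 0.3 * np.eye(2)
x_i, x_ip1 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
z_tau = np.array([0.35, 0.55])
h_tau = lambda x: x                       # identity measurement model for illustration
Sigma_tau = 0.01 * np.eye(2)
print(interpolated_factor_error(x_i, x_ip1, z_tau, h_tau, Sigma_tau, Lam, Psi))
```

Because the residual depends only on x_i and x_{i+1}, the measurement at tau enters the factor graph as a binary factor between the two support states, with no extra state added.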
IV. GPS AS A UNIFIED TRAJECTORY REPRESENTATION

Estimation: As noted in the introduction, the SLAM community has been using continuous-time trajectory representations for many years to naturally handle asynchronous measurements and continuous-time sensors like rolling-shutter cameras or laser scanners. Compared to standard continuous-time trajectory representations like splines, using sparse GPs in SLAM [18], [26], [20] has several special advantages:
- A GP represents a trajectory distribution, so the mean trajectory of interest naturally has a notion of associated uncertainty.
- The sparse GP prior does not break the inherent sparsity underlying SLAM problems, and aligns well with their factor graph structure (shown in Fig. 4(a)).
- The sparse factor graph structure allows for the use of efficient incremental inference tools, like iSAM2 [30], to enable online incremental GP regression [20].
- The constant-velocity prior [26] has physical meaning.

Planning: Recent work in motion planning has used sparse GPs for fast trajectory interpolation [21] and, later, GP regression to solve trajectory optimization problems [22]. For motion planning, sparse GPs provide the following benefits:
- Structure-exploiting inference can be performed on factor graphs (shown in Fig. 4(b)) as a direct result of using a sparse GP prior, yielding efficient motion planning algorithms.
- The number of states to optimize can be significantly reduced by parameterizing the trajectory with a few support states and using O(1)-time GP interpolation, further improving efficiency.
- The factor graph structure also allows incremental inference tools [30] to speed up the solution of incremental problems, including online replanning.

Fig. 4: Factor graphs comparing GP-based approaches for reasoning about continuous-time trajectories: (a) estimation [18], [26], [20]; (b) planning [22]; (c) STEAP [31]. White circles are states of the trajectories, diamonds are landmarks; black circles are GP prior factors, and squares are measurement factors. Gray circles indicate the states at the current time.

It is evident from Fig. 4(a) and (b) that both estimation and planning share a similar factor graph structure in which the GP prior is defined as in Fig. 2. Both problems can be solved via MAP estimation on factor graphs. The fundamental difference is that the estimation graph is backward-looking while the planning graph is forward-looking. When an autonomous robot needs to perform these two tasks at the same time, a logical idea is to combine the two factor graphs into a single one and perform MAP estimation on the combined graph. This removes redundancy and allows the two problems to be solved simultaneously. The factor graph for this technique, called "simultaneous trajectory estimation and planning" (STEAP) [31], is shown in Fig. 4(c). The information flow between the estimation and planning sub-graphs enhances the overall quality of the solution, and the factor graph structure allows the use of incremental inference tools, like iSAM2 [30]. Thus, STEAP can naturally solve closed-loop online replanning and yield a fast solution to trajectory estimation.

Estimation, planning, or STEAP can be derived from a common framework by simply changing the factors or the 'current state' designation. A unifying theme is the use of a sparse GP prior to represent trajectory distributions and MAP inference, i.e. GP regression, to find the optimal trajectory. Sparse GPs not only provide efficiency and model uncertainty, but also help bridge the gap between various robotics problems, allowing them to be combined and solved simultaneously. In the next section, we provide experimental results using our extension of sparse GPs to Lie groups for estimation and planning tasks that cannot be solved well using states defined in vector space.

V. EXPERIMENTAL RESULTS

We evaluated our framework on three different trajectory estimation tasks, on SE(2), SO(3), and Sim(3), and one planning task on SE(2) \times \mathbb{R}^n. We have open-sourced the C++ code for our library that implements GPs on Lie groups, building on the GTSAM library (https://bitbucket.org/gtborg/gtsam), for both estimation (https://github.com/gtrll/gpslam) and planning (https://github.com/gtrll/gpmp2). Trajectory estimation on SE(3) (a special case of our general Lie group formulation) has been reported in [26], and planning on vector space (a trivial Lie group) has been reported in [21], [22], so we do not repeat those evaluations.

A. SE(2): 2D Planar Range-Only SLAM

We conducted experiments on two range-only 2D SLAM problems to evaluate the proposed GP approach on SE(2). The datasets are Plaza1 and Plaza2 from Djugash et al. [13]. Both datasets were collected by a lawn mower equipped with a gyroscope and wheel encoders to measure odometry, and a radio node measuring ranges to four fixed beacons. Ground truth beacon positions and robot trajectories were measured by RTK-GPS. Fig. 5 shows the results, including ground truth trajectories and landmark positions, and odometry-only dead reckoning trajectories. We plot the trajectory error and the trajectory distribution (shown by the 3\sigma variance) estimated for the Plaza1 dataset in Fig. 6. In the middle of the Plaza1 dataset, the robot loses range measurements to all beacons. From the estimated trajectory distribution we see the uncertainty increase in response to this missing data.

Fig. 5: Planar SLAM results on the Plaza1/2 datasets [13], showing dead reckoned paths, ground truth and estimated trajectories, and ground truth and estimated landmarks.

Fig. 6: Trajectory error and estimated 3\sigma variance on the Plaza1 dataset. Green segments are states with range measurements, and red segments are states without range measurements.

The performance of our approach on SE(2) is compared with the naive sparse linear GP prior [18], which uses canonical coordinates x(t) = [x(t), y(t), \theta(t)]^\top \in \mathbb{R}^3 as the system state. The accuracy and efficiency of both approaches are evaluated by root mean square (RMS) error and optimization time, respectively. Table I shows the comparison results. Both trajectory and landmark estimation accuracies are improved by our Lie group approach. Since both approaches share the same sparse factor graph representation, there is no significant difference in efficiency between them.

TABLE I: Planar SLAM comparison of the Linear and SE(2) priors on Plaza1 and Plaza2: Position RMS (m), Rotation RMS (deg), Landmark RMS (m), and Optimization Time.

B. SO(3): 3D Attitude Estimation with an IMU

To evaluate our GP approach on SO(3), we designed an attitude estimation experiment using an inertial measurement unit (IMU). We collected an IMU dataset using a low-cost Pixhawk autopilot, which has a 3-axis accelerometer and a 3-axis gyroscope. Acceleration and angular rates were collected asynchronously, as angular rate was available at 166 Hz but acceleration was available at 40 Hz.

Fig. 7: Attitude estimation errors (pitch and roll) of the proposed approach on the IMU dataset, compared with gyroscope-only results, and the gyroscope bias estimated by the proposed approach.
Ground truth attitude was collected by an OptiTrack motion capture system.

A major advantage of using continuous-time trajectory representations is that estimation with asynchronous sensors can be accomplished simply. We implemented a batch attitude estimation approach with pre-integrated IMU factors [32] containing gyroscope measurements, and GP-interpolated acceleration factors which compare acceleration measurements against gravity. After optimization, attitudes are estimated at the gyroscope time stamps. Since the accelerometer does not provide yaw angle information with respect to the world frame, only pitch and roll angles are estimated and compared with ground truth.

Fig. 7 shows the estimated pitch and roll angle errors on the IMU dataset, compared with gyroscope-only estimation and with motion capture ground truth. The results show that both estimated pitch and roll angles are better aligned with ground truth, since the gyroscope drift from bias is compensated for by fusing the asynchronous acceleration measurements, and the sensor bias is also estimated. The few peaks in the roll angle error plot are due to the singularity caused by the Euler angle representation.

Fig. 8: Scale drift aware monocular SLAM results. (a) Google Maps view of the map with the path highlighted; (b) open-loop VO results, with the loop closure marked in red; (c) SE(3) pose graph, (d) Sim(3) pose graph, and (e) Sim(3) with GP and IMU results; (f) scale estimates of the Sim(3) approaches.

C. Sim(3): Scale Drift Aware Monocular SLAM

The transformation of a 3D rigid body is represented by SE(3), and most 3D SLAM approaches consider trajectories on SE(3). In monocular visual SLAM, camera motion and scene structure are recovered only up to scale, due to the projective nature of a single camera. Although the scale of monocular visual SLAM is locally consistent, the scale estimate suffers from drift due to the lack of an anchor. Several approaches [24], [25] have been proposed to solve this issue by estimating the trajectory on the similarity group Sim(3) [24], whose elements are defined by

S = \begin{bmatrix} sR & t \\ 0 & 1 \end{bmatrix},  R \in SO(3),    (36)

where s is the estimated local scale.

We implemented sparse GPs on Sim(3) and conducted monocular visual SLAM experiments to show how GPs help with estimation on Sim(3). We built a hand-held monocular camera with an IMU, and collected a forward-looking outdoor dataset as shown in Fig. 8(a). The camera and IMU readings are also collected asynchronously, similar to the

