Visual-Inertial-Wheel Odometry With Online Calibration


Visual-Inertial-Wheel Odometry with Online Calibration

Woosik Lee, Kevin Eckenhoff, Yulin Yang, Patrick Geneva, and Guoquan Huang

Abstract— In this paper, we introduce a novel visual-inertial-wheel odometry (VIWO) system for ground vehicles, which efficiently fuses multi-modal visual, inertial, and 2D wheel odometry measurements in a sliding-window filtering fashion. As multi-sensor fusion requires both intrinsic and extrinsic (spatiotemporal) calibration parameters, which may vary over time during terrain navigation, we propose to perform VIWO along with online sensor calibration of the wheel encoders' intrinsic and extrinsic parameters. To this end, we analytically derive the 2D wheel odometry measurement model from the raw wheel encoders' readings and optimally fuse this 2D relative motion information with 3D visual-inertial measurements. Additionally, an observability analysis is performed for the linearized VIWO system, which identifies five commonly-seen degenerate motions for the wheel calibration parameters. The proposed VIWO system has been validated extensively in both Monte-Carlo simulations and real-world experiments in large-scale urban driving scenarios.

I. INTRODUCTION

Autonomous ground vehicles have found their way into many applications, from autonomous driving and warehousing to military and agricultural robots navigating rough off-road terrains. High-precision, consistent 3D localization with low-cost multi-modal sensors (given that engineered robotic systems have limited cost budgets) is essential to enable such autonomy. Among all possible sensor suites, cameras, IMUs, and wheel encoders are appealing because of their ubiquity and low cost while providing sufficient information for 3D motion estimation. While visual-inertial odometry (VIO) has witnessed great success in recent years and has shown that even a single camera and IMU can provide accurate 3D motion tracking [1], VIO can suffer from unobservability if it undergoes planar motion such as constant acceleration, which is a common case for ground vehicles. Thus, it is necessary to further aid VIO with additional sensors such as wheel encoders (which typically come with wheeled ground vehicles) [2], [3]. As such, in this work we aim to develop efficient visual-inertial-wheel odometry (VIWO).

It is well understood that accurate calibration is a prerequisite for multi-sensor fusion; the calibration is often obtained offline and assumed to be constant during operation (e.g., [2]). However, calibration parameters may change over time, for example, due to mechanical vibration and environmental effects, which, if not addressed properly, can hurt the estimation performance (e.g., see Fig. 1). We thus advocate performing online sensor calibration of both intrinsic and extrinsic parameters in the proposed VIWO.

Fig. 1: Simulation results for VIWO with online calibration (blue), VIWO without calibration (red), and VIO (yellow). The green square and red diamond correspond to the start and end of the 8.9 km trajectory, respectively.

[Footnote: This work was partially supported by the University of Delaware (UD) College of Engineering, the NSF (IIS-1924897), the ARL (W911NF-19-20226, JWS 10-051-003), and Google ARCore. Yang was partially supported by the University Doctoral Fellowship and Geneva by the Delaware Space Grant College and Fellowship Program (NASA Grant NNX15AI19H). The authors are with the Robot Perception and Navigation Group (RPNG), University of Delaware, Newark, DE 19716, USA. {woosik, keck, yuyang, pgeneva, ghuang}@udel.edu]

In particular, an efficient, consistent, and tightly-coupled VIWO estimator is developed, which also performs online sensor calibration of the wheel encoder's intrinsics and the spatiotemporal extrinsics between the wheel odometer and the IMU/camera. Note that IMU/camera calibration has been well studied in our prior work [4] and is assumed known in this work without loss of generality. Note also that we employ a differential-drive kinematic model [5], instead of the instantaneous center of rotation (ICR)-based model for skid-steering robots as in [3], because it arguably better models the actual kinematics of two-wheel-driven (not tracked) vehicles, and its intrinsic calibration provides robustness to non-systematic errors. Specifically, the main contributions of this work include:

- We develop an efficient MSCKF-based VIWO estimator to optimally fuse IMU, camera, and preintegrated wheel measurements, which particularly models the wheel-IMU time offset and performs online calibration of both intrinsic and extrinsic spatiotemporal parameters.
- We derive a state transition matrix for the MSCKF-based linearized system which implicitly incorporates the stochastic cloning, and perform an observability analysis based on it to identify degenerate motions that cause calibration parameters to become unobservable.
- We evaluate the proposed VIWO extensively in simulations, showing the calibration convergence, the robustness to poor initial calibration values, and the estimation consistency. We demonstrate the unobservability of the calibration parameters when the vehicle undergoes degenerate motions. The proposed VIWO is also validated in a large-scale urban driving experiment.

II. RELATED WORK

Rich literature exists on differential-drive kinematic models and their offline intrinsic calibration [6]–[9]. For example, Kummerle et al. [10] performed online calibration within a graph optimization framework, and [11], [12] calibrated sensors within a filtering framework. These methods focus on processing odometry measurements at the sensor rate and thus can suffer computational burdens as the frequency increases. In contrast, preintegration of odometry measurements over time creates an inferred measurement about the change of the robot state and allows updates at a lower rate, thus reducing the computational burden. For example, wheel odometry measurements are preintegrated for 2D localization with online intrinsic calibration in [13], [14]. Note that in [3], sliding-window optimization for 3D localization of an ICR-based skid-steering robot [15], along with online wheel intrinsic calibration, was performed. Although the ICR model encompasses differential-drive models as a special case, as the model incorporates the wheel slippage into the intrinsics, the ICR intrinsic parameters were modeled as a random walk, which requires more parameters to tune and may introduce more degenerate cases.

Offline extrinsic calibration of the wheel odometry and other sensors has also been well studied. For example, Antonelli et al. [16] performed relative 3D orientation and 2D position calibration between the camera and the odometry frame with known landmarks, while Heng et al. [17] calibrated the wheel and multi-camera system without the need for environmental targets. Online calibration approaches also exist; for instance, in [18]–[20] graph optimization was used for 6DOF spatial calibration. However, all these methods assumed the sensors are time-synchronized. Note that the time offset is different from sensor asynchronicity and describes the disagreement between sensor clocks. As shown in our previous work [21], even a small time offset error between the sensors can greatly affect estimation errors. To the best of our knowledge, no work has yet investigated this time offset calibration between wheel odometry and IMU/camera, although significant research effort in VIO has studied both online calibration of the IMU/camera time offset [22]–[26] and offline calibration [27]–[30].

Observability analysis [31], [32] is important due to its ability to indicate the minimum states needed to initialize an estimator, identify degenerate motions which can affect estimation accuracy [2], [33], and design consistent estimators [34]. Understanding observability properties and degenerate motions is especially essential for ground vehicles due to their constrained motions, which prevent the full excitation of the sensor suite. For the differential-drive model, Martinelli et al. [35] showed that with camera and wheel sensors, the wheel odometry intrinsic and 2D extrinsic calibration parameters are observable up to scale under general motion. The author also found a degenerate motion: the parameters are not observable when the wheels keep the same speed ratio. Censi et al. [36] performed intrinsic and 2D extrinsic calibration between wheel and laser sensors and showed three motion profiles for which the parameters are observable, but did not provide specific degenerate motions. Beyond the differential-drive model, Zuo et al. [3] analyzed the observability of the intrinsics of the skid-steering model with degenerate motion studies, and Yang et al. [4] carried out comprehensive observability studies on the extrinsic and time offset parameters between a camera and an IMU and provided degenerate motion analysis.

III. VIO WITH MSCKF

In this section, we briefly review VIO within the standard MSCKF framework [37], which serves as the baseline for the proposed VIWO system.
Specifically, at time $t_k$, the state vector $\mathbf{x}_k$ consists of the current inertial state $\mathbf{x}_{I_k}$ and $n$ historical IMU pose clones $\mathbf{x}_{C_k}$ represented in the global frame $\{G\}$:

$$\mathbf{x}_k = \begin{bmatrix} \mathbf{x}_{I_k}^\top & \mathbf{x}_{C_k}^\top \end{bmatrix}^\top \tag{1}$$

$$\mathbf{x}_{I_k} = \begin{bmatrix} {}^{I_k}_G\bar{q}^\top & {}^G\mathbf{p}_{I_k}^\top & {}^G\mathbf{v}_{I_k}^\top & \mathbf{b}_g^\top & \mathbf{b}_a^\top \end{bmatrix}^\top \tag{2}$$

$$\mathbf{x}_{C_k} = \begin{bmatrix} {}^{I_{k-1}}_G\bar{q}^\top & {}^G\mathbf{p}_{I_{k-1}}^\top & \cdots & {}^{I_{k-n}}_G\bar{q}^\top & {}^G\mathbf{p}_{I_{k-n}}^\top \end{bmatrix}^\top \tag{3}$$

where ${}^{I_k}_G\bar{q}$ is the JPL unit quaternion [38] corresponding to the rotation ${}^{I_k}_G\mathbf{R}$ from $\{G\}$ to the IMU frame $\{I\}$, ${}^G\mathbf{p}_{I_k}$ and ${}^G\mathbf{v}_{I_k}$ are the position and velocity of $\{I\}$ in $\{G\}$, and $\mathbf{b}_g$ and $\mathbf{b}_a$ are the biases of the gyroscope and accelerometer. We define $\mathbf{x} = \hat{\mathbf{x}} \boxplus \tilde{\mathbf{x}}$, where $\mathbf{x}$ is the true state, $\hat{\mathbf{x}}$ is its estimate, $\tilde{\mathbf{x}}$ is the error state, and $\boxplus$ is the operation which maps the error state vector to its corresponding manifold [39].

A. IMU Kinematic Model

The state is propagated forward in time using the IMU linear acceleration $\mathbf{a}_m$ and angular velocity $\boldsymbol{\omega}_m$ measurements:

$$\mathbf{a}_m = \mathbf{a} + {}^I_G\mathbf{R}\,{}^G\mathbf{g} + \mathbf{b}_a + \mathbf{n}_a, \qquad \boldsymbol{\omega}_m = \boldsymbol{\omega} + \mathbf{b}_g + \mathbf{n}_g \tag{4}$$

where $\mathbf{a}$ and $\boldsymbol{\omega}$ are the true local acceleration and angular velocity, ${}^G\mathbf{g} = [0\;\,0\;\,9.81]^\top$ is the global gravity, and $\mathbf{n}_a$ and $\mathbf{n}_g$ are zero-mean Gaussian noises. These measurements are used to propagate the state estimate and covariance from time $t_k$ to $t_{k+1}$ based on the standard inertial kinematic model $f(\cdot)$ [38]:

$$\hat{\mathbf{x}}_{k+1|k} = f(\hat{\mathbf{x}}_{k|k}, \mathbf{a}_m, \boldsymbol{\omega}_m, \mathbf{0}, \mathbf{0}) \tag{5}$$

$$\mathbf{P}_{k+1|k} = \boldsymbol{\Phi}(t_{k+1}, t_k)\,\mathbf{P}_{k|k}\,\boldsymbol{\Phi}(t_{k+1}, t_k)^\top + \mathbf{Q}_k \tag{6}$$

where $\hat{\mathbf{x}}_{a|b}$ denotes the estimate at time $t_a$ formed by processing the measurements up to time $t_b$, and $\boldsymbol{\Phi}$ and $\mathbf{Q}_k$ are the state transition matrix and discrete noise covariance.

B. Camera Measurement Model

Sparse corner features are detected and tracked over a window of images associated with the cloned frames $\mathbf{x}_{C_k}$. The resulting bearing measurements, $\mathbf{z}_k$, are given by:

$$\mathbf{z}_k = \boldsymbol{\Pi}({}^{C_k}\mathbf{p}_f) + \mathbf{n}_k \tag{7}$$

$${}^{C_k}\mathbf{p}_f = {}^C_I\mathbf{R}\,{}^{I_k}_G\mathbf{R}\,({}^G\mathbf{p}_f - {}^G\mathbf{p}_{I_k}) + {}^C\mathbf{p}_I \tag{8}$$

where $\boldsymbol{\Pi}([x\;\,y\;\,z]^\top) = [x/z\;\;y/z]^\top$ is the perspective projection, ${}^G\mathbf{p}_f$ is the 3D point feature position, and ${}^C_I\mathbf{R}$ and ${}^C\mathbf{p}_I$ are the camera-to-IMU extrinsic transformation.1 Stacking
Stacking1 Note that in this paper we assume that the camera-IMU extrinsics alongwith the temporal calibration (i.e., the time offset between the two sensors),are known – which however can be calibrated either offline or online [40]– so that we can focus on the online intrinsic and extrinsic calibrations ofthe odometer and IMU in the proposed VIWO system.

all measurements corresponding to a single feature and performing linear marginalization of the feature's position (via a null-space operation) results in a residual that depends only on the state [37]:

$$\tilde{\mathbf{z}}_{c_k} = \mathbf{H}_{x_k}\tilde{\mathbf{x}}_k + \mathbf{n}_{f_k} \tag{9}$$

This can then be used directly in an EKF update without storing features in the state, leading to substantial computational savings as the problem size remains bounded over time.

Fig. 2: Sensor frames in this paper: IMU frame {I}, camera sensor frame {C}, and odometry frame {O} located at the center of the baselink.

IV. WHEEL-ENCODER MEASUREMENT MODEL

Building upon the preceding VIO models, we now generalize our 3D motion tracking system to optimally incorporate 2D wheel-encoder measurements that are commonplace in ground vehicles. In particular, a ground vehicle is often driven by two differential (left and right) wheels mounted on a common axis (baselink), each equipped with an encoder providing local angular rate readings [5]:

$$\omega_{ml} = \omega_l + n_{\omega l}, \qquad \omega_{mr} = \omega_r + n_{\omega r} \tag{10}$$

where $\omega_l$ and $\omega_r$ are the true angular velocities of each wheel, and $n_{\omega l}$ and $n_{\omega r}$ are the corresponding zero-mean white Gaussian noises. These encoder readings can be combined to provide the 2D linear and angular velocities of the vehicle body, expressed in the odometer frame $\{O\}$ at the center of the baselink:

$${}^O\omega = (\omega_r r_r - \omega_l r_l)/b, \qquad {}^Ov = (\omega_r r_r + \omega_l r_l)/2 \tag{11}$$

where $\mathbf{x}_{WI} := [r_l\;\,r_r\;\,b]^\top$ collects the left and right wheel radii and the baselink length, respectively.

A. Wheel Odometry Preintegration

As the wheel encoders typically provide measurements at a higher rate (e.g., 100-500 Hz) than the camera, it would be too expensive to perform an EKF update at their rate.
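The differential-drive mapping (11) from encoder rates to body-frame velocities can be sketched in a few lines (a minimal illustration with hypothetical wheel values; `r_l`, `r_r`, `b` are the intrinsics $\mathbf{x}_{WI}$):

```python
def body_velocity(omega_l, omega_r, r_l, r_r, b):
    """Map left/right wheel angular rates (rad/s) to the 2D body-frame
    yaw rate and forward speed of Eq. (11)."""
    omega = (omega_r * r_r - omega_l * r_l) / b   # yaw rate (rad/s)
    v = (omega_r * r_r + omega_l * r_l) / 2.0     # forward speed (m/s)
    return omega, v
```

With equal wheel speeds the yaw rate vanishes (pure translation); with opposite wheel speeds the forward speed vanishes (spin in place), which is exactly the behavior exploited later in the degenerate-motion analysis.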
On the other hand, as a sliding window of states corresponding to the imaging times are stochastically cloned in the state vector [see (1)], we naturally preintegrate the wheel odometry measurements (11) between the two latest camera poses and then use this integrated 2D motion measurement for the MSCKF update together with the visual feature measurements [see (9)]. As a result, the state vector of our VIWO remains the same (up to online calibration) as that of the VIO, incurring only a small extra computational overhead.

Consider preintegrating wheel odometry measurements between two clone times $t_k$ and $t_{k+1}$. The continuous-time 2D kinematic model for $t_\tau \in [t_k, t_{k+1}]$ is given by:

$${}^{O_k}\dot{\theta}_{O_\tau} = {}^{O_\tau}\omega \tag{12}$$
$${}^{O_k}\dot{x}_{O_\tau} = {}^{O_\tau}v\cos({}^{O_k}\theta_{O_\tau}) \tag{13}$$
$${}^{O_k}\dot{y}_{O_\tau} = {}^{O_\tau}v\sin({}^{O_k}\theta_{O_\tau}) \tag{14}$$

where ${}^{O_k}\theta_{O_\tau}$ is the local yaw angle, and ${}^{O_k}x_{O_\tau}$ and ${}^{O_k}y_{O_\tau}$ are the 2D position of $\{O_\tau\}$ in the starting integration frame $\{O_k\}$. Note that this model reveals the fact that the 2D orientation evolves over the integration period. We then integrate these differential equations from $t_k$ to $t_{k+1}$ and obtain the 2D relative pose measurement as follows:

$$\mathbf{z}_{k+1} = \begin{bmatrix} {}^{O_k}\theta_{O_{k+1}} \\ {}^{O_k}\mathbf{d}_{O_{k+1}} \end{bmatrix} = \begin{bmatrix} \int_{t_k}^{t_{k+1}} {}^{O_t}\omega \, dt \\ \int_{t_k}^{t_{k+1}} {}^{O_t}v\cos({}^{O_k}\theta_{O_t}) \, dt \\ \int_{t_k}^{t_{k+1}} {}^{O_t}v\sin({}^{O_k}\theta_{O_t}) \, dt \end{bmatrix} \tag{15}$$

$$=: \mathbf{g}(\boldsymbol{\omega}_{l(k:k+1)}, \boldsymbol{\omega}_{r(k:k+1)}, \mathbf{x}_{WI}) \tag{16}$$

where $\boldsymbol{\omega}_{(k:k+1)}$ denotes all the wheel measurements integrated from $t_k$ to $t_{k+1}$.

Algorithm 1 VIWO Odometry Measurement Update

procedure WHEELUPDATE(x̂_{k+1|k}, {ω_ml, ω_mr}_{k:k+1})
    // Preintegrate measurements and Jacobians
    z ← 0_{3×1};  ∂g/∂x̃_WI ← 0_3;  P_m ← 0_3
    for each ω_ml(τ), ω_mr(τ) in {ω_ml(k:k+1), ω_mr(k:k+1)} do
        z ← one integration step of (12)-(14) using (11)
        P_m ← Φ_tr,τ P_m Φ_tr,τ^⊤ + Φ_n,τ Q_τ Φ_n,τ^⊤
        ∂g/∂x̃_WI ← Φ_tr,τ ∂g/∂x̃_WI + Φ_WI,τ
    end for
    // Compute residual and Jacobian
    z̃ ← z − h(x̂_I, x̂_C, x̂_WE, ^O t̂_I, x̂_WI)
    H ← [∂h/∂x̃_I  ∂h/∂x̃_C  ∂h/∂x̃_WE  ∂g/∂x̃_WI  ∂h/∂^O t̃_I]
    // Perform χ² test & update
    if χ²(z̃, H, P_m) passes then
        x̂_{k+1|k+1} ← EKFUpdate(x̂_{k+1|k}, z̃, H, P_m)
    end if
end procedure

If both the extrinsic and time offset (spatiotemporal) calibration parameters between the odometer and IMU/camera are perfectly known, the above integrated odometry measurements can be readily used in the MSCKF update as in [2].
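The preintegration $\mathbf{g}(\cdot)$ in (15)-(16) can be sketched with a simple Euler discretization of (12)-(14); this is an illustrative sketch only (a practical implementation would use a higher-order scheme, e.g., midpoint integration, and track the Jacobians alongside):

```python
import numpy as np

def preintegrate_wheel(omega_l, omega_r, x_wi, dt):
    """Euler preintegration of the 2D kinematics (12)-(14): accumulate
    relative yaw and position in the starting frame {O_k} from
    sequences of left/right encoder rates sampled at step dt."""
    r_l, r_r, b = x_wi
    theta, x, y = 0.0, 0.0, 0.0
    for wl, wr in zip(omega_l, omega_r):
        w = (wr * r_r - wl * r_l) / b     # body yaw rate, Eq. (11)
        v = (wr * r_r + wl * r_l) / 2.0   # body forward speed
        x += v * np.cos(theta) * dt       # Eq. (13)
        y += v * np.sin(theta) * dt       # Eq. (14)
        theta += w * dt                   # Eq. (12)
    return theta, x, y
```

Note that the yaw evolves inside the integral, so the result genuinely depends on the intrinsics $\mathbf{x}_{WI}$, which is why re-linearization with respect to them is addressed next.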
However, in practice this often is not the case, for example due to inaccurate prior calibration or mechanical vibration. To cope with possibly time-varying calibration parameters during terrain navigation, the proposed VIWO performs online calibration of the wheel encoders' intrinsics $\mathbf{x}_{WI}$, and the extrinsics $\mathbf{x}_{WE} = [{}^O_I\bar{q}^\top \;\; {}^O\mathbf{p}_I^\top]^\top$ and time offset ${}^Ot_I$ between the odometer and IMU. Note again that the IMU and camera are assumed to be calibrated and synchronized for presentation brevity. To this end, we augment the state vector (1) with these parameters:

$$\mathbf{x}_k = \begin{bmatrix} \mathbf{x}_{I_k}^\top & \mathbf{x}_{C_k}^\top & \mathbf{x}_{WE}^\top & {}^Ot_I & \mathbf{x}_{WI}^\top \end{bmatrix}^\top \tag{17}$$

In what follows, we will derive in detail the relation between the preintegrated wheel odometry measurements (16) and the augmented state (17) by properly taking into account the intrinsic/extrinsic calibration parameters:

$$\mathbf{z}_{k+1} = \mathbf{h}(\mathbf{x}_{I_{k+1}}, \mathbf{x}_{C_{k+1}}, \mathbf{x}_{WE}, {}^Ot_I, \mathbf{x}_{WI}) \tag{18}$$

B. Odometry Measurement wrt. Intrinsics

As evident from (11) and (16), the wheel-odometry integration entangles the intrinsics $\mathbf{x}_{WI}$, and ideally we should

re-integrate these measurements whenever a new estimate of the intrinsics is available, which however negates the computational efficiency of preintegration. To address this issue, we linearize the preintegrated odometry measurements about the current estimate of the intrinsics, while properly taking into account the measurement uncertainty due to the linearization errors of the intrinsics and the noise [see (16)]:

$$\mathbf{z}_{k+1} \simeq \mathbf{g}(\boldsymbol{\omega}_{ml(k:k+1)}, \boldsymbol{\omega}_{mr(k:k+1)}, \hat{\mathbf{x}}_{WI}) + \frac{\partial \mathbf{g}}{\partial \tilde{\mathbf{x}}_{WI}}\tilde{\mathbf{x}}_{WI} + \frac{\partial \mathbf{g}}{\partial \mathbf{n}_\omega}\mathbf{n}_\omega \tag{19}$$

where $\mathbf{n}_\omega$ is the stacked noise vector whose $\tau$-th block corresponds to the encoder measurement noise at $t_\tau \in [t_k, t_{k+1}]$ (i.e., $[n_{\omega l,\tau} \;\; n_{\omega r,\tau}]^\top$) [see (10)].

Clearly, performing an EKF update with this measurement requires the Jacobians with respect to both the intrinsics and the noise in (19). It is important to note that, as the preintegration of $\mathbf{g}(\cdot)$ is computed incrementally using the encoders' measurements in the interval $[t_k, t_{k+1}]$, we accordingly calculate the measurement Jacobians incrementally, one step at a time. Note also that since the noise Jacobian and $\mathbf{n}_\omega$ are often of high dimension and may be computationally expensive when computing the stacked noise covariance during the update, we instead compute the noise covariance by performing small matrix operations at each step.

Specifically, linearization of (15) at time $t_\tau$ yields the following recursive equations:

$${}^{O_k}\tilde{\theta}_{O_{\tau+1}} = {}^{O_k}\tilde{\theta}_{O_\tau} + \mathbf{H}_{1,\tau}\tilde{\mathbf{x}}_{WI} + \mathbf{H}_{2,\tau}\mathbf{n}_{\omega,\tau} \tag{20}$$
$${}^{O_k}\tilde{x}_{O_{\tau+1}} = {}^{O_k}\tilde{x}_{O_\tau} + \mathbf{H}_{3,\tau}{}^{O_k}\tilde{\theta}_{O_\tau} + \mathbf{H}_{4,\tau}\tilde{\mathbf{x}}_{WI} + \mathbf{H}_{5,\tau}\mathbf{n}_{\omega,\tau} \tag{21}$$
$${}^{O_k}\tilde{y}_{O_{\tau+1}} = {}^{O_k}\tilde{y}_{O_\tau} + \mathbf{H}_{6,\tau}{}^{O_k}\tilde{\theta}_{O_\tau} + \mathbf{H}_{7,\tau}\tilde{\mathbf{x}}_{WI} + \mathbf{H}_{8,\tau}\mathbf{n}_{\omega,\tau} \tag{22}$$

where the intermediate Jacobians $\mathbf{H}_{l,\tau}$, $l = 1 \ldots 8$, can be found in our companion technical report [41]. With the above equations, we can recursively compute the noise covariance $\mathbf{P}_{m,\tau+1}$ and the Jacobian $\frac{\partial \mathbf{g}_{\tau+1}}{\partial \tilde{\mathbf{x}}_{WI}}$ as follows:

$$\boldsymbol{\Phi}_{tr,\tau} = \begin{bmatrix} 1 & 0 & 0 \\ \mathbf{H}_{3,\tau} & 1 & 0 \\ \mathbf{H}_{6,\tau} & 0 & 1 \end{bmatrix}, \quad \boldsymbol{\Phi}_{WI,\tau} = \begin{bmatrix} \mathbf{H}_{1,\tau} \\ \mathbf{H}_{4,\tau} \\ \mathbf{H}_{7,\tau} \end{bmatrix}, \quad \boldsymbol{\Phi}_{n,\tau} = \begin{bmatrix} \mathbf{H}_{2,\tau} \\ \mathbf{H}_{5,\tau} \\ \mathbf{H}_{8,\tau} \end{bmatrix} \tag{23}$$

$$\mathbf{P}_{m,\tau+1} = \boldsymbol{\Phi}_{tr,\tau}\mathbf{P}_{m,\tau}\boldsymbol{\Phi}_{tr,\tau}^\top + \boldsymbol{\Phi}_{n,\tau}\mathbf{Q}_\tau\boldsymbol{\Phi}_{n,\tau}^\top \tag{24}$$

$$\frac{\partial \mathbf{g}_{\tau+1}}{\partial \tilde{\mathbf{x}}_{WI}} = \boldsymbol{\Phi}_{tr,\tau}\frac{\partial \mathbf{g}_\tau}{\partial \tilde{\mathbf{x}}_{WI}} + \boldsymbol{\Phi}_{WI,\tau} \tag{25}$$

where $\mathbf{Q}_\tau$ is the noise covariance of the wheel encoder measurement at $t_\tau$.
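The recursions (23)-(25) can be sketched as follows (an illustrative sketch, not the paper's implementation: the per-step Jacobians $\mathbf{H}_{1,\tau},\ldots,\mathbf{H}_{8,\tau}$ are taken as given inputs here, since their closed forms are in [41]):

```python
import numpy as np

def propagate_jacobians(H_list, Q_list):
    """Recursive propagation of the preintegration noise covariance
    (24) and intrinsic Jacobian (25). H_list holds per-step tuples
    (H1..H8): H3, H6 are scalars; H1, H4, H7 are 1x3 rows (wrt
    [r_l, r_r, b]); H2, H5, H8 are 1x2 rows (wrt the encoder noise).
    Q_list holds the 2x2 encoder noise covariances Q_tau."""
    P_m = np.zeros((3, 3))        # accumulated measurement noise covariance
    dg_dxwi = np.zeros((3, 3))    # accumulated Jacobian wrt intrinsics
    for (H1, H2, H3, H4, H5, H6, H7, H8), Q in zip(H_list, Q_list):
        Phi_tr = np.array([[1.0, 0.0, 0.0],
                           [H3,  1.0, 0.0],
                           [H6,  0.0, 1.0]])   # Eq. (23)
        Phi_wi = np.vstack([H1, H4, H7])       # 3x3
        Phi_n = np.vstack([H2, H5, H8])        # 3x2
        P_m = Phi_tr @ P_m @ Phi_tr.T + Phi_n @ Q @ Phi_n.T   # Eq. (24)
        dg_dxwi = Phi_tr @ dg_dxwi + Phi_wi                   # Eq. (25)
    return P_m, dg_dxwi
```

Each step involves only 3x3 and 3x2 products, which is the "small matrix operations" point made above: the stacked noise covariance never has to be formed.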
These equations show how the Jacobian and the noise covariance evolve during the preintegration interval. We can thus recursively compute the measurement noise covariance $\mathbf{P}_m$ and the Jacobian matrix $\frac{\partial \mathbf{g}}{\partial \tilde{\mathbf{x}}_{WI}}$ at the end of the preintegration, $t_{k+1}$, based on the zero initial conditions (i.e., $\mathbf{P}_{m,0} = \mathbf{0}_3$, $\frac{\partial \mathbf{g}_0}{\partial \tilde{\mathbf{x}}_{WI}} = \mathbf{0}_3$).

C. Odometry Measurement wrt. Extrinsics

1) Spatial calibration: Note that the preintegrated wheel measurement (16) provides only the 2D relative motion on the odometer's plane, while the VIWO state vector (17) contains the 3D IMU/camera poses. In order to establish the connection of the preintegrated odometry with the state, the relative transformation (extrinsic calibration) between the IMU and the odometer is clearly required [see (15)]:

$$\begin{bmatrix} {}^{O_k}\theta_{O_{k+1}} \\ {}^{O_k}\mathbf{d}_{O_{k+1}} \end{bmatrix} = \begin{bmatrix} \mathbf{e}_3^\top \mathrm{Log}\big({}^O_I\mathbf{R}\,{}^{I_{k+1}}_G\mathbf{R}\,{}^{I_k}_G\mathbf{R}^\top\,{}^O_I\mathbf{R}^\top\big) \\ \boldsymbol{\Lambda}\big({}^O_I\mathbf{R}\,{}^{I_k}_G\mathbf{R}({}^G\mathbf{p}_{I_{k+1}} - {}^G\mathbf{p}_{I_k} + {}^{I_{k+1}}_G\mathbf{R}^\top{}^I\mathbf{p}_O) + {}^O\mathbf{p}_I\big) \end{bmatrix} \tag{26}$$

where $\boldsymbol{\Lambda} = [\mathbf{e}_1\;\,\mathbf{e}_2]^\top$, $\mathbf{e}_i$ is the $i$-th standard unit basis vector, and $\mathrm{Log}(\cdot)$ is the $SO(3)$ matrix logarithm function [42]. As this measurement depends on the two consecutive poses as well as the odometer/IMU extrinsics, when updating with it in the MSCKF, the corresponding measurement Jacobians $\frac{\partial \mathbf{h}}{\partial \tilde{\mathbf{x}}_{I_{k+1}}}$, $\frac{\partial \mathbf{h}}{\partial \tilde{\mathbf{x}}_{C_{k+1}}}$, and $\frac{\partial \mathbf{h}}{\partial \tilde{\mathbf{x}}_{WE}}$ [(27)-(29)] are needed; they are obtained by linearizing (26) about the current estimates and involve the skew-symmetric matrix operator $\lfloor\cdot\rfloor$; their full expressions can be found in our companion technical report [41].

2) Temporal calibration: To account for the difference between sensor clocks and measurement delay, we model an unknown constant time offset between the IMU clock and the odometer clock:2 ${}^It_k = {}^Ot_k + {}^Ot_I$, where ${}^It_k$ and ${}^Ot_k$ are the times when measurement $\mathbf{z}_k$ was collected in the IMU and odometer clocks, and ${}^Ot_I$ is the time offset between the two time references. Consider that we want to derive preintegrated odometry constraints between two cloned states at the true IMU times ${}^It_k$ and ${}^It_{k+1}$.
Using the current best estimate of the time offset ${}^O\hat{t}_I$, we can integrate our wheel encoder measurements between the odometer times ${}^Ot_k = {}^It_k - {}^O\hat{t}_I$ and ${}^Ot_{k+1} = {}^It_{k+1} - {}^O\hat{t}_I$, whose corresponding times in the IMU clock are:

$${}^It'_k := {}^It_k - {}^O\hat{t}_I + {}^Ot_I = {}^It_k + {}^O\tilde{t}_I \tag{30}$$
$${}^It'_{k+1} := {}^It_{k+1} - {}^O\hat{t}_I + {}^Ot_I = {}^It_{k+1} + {}^O\tilde{t}_I \tag{31}$$

After preintegration, we have the 2D relative pose measurement between the times ${}^It'_k$ and ${}^It'_{k+1}$, while the corresponding states are at the times ${}^It_k$ and ${}^It_{k+1}$. To update with this measurement, we employ the following first-order approximation accounting for the time-offset estimation error:

$${}^{I({}^It'_k)}_G\mathbf{R} \simeq \big(\mathbf{I} - \lfloor{}^{I(t_k)}\boldsymbol{\omega}\,{}^O\tilde{t}_I\rfloor\big)\,{}^{I(t_k)}_G\mathbf{R} \tag{32}$$
$${}^G\mathbf{p}_{I({}^It'_k)} \simeq {}^G\mathbf{p}_{I(t_k)} + {}^G\mathbf{v}_{I(t_k)}\,{}^O\tilde{t}_I \tag{33}$$

Using this relationship, we can compute the measurement Jacobian with respect to the time offset, $\frac{\partial \mathbf{h}}{\partial {}^O\tilde{t}_I}$ (34) [see (26)]; its full expression is given in our companion technical report [41]. Note that in our experiments we use the IMU angular rate measurement and the current estimate of the velocity in computing this Jacobian.

D. Odometry Measurement Update

At this point, we have obtained the preintegrated wheel odometry measurements along with their corresponding Jacobians, which are readily used for the MSCKF update:

$$\tilde{\mathbf{z}}_{k+1} := \mathbf{g}(\boldsymbol{\omega}_{ml(k:k+1)}, \boldsymbol{\omega}_{mr(k:k+1)}, \hat{\mathbf{x}}_{WI}) - \mathbf{h}(\hat{\mathbf{x}}_I, \hat{\mathbf{x}}_C, \hat{\mathbf{x}}_{WE}, {}^O\hat{t}_I)$$
$$\simeq \underbrace{\begin{bmatrix} \frac{\partial \mathbf{h}}{\partial \tilde{\mathbf{x}}_I} & \frac{\partial \mathbf{h}}{\partial \tilde{\mathbf{x}}_C} & \frac{\partial \mathbf{h}}{\partial \tilde{\mathbf{x}}_{WE}} & \frac{\partial \mathbf{g}}{\partial \tilde{\mathbf{x}}_{WI}} & \frac{\partial \mathbf{h}}{\partial {}^O\tilde{t}_I} \end{bmatrix}}_{\mathbf{H}_{k+1}}\tilde{\mathbf{x}}_{k+1} + \frac{\partial \mathbf{g}}{\partial \mathbf{n}_\omega}\mathbf{n}_\omega \tag{35}$$

2 We assume that the two wheel encoders are hardware-synchronized and thus their readings have the same timestamps.
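The first-order pose shift (32)-(33) can be sketched directly (a minimal sketch; the rotation convention follows the JPL-style error model used above, and the inputs are the state-time pose, velocity, and angular rate):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def shift_pose(R_GtoI, p_I, v_I, omega, dt_err):
    """First-order correction (32)-(33): shift the IMU pose from the
    state time t_k to the measurement time t_k + dt_err using the
    current angular rate omega and velocity v_I."""
    R_shift = (np.eye(3) - skew(omega) * dt_err) @ R_GtoI   # Eq. (32)
    p_shift = p_I + v_I * dt_err                            # Eq. (33)
    return R_shift, p_shift
```

A zero time-offset error leaves the pose unchanged, and the Jacobian of the shift with respect to `dt_err` is exactly the angular-rate/velocity term that enters (34).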

Note that, similar to how we treat visual features, we also employ the Mahalanobis distance test to reject bad preintegrated odometry measurements (which can be due to unmodelled errors such as slippage), and only those passing the χ² test will be used for the EKF update. To summarize, the main steps of the odometry measurement update are outlined in Algorithm 1.

V. OBSERVABILITY ANALYSIS

As system observability plays an important role in state estimation [31], [32], we perform an observability analysis to gain insights into the state/parameter identifiability of the proposed VIWO. For brevity, in the following we only present the key results of our analysis, while the detailed analysis can be found in the companion technical report [41]. For the purpose of the analysis, in analogy to [43], we consider the following state vector, which includes a single cloned pose of $\mathbf{x}_{I_{k+1}}$ and a single 3D point feature ${}^G\mathbf{p}_f$:

$$\mathbf{x}_k = \begin{bmatrix} \mathbf{x}_{I_k}^\top & \mathbf{x}_{C_k}^\top & \mathbf{x}_{etc}^\top & {}^G\mathbf{p}_f^\top \end{bmatrix}^\top, \quad \mathbf{x}_{etc} = \begin{bmatrix} \mathbf{x}_{WE}^\top & {}^Ot_I & \mathbf{x}_{WI}^\top \end{bmatrix}^\top \tag{36}$$

The observability matrix for the linearized system is:

$$\mathbf{M} = \begin{bmatrix} \mathbf{H}_0^\top & (\mathbf{H}_1\boldsymbol{\Phi}_{(1,0)})^\top & \cdots & (\mathbf{H}_k\boldsymbol{\Phi}_{(k,0)})^\top \end{bmatrix}^\top \tag{37}$$

where $\boldsymbol{\Phi}_{(k,0)}$ is the state transition matrix, which is not obvious when including the clone in the state vector and will be derived below, and $\mathbf{H}_k$ is the measurement Jacobian at time step $k$ (e.g., see (35)). If we can find a matrix $\mathbf{N}$ that satisfies $\mathbf{M}\mathbf{N} = \mathbf{0}$, the basis vectors of $\mathbf{N}$ indicate the unobservable directions of the linearized system.

A. State Transition Matrix

In the MSCKF-based linearized system, the state transition matrix corresponding to the cloned state essentially reveals the stochastic cloning process. To see this, first recall how the cloned states are processed [44]: (i) augment the state with the current IMU pose when a new image is available, (ii) propagate the cloned pose with zero dynamics, (iii) marginalize the oldest clone after the update if reaching the maximum size of the sliding window. In the case of one clone, this cloning corresponds to the following operation (while marginalization in the covariance form is trivial):

$$\hat{\mathbf{x}}_{k|k} \leftarrow \begin{bmatrix} \mathbf{I}_6 & \mathbf{0}_{6\times9} & \mathbf{0}_6 & \mathbf{0}_{6\times13} \\ \mathbf{0}_{9\times6} & \mathbf{I}_9 & \mathbf{0}_{9\times6} & \mathbf{0}_{9\times13} \\ \mathbf{I}_6 & \mathbf{0}_{6\times9} & \mathbf{0}_6 & \mathbf{0}_{6\times13} \\ \mathbf{0}_{13\times6} & \mathbf{0}_{13\times9} & \mathbf{0}_{13\times6} & \mathbf{I}_{13} \end{bmatrix} \hat{\mathbf{x}}_{k|k} \tag{38}$$

One may model this state transition by including all the clones (with zero dynamics) in the state vector. This may complicate the analysis, as it needs to explicitly include all the clone constraints; here we instead construct the state transition matrix with implicit clone constraints. We discover that cloning and propagating the current error state $\tilde{\mathbf{x}}_k$ can be unified by the following linear mapping:

$$\tilde{\mathbf{x}}_{k+1} = \underbrace{\begin{bmatrix} \boldsymbol{\Phi}_{I11(k+1,k)} & \boldsymbol{\Phi}_{I12(k+1,k)} & \mathbf{0}_6 & \mathbf{0}_{6\times13} \\ \boldsymbol{\Phi}_{I21(k+1,k)} & \boldsymbol{\Phi}_{I22(k+1,k)} & \mathbf{0}_{9\times6} & \mathbf{0}_{9\times13} \\ \mathbf{I}_6 & \mathbf{0}_{6\times9} & \mathbf{0}_6 & \mathbf{0}_{6\times13} \\ \mathbf{0}_{13\times6} & \mathbf{0}_{13\times9} & \mathbf{0}_{13\times6} & \mathbf{I}_{13} \end{bmatrix}}_{\boldsymbol{\Xi}_{(k+1,k)}}\tilde{\mathbf{x}}_k \tag{39}$$

where $\boldsymbol{\Phi}_I = \begin{bmatrix} \boldsymbol{\Phi}_{I11} & \boldsymbol{\Phi}_{I12} \\ \boldsymbol{\Phi}_{I21} & \boldsymbol{\Phi}_{I22} \end{bmatrix}$ is the IMU state transition matrix (see [43]). Note that the $\mathbf{I}_6$ in the third block row copies the IMU pose of $\tilde{\mathbf{x}}_k$ into $\tilde{\mathbf{x}}_{k+1}$ as a cloned pose without changing its value, while the cloned state in $\tilde{\mathbf{x}}_k$ is discarded (marginalized). The above operation clearly reveals the MSCKF cloning process, and thus we will leverage this linear system (39) for the ensuing analysis.

Specifically, during the time interval $[t_0, t_{k+1}]$, we have the following linear dynamic system:

$$\tilde{\mathbf{x}}_{k+1} = \underbrace{\boldsymbol{\Xi}_{(t_{k+1},t_k)}\boldsymbol{\Xi}_{(t_k,t_{k-1})}\cdots\boldsymbol{\Xi}_{(t_1,t_0)}}_{\boldsymbol{\Xi}_{(t_{k+1},t_0)}}\tilde{\mathbf{x}}_0 \tag{40}$$

$$\boldsymbol{\Xi}_{(k+1,0)} = \begin{bmatrix} \boldsymbol{\Phi}_{I11(k+1,0)} & \boldsymbol{\Phi}_{I12(k+1,0)} & \mathbf{0}_6 & \mathbf{0}_{6\times13} \\ \boldsymbol{\Phi}_{I21(k+1,0)} & \boldsymbol{\Phi}_{I22(k+1,0)} & \mathbf{0}_{9\times6} & \mathbf{0}_{9\times13} \\ \boldsymbol{\Psi}_{(k+1,0)} & \mathbf{0}_{6\times9} & \mathbf{0}_6 & \mathbf{0}_{6\times13} \\ \mathbf{0}_{13\times6} & \mathbf{0}_{13\times9} & \mathbf{0}_{13\times6} & \mathbf{I}_{13} \end{bmatrix} \tag{41}$$

We also enforce the constraint that the initial IMU pose and the clone state at time $t_0$ are identical, whose error states satisfy:

$$\begin{bmatrix} \mathbf{I}_6 & \mathbf{0}_{6\times9} \end{bmatrix}\tilde{\mathbf{x}}_{I_0} = \tilde{\mathbf{x}}_{C_0} \tag{42}$$

With (40) and (42), we finally have the following state transition matrix $\boldsymbol{\Phi}_{(k+1,0)}$ for our observability analysis:

$$\begin{bmatrix} \tilde{\mathbf{x}}_{I_{k+1}} \\ \tilde{\mathbf{x}}_{C_{k+1}} \\ \tilde{\mathbf{x}}_{etc} \end{bmatrix} = \underbrace{\begin{bmatrix} \boldsymbol{\Phi}_{I(k+1,0)} & \mathbf{0}_{15\times6} & \mathbf{0}_{15\times13} \\ \begin{bmatrix} \boldsymbol{\Psi}_{(k+1,0)} & \mathbf{0}_{6\times9} \end{bmatrix} & \mathbf{0}_6 & \mathbf{0}_{6\times13} \\ \mathbf{0}_{13\times15} & \mathbf{0}_{13\times6} & \mathbf{I}_{13} \end{bmatrix}}_{\boldsymbol{\Phi}_{(k+1,0)}}\begin{bmatrix} \tilde{\mathbf{x}}_{I_0} \\ \tilde{\mathbf{x}}_{C_0} \\ \tilde{\mathbf{x}}_{etc} \end{bmatrix} \tag{43}$$

B. Observability Properties

Based on the measurement Jacobians and the state transition matrix [see (9), (35), and (43)], we are able to construct the observability matrix (37) of the MSCKF-based linearized system under consideration.
Careful inspection of this matrix reveals the following results (see [41] for details):

Lemma 5.1: The proposed VIWO has the following observability properties:

- With general motion, there are four unobservable directions, corresponding to the global position and the yaw angle, as in VINS [43].
- As the blocks of the observability matrix related to the wheel encoder's extrinsic and intrinsic calibration parameters highly depend on the dynamic states and the encoder readings, these blocks can be full-rank given sufficient motion, implying that these parameters are observable; this has been validated in simulations in terms of convergence (see Fig. 3).
- We identify the following five degenerate motions that cause the odometer's calibration parameters to become unobservable, and which may be commonly seen for ground vehicles:

Motion | Unobservable
--- | ---
Pure translation | ^O p_I, b
1-axis rotation | ^O p_I
Constant angular and linear velocity | ^O t_I
No left/right wheel velocity | r_l / r_r
No motion | ^O_I R, ^O p_I, r_l, r_r, b

VI. SIMULATION VALIDATIONS

The proposed VIWO was implemented within the OpenVINS [45] framework, which provides both simulation and evaluation utilities. We expanded the visual-inertial simulator

to additionally simulate a differential-drive robot which can only have velocity in the local x-direction (non-holonomic constraint [5]), and we have listed the key simulation parameters in Table I. In order to validate the proposed calibration method, we tested VIWO on a large-scale neighborhood simulation dataset with four different combinations: with "bad" or "true" initial calibration parameters, and with or without online calibration. The bad initial values were drawn from the prior distributions specified in Table I. The 50m, 100m, and 200m relative pose error (RPE, [46]) and the average normalized estimation error squared (NEES) results of the four configurations, together with standard VIO for comparison, are shown in Table II.

Fig. 3: Calibration errors of each parameter (solid) and 3σ bounds (dotted) for six different runs under random motion. Each color denotes a run with a different realization of the measurement noise and the initial values. Both the estimation errors and 3σ bounds converge in about 10 seconds.

Fig. 4: Calibration error results of each parameter under planar motion. Each color denotes a run with a different realization of the measurement noise and the initial values. Both the estimation errors (solid) and 3σ bounds (dotted) are reported.

TABLE I: Simulation parameters and prior single standard deviations from which perturbations of the measurements and initial states were drawn. [Parameters include the camera/IMU/wheel frequencies, number of clones, maximum features, feature representation, pixel projection noise, wheel/gyroscope/accelerometer white noise and random walk, and the wheel extrinsic (orientation/position), intrinsic, and time offset perturbations; the tabulated values are not recoverable from this extraction.]

TABLE II: Relative pose error (RPE, degree/meter) at 50m, 100m, and 200m segment lengths and average NEES for each algorithm (VIO, and VIWO with "true"/"bad" initial calibration, with/without online calibration). [The tabulated values are not recoverable from this extraction.]
VIWO with "bad" initial values and online calibration showed RPE results similar to those with "true" initial values, while VIWO with "bad" initial values without calibration had very poor estimation performance and a large NEES due to treating the incorrect calibration as true. Fig. 1 shows the trajectories of the algorithms and clearly illustrates the failure of estimation when calibration is not performed. We found that the system was particularly sensitive to the wheel intrinsics: even with small perturbations, the system without online calibration produced unusable estimation results.

Fig. 5: Intrinsic calibration error under straight-line motion.

Fig. 6: Time offset calibration error under straight-line motion.

TABLE III: Values of calibration parameters before/after calibration

Parameter | Before | After
--- | --- | ---
left wheel radius | 0.311740 | 0.312262
right wheel radius | 0.311403 | 0.311843
base length | 1.52439 | 1.53201
Ext. Pos | [-0.070, 0.000, 1.400] | [-0.062, 0.003, 1.384]
Ext. Ori | [0.000, 0.000, 0.000] | [0.000, 0.001, -0.002]
Time offset | 0.00000 | -0.02723

Fig. 7: urban39 results of VIWO with calibration (blue), VIWO without calibration (red), and VIO (yellow).

