
Robotics and Autonomous Systems 56 (2008) 461–479

Omni-directional mobile robot controller based on trajectory linearization

Yong Liu a, J. Jim Zhu a,*, Robert L. Williams II b, Jianhua Wu c

a School of Electrical Engineering and Computer Science, Ohio University, Athens, OH 45701, United States
b Department of Mechanical Engineering, Ohio University, Athens, OH 45701, United States
c Ohio Design Center, SEWS Inc., Marysville, OH 43040, United States

Received 20 June 2006; received in revised form 27 August 2007; accepted 28 August 2007
Available online 24 October 2007

Abstract

In this paper, a nonlinear controller design for an omni-directional mobile robot is presented. The robot controller consists of an outer-loop (kinematics) controller and an inner-loop (dynamics) controller, which are both designed using the Trajectory Linearization Control (TLC) method based on a nonlinear robot dynamic model. The TLC controller design combines a nonlinear dynamic inversion and a linear time-varying regulator in a novel way, thereby achieving robust stability and performance along the trajectory without interpolating controller gains. A sensor fusion method, which combines the onboard sensor and the vision system data, is employed to provide accurate and reliable robot position and orientation measurements, thereby reducing the wheel-slippage-induced tracking error. A time-varying command filter is employed to reshape an abrupt command trajectory for control saturation avoidance. The real-time hardware-in-the-loop (HIL) test results show that with a set of fixed controller design parameters, the TLC robot controller is able to follow a large class of 3-degrees-of-freedom (3DOF) trajectory commands accurately.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Mobile robot; Nonlinear control; Trajectory linearization; Omni-directional; Sensor fusion; Robocup

1. Introduction

An omni-directional mobile robot is a type of holonomic robot.
It has the ability to move simultaneously and independently in translation and rotation. The inherent agility of the omni-directional mobile robot makes it widely studied for dynamic environmental applications [1,2,15]. The annual international Robocup competition, in which teams of autonomous robots compete in soccer-like games, is an example where the omni-directional mobile robot is used. The Ohio University (OU) Robocup Team's entry Robocat is for the Robocup small-size league competition. The current OU Robocup robots are Phase V omni-directional mobile robots, as shown in Fig. 1. The Phase V Robocat has three omni-directional wheels, arranged 120 deg apart. Each wheel is driven by a DC motor installed with an optical shaft encoder. An overhead camera above the field of play senses the position and the orientation of the robots using a real-time image processing algorithm, and the data are transmitted to the robot through an unreliable wireless communication channel.

A precise trajectory tracking control is a key component for applications of omni-directional robots. The trajectory tracking control of an omni-directional mobile robot can be divided into two tasks, path planning and trajectory following [3,5]. Path planning calls for computing a feasible and optimal geometric path. Optimal trajectory path planning algorithms for omni-directional mobile robots are discussed in [3,4,14,16–18]. In [14], the dynamic path planning for an omni-directional robot is studied considering the robot dynamic constraints.

* Corresponding address: School of Electrical Engineering and Computer Science, 353 Stocker Center, Ohio University, Athens, OH 45701, United States. Tel.: +1 740 597 1506; fax: +1 740 593 0007. E-mail addresses: Yong.Liu.1@ohio.edu, yl542401@ohio.edu (Y. Liu), zhuj@ohio.edu (J.J. Zhu), williar4@ohio.edu (R.L. Williams II), jw322400@yahoo.com (J. Wu).

0921-8890/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.robot.2007.08.007
In this paper, the main focus is on accurate trajectory following control, given a feasible trajectory command within the robot physical limitations. While optimal dynamic path planning is not within the scope of the paper, an ad hoc yet effective time-varying bandwidth command shaping filter is employed for actuator saturation and integrator windup avoidance in the presence of an abrupt command trajectory violating the robot's dynamic constraints.

Nomenclature

Body frame
  r: angular rate of body rotation (rad/s)
  u, v: velocity components in the body frame (m/s)
  f_1, f_2, f_3: traction force of each wheel (N)
  E_1, E_2, E_3: applied voltage on each motor (V)
  ω_m1, ω_m2, ω_m3: motor shaft speed (rad/s)

World frame
  (x, y): robot location (m)
  Ψ: robot orientation angle (rad)

Mechanical constants
  m: robot mass (kg)
  I_z: robot moment of inertia (kg m²)
  R: wheel radius (m)
  L: radius of robot body (m)
  n: gear ratio
  δ: wheel orientation angle (30 deg)

The omni-directional robot controller design has been studied based on different robot dynamics models. In an early version of the Robocat controller [6,14], which is similar to [16], only kinematics are considered in the controller design. Each motor is controlled by an individual PID controller to follow the speed command from inverse kinematics. Without considering the coupled nonlinear dynamics explicitly in the controller design, the trial-and-error process of tuning the PID controller gains is tedious [14]. In [3,17,19], kinematics and dynamics models of omni-directional mobile robots have been developed, which include the motor dynamics but ignore the nonlinear coupling between the rotational and translational velocities. Thus the robot dynamics model is simplified as a linear system. In [3,4,17], optimal path planning and control strategies have been developed for position control without considering orientation control, and the designed controller was tested in simulations and experiment. In [19], two independent PID controllers are designed for controlling position and orientation separately based on the simplified linear model. In [6–8,14], a nonlinear dynamic model including the nonlinear coupling terms has been developed. In [7], a resolved-acceleration control with PI and PD feedback has been developed to control the robot speed and orientation angle. It is essentially a feedback linearization control.
That controller design is tested on the robot hardware. In [8], based on the same model as in [7], PID, self-tuning PID, and fuzzy control of omni-directional mobile robots have been studied. In [12], a variable-structure-like nonlinear controller has been developed for a general wheeled robot with kinematic disturbance, in which globally uniformly ultimately bounded (GUUB) stability is achieved. In [27], feedback linearization control for wheeled mobile robot kinematics has been developed and tested on a two-wheel nonholonomic robot. In [28], a fuzzy tracking controller has been developed for a four-wheel differentially steered robot without an explicit model of the robot dynamics.

Fig. 1. Phase V Robocat robot.

In this paper first, a detailed nonlinear dynamics model of the omni-directional robot is presented, in which both the motor dynamics and robot nonlinear motion dynamics are considered. Instead of combining the robot kinematics and dynamics together as in [6–8,14], the robot model is represented by the separated kinematics equation and the body frame dynamics equation. Such a representation facilitates the robot motion analysis and controller design.

Second, based on the nonlinear robot model, a 3-degrees-of-freedom (3DOF) robot tracking controller design, using a nonlinear control method, is presented. To simplify the design, a two-loop controller architecture is employed based on the time-scale separation principle and singular perturbation theory. The outer-loop controller is a kinematics controller. It adjusts the robot position and orientation to follow the commanded trajectory. The inner-loop is a dynamics controller which follows the body rate command given by the outer-loop controller. Both outer-loop and inner-loop controllers employ a nonlinear control method based on linearization along a nominal trajectory, known as trajectory linearization control (TLC) [10]. Preliminary results of the proposed robot TLC controller have been summarized in [9].
It is worth noting that the presented controller can be used as a velocity and orientation controller, which is similar to the controller structure in [7]. TLC combines nonlinear dynamic inversion and linear time-varying eigenstructure assignment in a novel way, and has been successfully applied to missile and reusable launch vehicle flight control systems [10,11]. The nonlinear tracking and decoupling control by trajectory linearization can be viewed as an ideal gain-scheduling controller designed at every point on the trajectory. TLC can achieve exponential stability along the nominal trajectory; therefore it provides robust stability and performance along the trajectory without interpolation of controller gains. The developed robot TLC controller serves as both a position controller and a trajectory following controller with the same set of controller parameters. Compared with the nonlinear controllers in [12,27], the proposed TLC mobile robot controller deals with both kinematic disturbances (outer-loop controller) and dynamic disturbances (inner-loop controller). Compared with [12], the TLC controller can achieve robust performance under less strict assumptions, while eliminating the chattering control signals in [12]. It should be noted that the structure of TLC is different from another nonlinear control method, feedback linearization control (FLC) [30–32]. In an FLC design, a nonlinear dynamic system is transformed to a linear system via a nonlinear coordinate transformation and a nonlinear state feedback that cancels the nonlinearity in the transformed coordinates. Then a linear time-invariant (LTI) controller is designed for the transformed linear system to satisfy the disturbance and robustness requirements for the overall system. The FLC relies on the nonlinearity cancellation in the transformed coordinates via nonlinear state feedback. In an actual control system, the cancellation of nonlinear terms will not be exact due to modeling errors, uncertainties, measurement noise and lag, and the existence of parasitic dynamics.
The LTI feedback controller designed under the nominal conditions may not effectively handle the nonlinear time-varying residual dynamics.

Third, a sensor fusion scheme for robot position and orientation measurements is presented. From real-time experiments, it is observed that accurate position and orientation measurements are essential for the controller performance. On-board sensors, such as motor shaft encoders, can be used to estimate robot location and orientation by integrating the measured robot velocity. Such estimation is fast, but also has inevitable cumulative errors introduced by wheel slippage and sensor noise. Calibration methods for mobile robot odometry have been developed to reduce the position estimation error [25,26]. While these methods have enhanced the accuracy of odometry position estimation, the estimation drift cannot be eliminated without an external reference. On the other hand, a global position reference sensor, such as a vision system using a roof camera, senses the robot location and orientation directly without drifting. However, it is relatively slow and sometimes unreliable due to image processing and communication errors. Thus, a sensor fusion technique is presented, which combines both the global vision system and on-board sensor estimation to provide an accurate and reliable location measurement. It is based on a nonlinear Kalman filter using trajectory linearization.

In Section 2, the omni-directional mobile robot dynamics model is presented. Based on this model, in Section 3, a dual-loop robot TLC controller is developed. In Section 4, the sensor fusion method is described. In Section 5, controller parameter tuning and the time-varying bandwidth command shaping filter are discussed. In Section 6, real-time hardware-in-the-loop (HIL) test results are presented.

2. The omni-directional mobile robot model

In this section, the robot equations of motion are derived based on some typical simplifying assumptions.
It is assumed that the wheels have no slippage in the direction of the traction force. Only viscous friction forces on the motor shaft and gear are considered. The wheel contact friction forces that are not in the direction of the traction force are neglected. The motor electrical time constant is also neglected. The developed dynamics model is similar to those in [6–8,14]. The unmodeled slippage has been studied in [13]. In the controller design, the slippage, as well as other ignored dynamics, is considered as a perturbation to the simplified dynamics model. The closed-loop controller based on the simplified model is capable of compensating for disturbances and perturbations. Moreover, the slippage of the wheels also introduces measurement errors when the motor shaft encoder readings are used to estimate the robot position and orientation. In this paper, such measurement errors are compensated by the sensor fusion scheme described in Section 4.

There are two coordinate frames used in the modeling: the body frame {B} and the world frame {W}. The body frame is fixed on the moving robot with the origin at the robot geometric center, which is assumed to be the center of gravity, as shown in Fig. 2(a). The world frame is fixed on the field of play, as shown in Fig. 2(b). Symbols used in the robot dynamic model are listed in the nomenclature.

From the force analysis shown in Fig. 2(c), in the body frame we have

\begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{r} \end{bmatrix} = \begin{bmatrix} r v \\ -r u \\ 0 \end{bmatrix} + H \cdot B \begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix},   (1)

where

H = \begin{bmatrix} \frac{1}{m} & 0 & 0 \\ 0 & \frac{1}{m} & 0 \\ 0 & 0 & \frac{1}{I_z} \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & \cos\frac{\pi}{6} & -\cos\frac{\pi}{6} \\ -1 & \sin\frac{\pi}{6} & \sin\frac{\pi}{6} \\ L & L & L \end{bmatrix}.

From the mobile robot geometry, we have

\begin{bmatrix} u \\ v \\ r \end{bmatrix} = \frac{R}{n} \left( B^{\mathsf{T}} \right)^{-1} \begin{bmatrix} \omega_{m1} \\ \omega_{m2} \\ \omega_{m3} \end{bmatrix}.   (2)
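As a quick consistency check of the wheel-geometry matrix B reconstructed in Eq. (1), equal traction on all three wheels should cancel in translation and produce pure torque. A minimal sketch; the numerical parameter values below are illustrative, not the Robocat's actual constants:

```python
import numpy as np

# Hypothetical parameters (illustrative only): mass, inertia, body radius.
m, Iz, L = 3.0, 0.05, 0.1

H = np.diag([1.0 / m, 1.0 / m, 1.0 / Iz])

# Wheel geometry matrix for three omni wheels 120 deg apart (delta = 30 deg):
# each column maps one wheel's traction force into body-frame forces
# (rows u, v) and torque (row r).
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
B = np.array([[0.0,  c, -c],
              [-1.0, s,  s],
              [L,    L,  L]])

# Sanity check: equal traction f on all three wheels cancels in translation
# and produces the pure torque 3*L*f.
f = np.array([1.0, 1.0, 1.0])
wrench = B @ f
print(wrench)
```

The symmetry of the three-wheel layout is what makes the translational components cancel; any sign error in B would break this check.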

(a) Body frame {B}. (b) World frame {W}. (c) Force analysis.
Fig. 2. Coordinate frames and force analysis.

The dynamics of each DC motor are described using the following equations

L_a \frac{\mathrm{d}i_a}{\mathrm{d}t} = -R_a i_a - k_3 \omega_m + E \quad \text{and} \quad J_0 \dot{\omega}_m = -b_0 \omega_m + k_2 i_a - \frac{R}{n} f,

where E is the applied armature voltage, i_a is the armature current, ω_m is the motor shaft speed, L_a is the armature inductance, R_a is the armature resistance, k_3 is the back-emf constant, k_2 is the motor torque constant, J_0 is the combined inertia of the motor, gear train and wheel referred to the motor shaft, b_0 is the viscous-friction coefficient of the motor, gear and wheel combination, R is the wheel radius, f is the wheel traction force, and n is the motor-to-wheel gear ratio. Since the electrical time constant of the motor is very small compared to the mechanical time constant, we can neglect the motor electric circuit dynamics, which leads to L_a \frac{\mathrm{d}i_a}{\mathrm{d}t} \approx 0 and i_a = \frac{1}{R_a}(E - k_3 \omega_m). With this assumption, and using vector notation, the dynamics of the three identical motors can be written as

J_0 \begin{bmatrix} \dot{\omega}_{m1} \\ \dot{\omega}_{m2} \\ \dot{\omega}_{m3} \end{bmatrix} = -b_0 \begin{bmatrix} \omega_{m1} \\ \omega_{m2} \\ \omega_{m3} \end{bmatrix} - \frac{R}{n} \begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix} + \frac{k_2}{R_a} \begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix} - \frac{k_2 k_3}{R_a} \begin{bmatrix} \omega_{m1} \\ \omega_{m2} \\ \omega_{m3} \end{bmatrix}.   (3)

By combining (1)–(3), we get the dynamics model of the mobile robot in the body frame {B} with the applied motor

voltages E_1, E_2, E_3 as the control input

\begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{r} \end{bmatrix} = G^{-1} \begin{bmatrix} r v \\ -r u \\ 0 \end{bmatrix} - G^{-1} H B B^{\mathsf{T}} \left( \frac{k_2 k_3}{R_a} + b_0 \right) \frac{n^2}{R^2} \begin{bmatrix} u \\ v \\ r \end{bmatrix} + G^{-1} H B \frac{n k_2}{R \cdot R_a} \begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix},   (4)

where G = \left( I + H B B^{\mathsf{T}} \frac{n^2 J_0}{R^2} \right).

The kinematics of the robot is given by a coordinate transformation from the body frame to the world frame

\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\Psi} \end{bmatrix} = \begin{bmatrix} \cos\Psi(t) & -\sin\Psi(t) & 0 \\ \sin\Psi(t) & \cos\Psi(t) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ r \end{bmatrix}.   (5)

The equations of motion (4) and (5) describe the simplified robot behavior. In this model the friction constant b_0 can be determined experimentally. Eqs. (4) and (5) show that the omni-directional mobile robot has coupled MIMO nonlinear dynamics. The nonlinear term G^{-1} [r v, -r u, 0]^{\mathsf{T}} in (4) is introduced by robot rotation. The robot nonlinear dynamics can be reduced to a linear system if either the robot does not rotate while in translation, or the robot rotates at a fixed position without translation. In both cases, linear controllers can be applied to the dynamics equation (4), as in [3,17,19]. While the kinematics equation (5) can be easily inverted to yield an open-loop kinematics controller, the position tracking error dynamics are nonetheless coupled, nonlinear and time-varying (when r ≠ 0), rendering the feedback position tracking error stabilization control a challenge for LTI controller design techniques.

When linear time-invariant controllers are used for 3DOF command trajectories, especially for commands with high translational and rotational velocities, the nonlinear robot kinematics and dynamics can no longer be ignored. For example, the maximum translational and rotational velocities of the Phase V Robocat robot are estimated at 1.18 m/s and 16.86 rad/s, respectively [14]. This yields a perturbation in acceleration to the linearized dynamics model [14] as high as \sup_t \left\| G^{-1} \left[ r(t) v(t), \ -r(t) u(t), \ 0 \right]^{\mathsf{T}} \right\| \approx 2.5 \ \text{m/s}^2, which is not accounted for by the linear controller design.

Fig. 3. Robot TLC controller structure.
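The size of the rotation-translation coupling term in Eq. (4) can be checked numerically at any operating point. A sketch; the operating point is illustrative, and the G^{-1} scaling (which reflects motor and gear inertia back into the body dynamics and reduces the paper's bound to about 2.5 m/s²) is deliberately omitted:

```python
import numpy as np

# Illustrative operating point (not the paper's worst case):
u, v, r = 0.5, 0.3, 5.0   # body-frame velocities (m/s, m/s, rad/s)

# Raw rotation-translation coupling term of Eq. (4), before G^{-1} scaling.
# Its Euclidean norm equals r * sqrt(u**2 + v**2).
coupling = np.array([r * v, -r * u, 0.0])
magnitude = np.linalg.norm(coupling)
print(magnitude)
```

This makes concrete why the perturbation grows with simultaneous translation and rotation: the coupling acceleration scales with the product of rotational rate and translational speed.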
This perturbation will significantly reduce the domain of stability, or even destabilize the system due to other modeling errors and disturbances. In order to maintain stability of the robot controller, different feedback gains have to be scheduled for different trajectories, or the command trajectories have to be restricted to separate translation trajectories and rotation trajectories. The early version of the Robocat robot employed a linear controller similar to those in [3,17,19]. It was observed that the robot lost stability when it was given a command with both translational and rotational motions. To achieve better performance, a nonlinear controller design, which can directly deal with the intrinsic nonlinearity and the coupling of robot motions while tracking any given feasible trajectories, is desired.

3. Trajectory linearization controller design

In the previous Robocat controller design, three independent motor speed controllers are employed. As for most omni-directional robots, the open-loop command of each motor controller is computed by dynamic inversion of the robot kinematics. However, due to the inevitable errors in the open-loop controller, in most experiments the robot cannot follow the desired trajectories with satisfactory performance.

In this section, a controller design based on Trajectory Linearization Control (TLC) is presented. A two-loop controller architecture is employed, as shown in Fig. 3. The outer-loop controller adjusts the robot position to follow the commanded trajectory. The inner-loop is a body rate controller which follows the body rate command from the outer-loop controller. In both outer-loop and inner-loop controllers, TLC is employed. A TLC controller consists of two parts. The first part is an open-loop controller which computes the nominal control and nominal trajectory using a pseudo-dynamic inverse of the plant model. The second part is a feedback controller which stabilizes the system tracking error dynamics along the nominal trajectory.
The dual-loop design is based on singular perturbation theory, commonly known as the time-scale separation principle, which assumes that the inner loop is exponentially stable and the inner loop's bandwidth is much higher than the outer-loop dynamics, so that the outer-loop controller can be designed by ignoring the inner-loop dynamics. This assumption is satisfied by assigning appropriate closed-loop PD-eigenvalues [34] to both control loops. Controller parameter tuning is discussed in Section 5.1.
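The two-part TLC structure described above, a nominal control from pseudo-inversion plus a feedback correction on the tracking error, can be outlined generically. A minimal sketch; the gains and signals here are placeholders, not the paper's design values:

```python
import numpy as np

def tlc_step(nominal_u, err, err_int, K_P, K_I):
    """One generic TLC control update: the open-loop nominal control
    (from the pseudo-dynamic inverse) plus a PI feedback correction
    that stabilizes the tracking error along the nominal trajectory."""
    return nominal_u + (-K_P @ err - K_I @ err_int)

# Placeholder usage: zero nominal control, unit error in one channel.
u_cmd = tlc_step(np.zeros(3),
                 np.array([1.0, 0.0, 0.0]),   # tracking error
                 np.zeros(3),                 # integrated error
                 np.eye(3), np.zeros((3, 3)))
```

Both the outer loop (Section 3.1) and the inner loop (Section 3.2) instantiate this same pattern with their own nominal inversions and gain matrices.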

3.1. Outer-loop controller

First, from (5), the nominal body rate for a desired trajectory [\bar{x}(t)\ \bar{y}(t)\ \bar{\Psi}(t)]^{\mathsf{T}} is

\begin{bmatrix} \bar{u} \\ \bar{v} \\ \bar{r} \end{bmatrix} = \begin{bmatrix} \cos\bar{\Psi}(t) & \sin\bar{\Psi}(t) & 0 \\ -\sin\bar{\Psi}(t) & \cos\bar{\Psi}(t) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{\bar{x}}(t) \\ \dot{\bar{y}}(t) \\ \dot{\bar{\Psi}}(t) \end{bmatrix},   (6)

where [\dot{\bar{x}}(t)\ \dot{\bar{y}}(t)\ \dot{\bar{\Psi}}(t)]^{\mathsf{T}} and [\bar{x}(t)\ \bar{y}(t)\ \bar{\Psi}(t)]^{\mathsf{T}} are calculated using a pseudo-differentiator from the command [x_{com}(t)\ y_{com}(t)\ \Psi_{com}(t)]^{\mathsf{T}}. The general form of a second-order pseudo-differentiator is illustrated by the following generic state space model for \bar{x}(t):

\frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \bar{x}(t) \\ \dot{\bar{x}}(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -a_{d1}(t) & -a_{d2}(t) \end{bmatrix} \begin{bmatrix} \bar{x}(t) \\ \dot{\bar{x}}(t) \end{bmatrix} + \begin{bmatrix} 0 \\ a_{d1}(t) \end{bmatrix} x_{com}(t),   (7)

where \bar{x}(t) is a lowpass-filtered x_{com}(t), and \dot{\bar{x}}(t) is the approximated derivative of x_{com}(t). For a fixed-bandwidth pseudo-differentiator, a_{d1}(t) = \omega_{n,\mathrm{diff}}^2 and a_{d2}(t) = 2\zeta\omega_{n,\mathrm{diff}}, where ζ is the damping ratio and ω_{n,diff} is the natural frequency, which is proportional to the bandwidth of the lowpass filter that attenuates high-frequency gain, thereby making the pseudo-differentiator causal and realizable. This pseudo-differentiator can also be used as an automatic command shaping filter with time-varying bandwidth. It will be further discussed in Section 5.2.

Defining

[e_x\ e_y\ e_\Psi]^{\mathsf{T}} = [x(t)\ y(t)\ \Psi(t)]^{\mathsf{T}} - [\bar{x}(t)\ \bar{y}(t)\ \bar{\Psi}(t)]^{\mathsf{T}}
[\tilde{u}\ \tilde{v}\ \tilde{r}]^{\mathsf{T}} = [u\ v\ r]^{\mathsf{T}} - [\bar{u}(t)\ \bar{v}(t)\ \bar{r}(t)]^{\mathsf{T}}

and linearizing (5) along the nominal trajectory [\bar{x}(t)\ \bar{y}(t)\ \bar{\Psi}(t)]^{\mathsf{T}} and the nominal input [\bar{u}(t)\ \bar{v}(t)\ \bar{r}(t)]^{\mathsf{T}} yields the error dynamics

\begin{bmatrix} \dot{e}_x \\ \dot{e}_y \\ \dot{e}_\Psi \end{bmatrix} = A_1(t) \begin{bmatrix} e_x \\ e_y \\ e_\Psi \end{bmatrix} + B_1(t) \begin{bmatrix} \tilde{u} \\ \tilde{v} \\ \tilde{r} \end{bmatrix},   (8)

where

A_1(t) = \begin{bmatrix} 0 & 0 & -\bar{u}(t)\sin\bar{\Psi}(t) - \bar{v}(t)\cos\bar{\Psi}(t) \\ 0 & 0 & \bar{u}(t)\cos\bar{\Psi}(t) - \bar{v}(t)\sin\bar{\Psi}(t) \\ 0 & 0 & 0 \end{bmatrix}, \qquad B_1(t) = \begin{bmatrix} \cos\bar{\Psi}(t) & -\sin\bar{\Psi}(t) & 0 \\ \sin\bar{\Psi}(t) & \cos\bar{\Psi}(t) & 0 \\ 0 & 0 & 1 \end{bmatrix}.

Now, a proportional-integral (PI) feedback control law is designed to stabilize the tracking error:

\begin{bmatrix} \tilde{u} \\ \tilde{v} \\ \tilde{r} \end{bmatrix} = -K_{P1} \begin{bmatrix} e_x \\ e_y \\ e_\Psi \end{bmatrix} - K_{I1} \begin{bmatrix} \int e_x(t)\,\mathrm{d}t \\ \int e_y(t)\,\mathrm{d}t \\ \int e_\Psi(t)\,\mathrm{d}t \end{bmatrix}.   (9)

Define the augmented outer-loop tracking error vector by

\gamma = [\gamma_1\ \gamma_2\ \gamma_3\ \gamma_4\ \gamma_5\ \gamma_6]^{\mathsf{T}} = \left[ \textstyle\int e_x\,\mathrm{d}t\ \ \int e_y\,\mathrm{d}t\ \ \int e_\Psi\,\mathrm{d}t\ \ e_x\ \ e_y\ \ e_\Psi \right]^{\mathsf{T}}.

Then the closed-loop tracking error state equation can be written as

\dot{\gamma} = A_{1c}\,\gamma = \begin{bmatrix} O_3 & I_3 \\ -B_1 K_{I1} & A_1 - B_1 K_{P1} \end{bmatrix} \gamma,

where O_3 denotes the 3 × 3 zero matrix, and I_3 denotes the 3 × 3 identity matrix.
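The pseudo-differentiator of Eq. (7) can be exercised in discrete time. A forward-Euler sketch; the sample rate and step command are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def pseudo_diff_step(state, x_com, wn, zeta, dt):
    """One forward-Euler step of the second-order pseudo-differentiator in
    Eq. (7): state = [filtered command, approximate derivative]."""
    ad1, ad2 = wn ** 2, 2.0 * zeta * wn
    x, xdot = state
    return np.array([x + dt * xdot,
                     xdot + dt * (-ad1 * x - ad2 * xdot + ad1 * x_com)])

# Feed a unit step command: the filtered value converges to the command
# and the derivative estimate settles back to zero.
s = np.array([0.0, 0.0])
for _ in range(5000):                # 5 s at an assumed 1 kHz sample rate
    s = pseudo_diff_step(s, 1.0, wn=8.0, zeta=0.7, dt=0.001)
```

With ζ = 0.7 and ω_{n,diff} = 8 rad/s (the outer-loop values listed later in Table 5.1), the transient dies out in well under a second, so the state is essentially [1, 0] after 5 s.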
Now select K_{I1} and K_{P1} to achieve the desired closed-loop tracking error dynamics

A_{1c} = \begin{bmatrix} O_3 & I_3 \\ -\mathrm{diag}(a_{111},\ a_{121},\ a_{131}) & -\mathrm{diag}(a_{112},\ a_{122},\ a_{132}) \end{bmatrix},

where, for time-invariant closed-loop dynamics, a_{1j1} > 0, a_{1j2} > 0, j = 1, 2, 3, are the coefficients of the desired closed-loop characteristic polynomial of each channel given by \lambda^2 + a_{1j2}\lambda + a_{1j1}. It can be verified that

K_{I1} = B_1^{-1}\,\mathrm{diag}(a_{111},\ a_{121},\ a_{131})
K_{P1} = B_1^{-1}\left( A_1 + \mathrm{diag}(a_{112},\ a_{122},\ a_{132}) \right).   (10)

The body rate command to the inner loop is given by

\begin{bmatrix} u_{com} \\ v_{com} \\ r_{com} \end{bmatrix} = \begin{bmatrix} \bar{u} \\ \bar{v} \\ \bar{r} \end{bmatrix} + \begin{bmatrix} \tilde{u} \\ \tilde{v} \\ \tilde{r} \end{bmatrix}.   (11)

3.2. Inner-loop controller

From (4), the nominal motor control input voltages [\bar{E}_1(t)\ \bar{E}_2(t)\ \bar{E}_3(t)]^{\mathsf{T}} for the nominal body rate [\bar{u}(t)\ \bar{v}(t)\ \bar{r}(t)]^{\mathsf{T}} are given by

\begin{bmatrix} \bar{E}_1 \\ \bar{E}_2 \\ \bar{E}_3 \end{bmatrix} = \frac{R \cdot R_a}{k_2 n} (H B)^{-1} G \begin{bmatrix} \dot{\bar{u}} \\ \dot{\bar{v}} \\ \dot{\bar{r}} \end{bmatrix} - \frac{R \cdot R_a}{k_2 n} (H B)^{-1} \begin{bmatrix} \bar{r}\bar{v} \\ -\bar{r}\bar{u} \\ 0 \end{bmatrix} + B^{\mathsf{T}} \frac{n}{k_2 R} \left( k_2 k_3 + R_a b_0 \right) \begin{bmatrix} \bar{u} \\ \bar{v} \\ \bar{r} \end{bmatrix},   (12)

where [\dot{\bar{u}}(t)\ \dot{\bar{v}}(t)\ \dot{\bar{r}}(t)]^{\mathsf{T}} are calculated from [\bar{u}(t)\ \bar{v}(t)\ \bar{r}(t)]^{\mathsf{T}} using the pseudo-differentiator given in (7).

Defining

[e_u\ e_v\ e_r]^{\mathsf{T}} = [u\ v\ r]^{\mathsf{T}} - [\bar{u}(t)\ \bar{v}(t)\ \bar{r}(t)]^{\mathsf{T}}
[\tilde{E}_1\ \tilde{E}_2\ \tilde{E}_3]^{\mathsf{T}} = [E_1\ E_2\ E_3]^{\mathsf{T}} - [\bar{E}_1\ \bar{E}_2\ \bar{E}_3]^{\mathsf{T}}

and linearizing (4) along the nominal trajectory [\bar{u}(t)\ \bar{v}(t)\ \bar{r}(t)]^{\mathsf{T}} and the nominal motor control [\bar{E}_1\ \bar{E}_2\ \bar{E}_3]^{\mathsf{T}} yields the linearized inner-loop tracking error dynamics

\begin{bmatrix} \dot{e}_u \\ \dot{e}_v \\ \dot{e}_r \end{bmatrix} = A_2(t) \begin{bmatrix} e_u \\ e_v \\ e_r \end{bmatrix} + B_2 \begin{bmatrix} \tilde{E}_1 \\ \tilde{E}_2 \\ \tilde{E}_3 \end{bmatrix},   (13)

where

A_2(t) = G^{-1} \begin{bmatrix} 0 & \bar{r}(t) & \bar{v}(t) \\ -\bar{r}(t) & 0 & -\bar{u}(t) \\ 0 & 0 & 0 \end{bmatrix} - G^{-1} H B B^{\mathsf{T}} \left( \frac{k_2 k_3}{R_a} + b_0 \right) \frac{n^2}{R^2}, \qquad B_2 = G^{-1} H B \cdot \frac{n k_2}{R \cdot R_a}.

Design the PI feedback control law by

\begin{bmatrix} \tilde{E}_1 \\ \tilde{E}_2 \\ \tilde{E}_3 \end{bmatrix} = -K_{P2} \begin{bmatrix} e_u \\ e_v \\ e_r \end{bmatrix} - K_{I2} \begin{bmatrix} \int e_u(t)\,\mathrm{d}t \\ \int e_v(t)\,\mathrm{d}t \\ \int e_r(t)\,\mathrm{d}t \end{bmatrix}   (14)

and define the augmented inner-loop tracking error vector by

\eta = [\eta_1\ \eta_2\ \eta_3\ \eta_4\ \eta_5\ \eta_6]^{\mathsf{T}} = \left[ \textstyle\int e_u\,\mathrm{d}t\ \ \int e_v\,\mathrm{d}t\ \ \int e_r\,\mathrm{d}t\ \ e_u\ \ e_v\ \ e_r \right]^{\mathsf{T}}.

Then the closed-loop tracking error state equation can be written as

\dot{\eta} = A_{2c}\,\eta = \begin{bmatrix} O_3 & I_3 \\ -B_2 K_{I2} & A_2 - B_2 K_{P2} \end{bmatrix} \eta.

Now select K_{I2} and K_{P2} to achieve the desired closed-loop tracking error dynamics

A_{2c} = \begin{bmatrix} O_3 & I_3 \\ -\mathrm{diag}(a_{211},\ a_{221},\ a_{231}) & -\mathrm{diag}(a_{212},\ a_{222},\ a_{232}) \end{bmatrix},

where a_{2j1} > 0, a_{2j2} > 0, j = 1, 2, 3, are the coefficients of the desired (time-invariant) closed-loop characteristic polynomial of each channel given by \lambda^2 + a_{2j2}\lambda + a_{2j1}. It can be verified that

K_{I2} = B_2^{-1}\,\mathrm{diag}(a_{211},\ a_{221},\ a_{231})
K_{P2} = B_2^{-1}\left( A_2 + \mathrm{diag}(a_{212},\ a_{222},\ a_{232}) \right).   (15)

Finally, the applied voltage to the motors is given by

\begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix} = \begin{bmatrix} \bar{E}_1 \\ \bar{E}_2 \\ \bar{E}_3 \end{bmatrix} + \begin{bmatrix} \tilde{E}_1 \\ \tilde{E}_2 \\ \tilde{E}_3 \end{bmatrix}.   (16)

4. Position and orientation measurements using sensor fusion

In this section, the sensor fusion method for our omni-directional mobile robots is briefly described. Detailed design and test results are published in [33]. The sensor fusion method combines on-board encoder sensor and global vision system measurements, thereby providing reliable and accurate position and orientation measurements. In [29], a similar sensor fusion method is developed for a mobile robot using encoder, GPS and gyroscope. Different from [29], the sensor fusion method developed in this section employs a nonlinear Kalman filter technique which uses the nominal trajectory generated by the robot outer-loop controller pseudo-inverse to linearize the nonlinear robot kinematics.
A gating technique is also used to remove corrupted vision measurements.

To facilitate applying the Kalman filter algorithm, first the robot kinematics equation (5) is discretized using the forward Euler method with time interval T:

\begin{bmatrix} x[k] \\ y[k] \\ \Psi[k] \end{bmatrix} = \begin{bmatrix} x[k-1] \\ y[k-1] \\ \Psi[k-1] \end{bmatrix} + \begin{bmatrix} \cos\Psi[k-1] \cdot T & -\sin\Psi[k-1] \cdot T & 0 \\ \sin\Psi[k-1] \cdot T & \cos\Psi[k-1] \cdot T & 0 \\ 0 & 0 & T \end{bmatrix} \begin{bmatrix} u[k-1] \\ v[k-1] \\ r[k-1] \end{bmatrix}.   (17)

The Robocat body rate can be calculated from the on-board sensor (motor encoders). The robot position and orientation can be determined from the vision system. The body rate measurement [\hat{u}[k]\ \hat{v}[k]\ \hat{r}[k]]^{\mathsf{T}} and vision system measurement [z_1[k]\ z_2[k]\ z_3[k]]^{\mathsf{T}} at time-step k are defined as

\begin{bmatrix} \hat{u}[k] \\ \hat{v}[k] \\ \hat{r}[k] \end{bmatrix} = \begin{bmatrix} u[k] \\ v[k] \\ r[k] \end{bmatrix} + \begin{bmatrix} w_1[k] \\ w_2[k] \\ w_3[k] \end{bmatrix}, \qquad \begin{bmatrix} z_1[k] \\ z_2[k] \\ z_3[k] \end{bmatrix} = \begin{bmatrix} x[k] \\ y[k] \\ \Psi[k] \end{bmatrix} + \begin{bmatrix} d_1[k] \\ d_2[k] \\ d_3[k] \end{bmatrix},   (18)

where [w_1[k]\ w_2[k]\ w_3[k]]^{\mathsf{T}} is the body rate measurement noise, and [d_1[k]\ d_2[k]\ d_3[k]]^{\mathsf{T}} is the vision system noise. Both are assumed to be zero-mean white noise with normal distribution, such that

p\left( [w_1[k]\ w_2[k]\ w_3[k]]^{\mathsf{T}} \right) \sim N(0, Q[k]), \qquad p\left( [d_1[k]\ d_2[k]\ d_3[k]]^{\mathsf{T}} \right) \sim N(0, R[k]),

where Q[k] ∈ R^{3×3} is the body rate measurement covariance, and R[k] ∈ R^{3×3} is the vision system observation noise covariance.
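The forward-Euler discretization of Eq. (17) is a one-line state update. A minimal sketch; the sample time is an assumption for illustration:

```python
import numpy as np

def kinematics_step(pose, body_rate, T):
    """Eq. (17): one forward-Euler step of the discretized kinematics,
    rotating the body-frame rates (u, v, r) into the world frame."""
    x, y, psi = pose
    u, v, r = body_rate
    return np.array([x + T * (np.cos(psi) * u - np.sin(psi) * v),
                     y + T * (np.sin(psi) * u + np.cos(psi) * v),
                     psi + T * r])

# With zero heading, pure body-frame forward motion maps to world x.
pose = kinematics_step(np.zeros(3), np.array([1.0, 0.0, 0.0]), 0.01)
```

The orientation Ψ[k-1] used in the rotation matrix is the one from the previous step, which is exactly what makes the prediction Jacobians A[k] and W[k] in Step 1 below straightforward to write down.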

Table 5.1
Controller parameters

Outer-loop
  Pseudo-differentiator damping ratio and natural frequency (maximum): ζ = [0.7 0.7 0.7], ω_{n,diff} = [8 8 8] rad/s
  Closed-loop characteristic polynomial: [a_{111} a_{121} a_{131}] = [16 16 16], [a_{112} a_{122} a_{132}] = [7.2 7.2 7.2]
  Closed-loop damping ratio and natural frequency (maximum): ζ = [0.9 0.9 0.9], ω_n = [4 4 4] rad/s

Inner-loop
  Pseudo-differentiator damping ratio and natural frequency (maximum): ζ = [0.7 0.7 0.7], ω_{n,diff} = [40 40 40] rad/s
  Closed-loop characteristic polynomial: [a_{211} a_{221} a_{231}] = [400 400 400], [a_{212} a_{222} a_{232}] = [36 36 36]
  Closed-loop damping ratio and natural frequency (maximum): ζ = [0.9 0.9 0.9], ω_n = [20 20 20] rad/s

In practice, the vision system may lose frames or misidentify the robot on the field of play. Gating is a technique for eliminating the most unlikely outlier measurements [24]. There are several commonly used gating algorithms. In this paper, a simple rectangular gate is used. The nonlinear Kalman filter algorithm for robot location and orientation estimation with gating is summarized below.

Step 1: At time-step k, predict the robot position and orientation from the on-board sensor body rate measurement

\begin{bmatrix} x^-[k] \\ y^-[k] \\ \Psi^-[k] \end{bmatrix} = \begin{bmatrix} \hat{x}[k-1] \\ \hat{y}[k-1] \\ \hat{\Psi}[k-1] \end{bmatrix} + \begin{bmatrix} T\cos\hat{\Psi}[k-1] & -T\sin\hat{\Psi}[k-1] & 0 \\ T\sin\hat{\Psi}[k-1] & T\cos\hat{\Psi}[k-1] & 0 \\ 0 & 0 & T \end{bmatrix} \begin{bmatrix} \hat{u}[k-1] \\ \hat{v}[k-1] \\ \hat{r}[k-1] \end{bmatrix},   (19)

where [x^-[k]\ y^-[k]\ \Psi^-[k]]^{\mathsf{T}} is the a priori location estimate. Then calculate the prediction covariance as

P^-[k] = A[k] \cdot P[k-1] \cdot A[k]^{\mathsf{T}} + W[k] \cdot Q[k-1] \cdot W[k]^{\mathsf{T}},   (20)

where

A[k] = \begin{bmatrix} 1 & 0 & -T\sin\hat{\Psi}[k-1]\,\hat{u}[k-1] - T\cos\hat{\Psi}[k-1]\,\hat{v}[k-1] \\ 0 & 1 & T\cos\hat{\Psi}[k-1]\,\hat{u}[k-1] - T\sin\hat{\Psi}[k-1]\,\hat{v}[k-1] \\ 0 & 0 & 1 \end{bmatrix}, \qquad W[k] = \begin{bmatrix} T\cos\hat{\Psi}[k-1] & -T\sin\hat{\Psi}[k-1] & 0 \\ T\sin\hat{\Psi}[k-1] & T\cos\hat{\Psi}[k-1] & 0 \\ 0 & 0 & T \end{bmatrix}.

Step 2: Read the vision system measurement [z_1[k]\ z_2[k]\ z_3[k]]^{\mathsf{T}}.
If the vision system data is not available, go to Step 4. If the vision system data is available, calculate the innovation residue using (21):

\begin{bmatrix} e_{z_1}[k] \\ e_{z_2}[k] \\ e_{z_3}[k] \end{bmatrix} = \begin{bmatrix} z_1[k] \\ z_2[k] \\ z_3[k] \end{bmatrix} - \begin{bmatrix} x^-[k] \\ y^-[k] \\ \Psi^-[k] \end{bmatrix}.   (21)

The rectangular gating is defined as

\left| e_{z_i}[k] \right| \le 3\sqrt{\sigma_{R_i}^2 + \sigma_{P_i}^2}, \quad i = 1, 2, 3,   (22)

where \sigma_{R_i}^2 is the ith diagonal element of the vision system noise covariance, and \sigma_{P_i}^2 is the ith diagonal element of the prediction covariance P^-[k]. If all innovation residues satisfy the above gating condition, the vision measurement is considered valid; proceed to Step 3. Otherwise, go to Step 4.

Step 3: Correction with valid vision data

K[k] = P^-[k] \left( P^-[k] + R[k] \right)^{-1}
P[k] = \left( I - K[k] \right) P^-[k].   (23)

The posterior estimate is

\begin{bmatrix} \hat{x}[k] \\ \hat{y}[k] \\ \hat{\Psi}[k] \end{bmatrix} = \begin{bmatrix} x^-[k] \\ y^-[k] \\ \Psi^-[k] \end{bmatrix} + K[k] \begin{bmatrix} e_{z_1}[k] \\ e_{z_2}[k] \\ e_{z_3}[k] \end{bmatrix}.   (24)

Go to Step 1.

Step 4: Update without correction

\begin{bmatrix} \hat{x}[k] \\ \hat{y}[k] \\ \hat{\Psi}[k] \end{bmatrix} = \begin{bmatrix} x^-[k] \\ y^-[k] \\ \Psi^-[k] \end{bmatrix}   (25)
P[k] = P^-[k].   (26)

Go to Step 1.

The estimate [\hat{x}[k]\ \hat{y}[k]\ \hat{\Psi}[k]]^{\mathsf{T}} is used for outer-loop feedback at time-step k. In (20), A[k] and W[k] are generated by linearizing the measurement error along the nominal trajectory. The nonlinear Kalman filter employed in this paper is motivated by the nonlinear observer design
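One pass of the gated correction (Steps 2 through 4) can be sketched compactly. This is a simplified illustration, not the paper's implementation: the measurement model is the identity, and the covariances are assumed diagonal so the rectangular gate of Eq. (22) reads directly off their diagonals:

```python
import numpy as np

def fuse_step(pose_prior, P_prior, z, R_cov):
    """One gated Kalman correction. Accept the vision measurement only if
    every innovation component passes the 3-sigma rectangular gate of
    Eq. (22); otherwise keep the odometry prediction unchanged."""
    innovation = z - pose_prior                       # Eq. (21)
    gate = 3.0 * np.sqrt(np.diag(R_cov) + np.diag(P_prior))
    if np.all(np.abs(innovation) <= gate):            # Eq. (22)
        K = P_prior @ np.linalg.inv(P_prior + R_cov)  # Eq. (23)
        pose = pose_prior + K @ innovation            # Eq. (24)
        P = (np.eye(3) - K) @ P_prior
    else:
        pose, P = pose_prior.copy(), P_prior.copy()   # Eqs. (25)-(26)
    return pose, P

# A vision sample near the prediction is fused; a far outlier is gated out.
pose0, P0, R0 = np.zeros(3), 0.01 * np.eye(3), 0.01 * np.eye(3)
fused, _ = fuse_step(pose0, P0, np.array([0.1, 0.0, 0.0]), R0)
rejected, _ = fuse_step(pose0, P0, np.array([10.0, 0.0, 0.0]), R0)
```

With equal prior and measurement variances the Kalman gain is 0.5, so the accepted innovation is split evenly between odometry and vision, while the outlier leaves the estimate untouched.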

Fig. 4. Sensor fusion result. (a) Position and orientation estimation comparison. (b) Gating decision.

based on trajectory linearization [20]. It is similar to the linearized Kalman filter [21] in that both techniques use an open-loop nominal trajectory for linearization. But they differ in that the closed-loop TLC controller ensures that the actual robot trajectory stays close to the nominal trajectory. It is different from exte

