Unscented Filtering And Nonlinear Estimation


Unscented Filtering and Nonlinear Estimation

SIMON J. JULIER, MEMBER, IEEE, AND JEFFREY K. UHLMANN, MEMBER, IEEE

Invited Paper

The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that it is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the updates. Many of these difficulties arise from its use of linearization. To overcome this limitation, the unscented transformation (UT) was developed as a method to propagate mean and covariance information through nonlinear transformations. It is more accurate, easier to implement, and uses the same order of calculations as linearization. This paper reviews the motivation, development, use, and implications of the UT.

Keywords: Estimation, Kalman filtering, nonlinear systems, target tracking.

I. INTRODUCTION

This paper considers the problem of applying the Kalman filter (KF) to nonlinear systems. Estimation in nonlinear systems is extremely important because almost all practical systems, from target tracking [1] to vehicle navigation, and from chemical process plant control [2] to dialysis machines, involve nonlinearities of one kind or another. Accurately estimating the state of such systems is extremely important for fault detection and control applications. However, estimation in nonlinear systems is extremely difficult. The optimal (Bayesian) solution to the problem requires the propagation of the description of the full probability density function (pdf) [3]. This solution is extremely general and incorporates aspects such as multimodality, asymmetries, and discontinuities. However, because the form of the pdf is not restricted, it cannot, in general, be described using a finite number of parameters. Therefore, any practical estimator must use an approximation of some kind.
Manuscript received March 14, 2003; revised November 27, 2003. S. J. Julier is with IDAK Industries, Jefferson City, MO 65109 USA (e-mail: sjulier@idak.com). J. K. Uhlmann is with the Department of Computer Engineering and Computer Science, University of Missouri–Columbia, Columbia, MO 65211 USA (e-mail: uhlmannj@missouri.edu). Digital Object Identifier 10.1109/JPROC.2003.823141

Many different types of approximations have been developed; unfortunately, most are either computationally unmanageable or require special assumptions about the form of the process and observation models that cannot be satisfied in practice. For these and other reasons, the KF remains the most widely used estimation algorithm.

The KF only utilizes the first two moments of the state (mean and covariance) in its update rule. Although this is a relatively simple state representation, it offers a number of important practical benefits.

1) The mean and covariance of an unknown distribution require the maintenance of only a small and constant amount of information, but that information is sufficient to support most kinds of operational activities (e.g., defining a validation gate for a search region for a target). Thus, it is a successful compromise between computational complexity and representational flexibility. By contrast, the complete characterization of an evolving error distribution requires the maintenance of an unbounded number of parameters. Even if it were possible to maintain complete pdf information, that information may not be operationally useful (e.g., because the exploitation of the information is itself an intractable problem).

2) The mean and covariance (or its square root) are linearly transformable quantities. For example, if an error distribution has mean x̄ and covariance P_xx, the mean and covariance of the distribution after it has undergone the linear transformation y = A x are simply A x̄ and A P_xx Aᵀ.
In other words, mean and covariance estimates can be maintained effectively when subjected to linear and quasi-linear transformations. Similar results do not hold for other nonzero moments of a distribution.

3) Sets of mean and covariance estimates can be used to characterize additional features of a distribution, e.g., its significant modes. Multimodal tracking methods based on the maintenance of multiple mean and covariance estimates include multiple-hypothesis tracking [4], sum-of-Gaussian filters [5], and Rao–Blackwellized particle filters [6].

0018-9219/04 $20.00 © 2004 IEEE. PROCEEDINGS OF THE IEEE, VOL. 92, NO. 3, MARCH 2004
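Property 2) is easy to verify numerically. The following pure-Python sketch (the matrix A, the mean, and the covariance values are illustrative choices, not from the paper) transforms Gaussian samples and compares the sample statistics with the closed-form A x̄ and A P Aᵀ:

```python
import math
import random

random.seed(0)

# Illustrative values: a 2-D state with mean xbar, diagonal
# covariance P (independent components), and a linear map y = A x.
xbar = [1.0, -2.0]
P = [[0.09, 0.0], [0.0, 0.04]]
A = [[2.0, 1.0], [0.5, -1.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

# Closed form: mean A*xbar, covariance A*P*A^T.
ybar = [sum(A[i][j] * xbar[j] for j in range(2)) for i in range(2)]
Pyy = matmul(matmul(A, P), transpose(A))

# Monte Carlo check: transform samples of x, estimate mean and covariance.
N = 100_000
samples = []
for _ in range(N):
    x = [xbar[0] + random.gauss(0.0, math.sqrt(P[0][0])),
         xbar[1] + random.gauss(0.0, math.sqrt(P[1][1]))]
    samples.append([sum(A[i][j] * x[j] for j in range(2)) for i in range(2)])

mc_mean = [sum(s[i] for s in samples) / N for i in range(2)]
mc_cov = [[sum((s[i] - mc_mean[i]) * (s[j] - mc_mean[j]) for s in samples) / N
           for j in range(2)] for i in range(2)]
```

The sample mean and covariance of the transformed points agree with the closed-form quantities up to sampling error, which is exactly the property the KF exploits.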

The most common application of the KF to nonlinear systems is in the form of the extended KF (EKF) [7], [8]. Exploiting the assumption that all transformations are quasi-linear, the EKF simply linearizes all nonlinear transformations and substitutes Jacobian matrices for the linear transformations in the KF equations. Although the EKF maintains the elegant and computationally efficient recursive update form of the KF, it suffers a number of serious limitations.

1) Linearized transformations are only reliable if the error propagation can be well approximated by a linear function. If this condition does not hold, the linearized approximation can be extremely poor. At best, this undermines the performance of the filter. At worst, it causes its estimates to diverge altogether. However, determining the validity of this assumption is extremely difficult because it depends on the transformation, the current state estimate, and the magnitude of the covariance. This problem is well documented in many applications such as the estimation of ballistic parameters of missiles [1], [9]–[12] and computer vision [13]. In Section II-C we illustrate its impact on the near-ubiquitous nonlinear transformation from polar to Cartesian coordinates.

2) Linearization can be applied only if the Jacobian matrix exists. However, this is not always the case. Some systems contain discontinuities (for example, the process model might be jump-linear [14], in which the parameters can change abruptly, or the sensor might return highly quantized measurements [15]), others have singularities (for example, perspective projection equations [16]), and in others the states themselves are inherently discrete (e.g., a rule-based system for predicting the evasive behavior of a piloted aircraft [17]).

3) Calculating Jacobian matrices can be a very difficult and error-prone process. The Jacobian equations frequently produce many pages of dense algebra that must be converted to code (e.g., see the Appendix to [18]).
This introduces numerous opportunities for human coding errors that may undermine the performance of the final system in a manner that cannot be easily identified and debugged, especially given the fact that it is difficult to know what quality of performance to expect. Regardless of whether the obscure code associated with a linearized transformation is or is not correct, it presents a serious problem for subsequent users who must validate it for use in any high-integrity system.

The unscented transformation (UT) was developed to address the deficiencies of linearization by providing a more direct and explicit mechanism for transforming mean and covariance information. In this paper we describe the general UT mechanism along with a variety of special formulations that can be tailored to the specific requirements of different nonlinear filtering and control applications. The structure of this paper is as follows. Section II reviews the relationship between the KF and the EKF for nonlinear systems and motivates the development of the UT. An overview of the UT is provided in Section III and some of its properties are discussed. Section IV discusses the algorithm in more detail, and some practical implementation considerations are considered in Section V. Section VI describes how the transformation can be applied within the KF's recursive structure. Section VII considers the implications of the UT. Summary and conclusions are given in Section VIII. The paper also includes a comprehensive series of Appendixes which provide detailed analyses of the performance properties of the UT and further extensions to the algorithm.

II. PROBLEM STATEMENT

A. Applying the KF to Nonlinear Systems

Many textbooks derive the KF as an application of Bayes' rule under the assumption that all estimates have independent, Gaussian-distributed errors. This has led to a common misconception that the KF can only be strictly applied under Gaussianity assumptions.
However, Kalman's original derivation did not apply Bayes' rule and does not require the exploitation of any specific error distribution information beyond the mean and covariance [19].

To understand the limitations of the EKF, it is necessary to consider the KF recursion equations. Suppose that the estimate at time step k-1 is described by the mean x̂(k-1|k-1) and covariance P(k-1|k-1). It is assumed that this is consistent in the sense that [7]

    E[ x̃(k-1|k-1) x̃ᵀ(k-1|k-1) ] ≤ P(k-1|k-1)    (1)

where x̃(k-1|k-1) = x(k-1) - x̂(k-1|k-1) is the estimation error.¹

The KF consists of two steps: prediction followed by update. In the prediction step, the filter propagates the estimate from a previous time step k-1 to the current time step k. The prediction is given by the predicted state x̂(k|k-1) and its covariance P(k|k-1), obtained by propagating the previous estimate through the process model.

The update (or measurement update) can be derived as the linear minimum mean-squared error estimator [20]. Given that the mean is to be updated by the linear rule

    x̂(k|k) = x̂(k|k-1) + W(k) ν(k)

the weight (gain) matrix W(k) is chosen to minimize the trace of the updated covariance P(k|k). Its value is calculated from

    W(k) = P_xν(k|k-1) P_νν⁻¹(k|k-1)

where P_xν(k|k-1) is the cross covariance between the error in x̂(k|k-1) and the error in the predicted observation ẑ(k|k-1), ν(k) = z(k) - ẑ(k|k-1) is the innovation, and P_νν(k|k-1) is the covariance of ν(k).

¹A conservative estimate replaces the inequality with a strictly greater than relationship.
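The gain and update rule above can be exercised in a minimal scalar sketch (the numbers are illustrative, not from the paper). For a direct measurement z = x + v, the cross covariance is the prior variance and the innovation covariance is the prior variance plus the measurement noise variance:

```python
# Scalar linear KF measurement update, following the linear
# minimum mean-squared error rule x_upd = x_pred + W * (z - z_pred).
# Illustrative numbers, not from the paper.
x_pred = 0.0     # predicted state mean
P_pred = 1.0     # predicted state variance
R = 1.0          # measurement noise variance
z = 2.0          # received measurement

z_pred = x_pred          # observation model: z = x + v
P_xv = P_pred            # cross covariance between state and innovation
P_vv = P_pred + R        # innovation covariance

W = P_xv / P_vv                      # gain
x_upd = x_pred + W * (z - z_pred)    # updated mean
P_upd = P_pred - W * P_vv * W        # updated variance
```

With equal prior and measurement variances the gain is 0.5, so the updated mean sits halfway between prediction and measurement and the variance halves, which is the familiar behavior of the linear filter.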

Using this weight, the updated covariance is

    P(k|k) = P(k|k-1) - W(k) P_νν(k|k-1) Wᵀ(k)

where the fact that ν(k) = z(k) - ẑ(k|k-1) has been used.

Therefore, the KF update equations can be applied if several sets of expectations can be calculated. These are the predicted state x̂(k|k-1), the predicted observation ẑ(k|k-1), and the cross covariance between the prediction and the observation, P_xν(k|k-1). When all of the system equations are linear, direct substitution into the above equations gives the familiar linear KF equations. When the system is nonlinear, methods for approximating these quantities must be used. Therefore, the problem of applying the KF to a nonlinear system becomes one of applying nonlinear transformations to mean and covariance estimates.

B. Propagating Means and Covariances Through Nonlinear Transformations

Consider the following problem. A random variable x has mean x̄ and covariance P_xx. A second random variable, y, is related to x through the nonlinear transformation

    y = f(x).    (2)

The problem is to calculate a consistent estimate of y with mean ȳ and covariance P_yy. Both the prediction and update steps of a KF can be written in this form.²

Taking the multidimensional Taylor series expansion

    y = f(x̄ + δx) = f(x̄) + D_δx f + (1/2!) D²_δx f + (1/3!) D³_δx f + ...    (3)

where the D_δx operator evaluates the total differential of f(·) when perturbed around a nominal value x̄ by δx. The nth term in the Taylor series for y is

    (1/n!) Dⁿ_δx f = (1/n!) [ Σ_i δx_i (∂/∂x_i) ]ⁿ f(x) |_{x = x̄}    (4)

where δx_i is the ith component of δx. Therefore, the nth term in the series is an nth-order polynomial in the components of δx, whose coefficients are given by the nth-order derivatives of f(·).

In Appendix I we derive the full expression for the mean and covariance using this series. Linearization assumes that all second and higher order terms in the Taylor series expansion are negligible, i.e.,

    y ≈ f(x̄) + D_δx f.

Taking outer products and expectations, and by exploiting the assumption that the estimation error is approximately zero mean, the mean and covariance are

    ȳ = f(x̄)    (5)
    P_yy = ∇f P_xx (∇f)ᵀ    (6)

where ∇f is the Jacobian of f(·).

However, the full Taylor series expansion of this function, given in Appendix I, shows that these quantities contain higher order terms that are a function of the statistics of x and higher derivatives of the nonlinear transformation. In some situations, these terms have negligible effects. In other situations, however, they can significantly degrade estimator performance. One dramatic and practically important example is the transformation from polar to Cartesian coordinates [12].

C. Polar to Cartesian Coordinate Transformations

One of the most important and ubiquitous transformations is the conversion from polar to Cartesian coordinates. This transformation forms the foundation for the observation models of many sensors, from radar to laser range finders. A sensor returns polar information (r, θ) in its local coordinate frame that has to be converted into an (x, y) estimate of the target position in some global Cartesian coordinate frame

    (x, y) = (r cos θ, r sin θ).    (7)

Problems arise when the bearing error is significant. As an example, a range-optimized sonar sensor can provide fairly good measurements of range (2-cm standard deviation) but extremely poor measurements of bearing (standard deviation of 15°) [21]. The effects of the large error variances on the nonlinearly transformed estimate are shown in Fig. 1, which shows the results for a target whose true position is (0, 1). Fig. 1(a) plots several hundred (x, y) samples. These were derived by taking the actual (r, θ) value of the target location, adding Gaussian zero-mean noise terms to each component, and then converting to (x, y) using (7). As can be seen, the points lie on a "banana"-shaped arc. The range error causes the points to lie in a band, and the bearing error causes this region to be stretched around the circumference of a circle. As a result, the mean does not lie at (0, 1) but is actually located closer to the origin. This is confirmed in Fig. 1(b), which compares the mean and covariance of the converted coordinates calculated using Monte Carlo sampling and using linearization. The figure plots the 1σ contours calculated by each method. The 1σ contour is the locus of points satisfying (y - ȳ)ᵀ P_yy⁻¹ (y - ȳ) = 1 and is a graphical representation of the size and orientation of P_yy.

Compared to the "true" result, the linearized estimate is biased and inconsistent. This is most evident in the y direction. The linearized mean is at 1.0 m, but the true mean is at 96.7 cm. Because it is a bias that arises from the transformation process itself, the same error with the same sign will be committed every time a coordinate transformation takes place. Even if there were no bias, the transformation would still be inconsistent because it underestimates the variance in the y component.³

²Both the prediction and the update can be cast as instances of (2): the prediction corresponds to propagating the previous estimate through the process model f[·], and the update corresponds to propagating the predicted state through the observation model h[·].

³It could be argued that these errors arise because the measurement errors are unreasonably large. However, Lerro [12] demonstrated that, in radar tracking applications, the transformations can become inconsistent when the standard deviation of the bearing measurement is less than a degree.
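The bias described above is easy to reproduce. A minimal pure-Python sketch (the sensor values are those of the sonar example; the seed and sample count are arbitrary):

```python
import math
import random

random.seed(1)

# True target position (0, 1) corresponds to range 1 m, bearing 90 degrees.
r_true, theta_true = 1.0, math.pi / 2
sigma_r = 0.02                    # 2-cm range standard deviation
sigma_theta = math.radians(15.0)  # 15-degree bearing standard deviation

# Draw noisy polar measurements and convert each to Cartesian with (7).
N = 100_000
xs, ys = [], []
for _ in range(N):
    r = r_true + random.gauss(0.0, sigma_r)
    theta = theta_true + random.gauss(0.0, sigma_theta)
    xs.append(r * math.cos(theta))
    ys.append(r * math.sin(theta))

x_mean = sum(xs) / N
y_mean = sum(ys) / N
# The sample mean of y falls noticeably below the true value of 1.0
# (about 0.966 for these noise levels), reproducing the bias toward
# the origin described in the text.
```

The bias is systematic: rerunning with any seed gives a y mean near 0.966, not 1.0, because the curvature of the transformation pulls the cloud of converted points toward the origin.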

Fig. 1. The true nonlinear transformation and the statistics calculated by Monte Carlo analysis and by linearization. Note that the scaling in the x and y axes is different. (a) Monte Carlo samples from the transformation and the mean calculated through linearization. (b) Results from linearization: the true mean and its solid uncertainty ellipse, compared with the linearized mean and its dashed uncertainty ellipse.

There are several strategies that could be used to address this problem. The most common approach is to apply linearization and "tune" the observation covariance by padding it with a sufficiently large positive definite matrix that the transformed estimate becomes consistent.⁴ However, this approach may unnecessarily increase the assumed uncertainty in some directions in the state space, and it does nothing to address the problem of the bias. A second approach would be to perform a detailed analysis of the transformations and derive precise closed-form solutions for the transformed mean and covariance under specific distribution assumptions. In the case of the polar-to-Cartesian transformation of an assumed Gaussian-distributed observation estimate, such closed-form solutions do exist [12], [23]. However, exact solutions can only be derived in special cases with highly specific assumptions. A slightly more general approach is to note that linearization errors arise from an implicit truncation of the Taylor series description of the true transformed estimate. Therefore, maintaining higher order terms may lead to better results. One of the first to attempt this was

⁴This process is sometimes euphemistically referred to as "injecting stabilizing noise" [22].

Fig. 2. The principle of the UT.

Athans [9], who developed a second-order Gaussian filter. This filter assumes that the model is piecewise quadratic and truncates the Taylor series expansion after its second term. However, to implement this filter, the Hessian (tensor of second-order derivatives) must be derived. Typically, deriving the Hessian is even more difficult than deriving a Jacobian, especially for a sophisticated, high-fidelity system model. Furthermore, it is not clear under what conditions the use of the Hessian will yield improved estimates when the strict assumption of Gaussianity is violated.

In summary, the KF can be applied to nonlinear systems if a consistent set of predicted quantities can be calculated. These quantities are derived by projecting a prior estimate through a nonlinear transformation. Linearization, as applied in the EKF, is widely recognized to be inadequate, but the alternatives incur substantial costs in terms of derivation and computational complexity. Therefore, there is a strong need for a method that is provably more accurate than linearization but does not incur the implementation nor computational costs of other higher order filtering schemes. The UT was developed to meet these needs.

III. THE UNSCENTED TRANSFORMATION

A. Basic Idea

The UT is founded on the intuition that it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation [24]. The approach is illustrated in Fig. 2: a set of points (sigma points) is chosen so that their mean and covariance are x̄ and P_xx. The nonlinear function is applied to each point, in turn, to yield a cloud of transformed points. The statistics of the transformed points can then be calculated to form an estimate of the nonlinearly transformed mean and covariance.

Although this method bears a superficial resemblance to particle filters, there are several fundamental differences. First, the sigma points are not drawn at random; they are deterministically chosen so that they exhibit certain specific properties (e.g., have a given mean and covariance). As a result, high-order information about the distribution can be captured with a fixed, small number of points. The second difference is that sigma points can be weighted in ways that are inconsistent with the distribution interpretation of sample points in a particle filter. For example, the weights on the points do not have to lie in the range [0, 1].

A set of sigma points consists of p + 1 vectors X_i and their associated weights W_i. The weights can be positive or negative but, to provide an unbiased estimate, they must obey the condition

    Σ_{i=0}^{p} W_i = 1.    (8)

Given these points, ȳ and P_yy are calculated as follows.

1) Instantiate each point through the function to yield the set of transformed sigma points

    Y_i = f(X_i).

2) The mean is given by the weighted average of the transformed points

    ȳ = Σ_{i=0}^{p} W_i Y_i.    (9)

3) The covariance is the weighted outer product of the transformed points

    P_yy = Σ_{i=0}^{p} W_i (Y_i - ȳ)(Y_i - ȳ)ᵀ.    (10)

The statistics of any other function can be calculated in a similar manner.

One set of points that satisfies the above conditions consists of a symmetric set of 2n points that lie on the √n-th covariance contour [24]

    X_i = x̄ + (√(n P_xx))_i,        W_i = 1/(2n)
    X_{i+n} = x̄ - (√(n P_xx))_i,    W_{i+n} = 1/(2n)    (11)

for i = 1, ..., n

where (√(n P_xx))_i is the ith row or column⁵ of the matrix square root of n P_xx (the original covariance matrix multiplied by the number of dimensions n), and W_i is the weight associated with the ith point.

Despite its apparent simplicity, the UT has a number of important properties.

1) Because the algorithm works with a finite number of sigma points, it naturally lends itself to being used in a "black box" filtering library. Given a model (with appropriately defined inputs and outputs), a standard routine can be used to calculate the predicted quantities as necessary for any given transformation.

2) The computational cost of the algorithm is the same order of magnitude as the EKF. The most expensive operations are calculating the matrix square root and the outer products required to compute the covariance of the projected sigma points. However, both operations are O(n³), which is the same as evaluating the n × n matrix multiplications needed to calculate the EKF predicted covariance.⁶ This contrasts with methods such as Gauss–Hermite quadrature [26], for which the required number of points scales geometrically with the number of dimensions.

3) Any set of sigma points that encodes the mean and covariance correctly, including the set in (11), calculates the projected mean and covariance correctly to the second order (see Appendixes I and II). Therefore, the estimate implicitly includes the second-order "bias correction" term of the truncated second-order filter, but without the need to calculate any derivatives. Therefore, the UT is not the same as using a central difference scheme to calculate the Jacobian.⁷

4) The algorithm can be used with discontinuous transformations. Sigma points can straddle a discontinuity and, thus, can approximate the effect of a discontinuity on the transformed estimate. This is discussed in more detail in Section VII.

The improved accuracy of the UT can be demonstrated with the polar-to-Cartesian transformation problem.

B. The Demonstration Revisited

Using sigma points determined by (11), the performance of the UT is shown in Fig. 3. Fig. 3(a) plots the set of transformed sigma points. The original set of points were symmetrically distributed about the origin, and the nonlinear transformation has changed the distribution into a triangle with a point at the center. The mean and covariance of the UT, compared to the true and linearized values, are shown in Fig. 3(b). The UT mean is extremely close to that of the true transformed distribution. This reflects the effect of the second-order bias correction term that is implicitly and automatically incorporated into the mean via the UT. However, the UT covariance estimate underestimates the true covariance of the actual transformed distribution. This is because the set of points described above is only accurate to the second order. Therefore, although the mean is predicted much more accurately, the UT predicted covariance is of the same order of accuracy as linearization. The transformed estimate could be made consistent by adding stabilizing noise to increase P_yy; however, the UT framework provides more direct mechanisms that can greatly improve the accuracy of the estimate. These are considered next.

IV. EXPLOITING HIGHER ORDER INFORMATION

The example in the previous section illustrates that the UT, using the sigma point selection algorithm in (11), has significant implementation and accuracy advantages over linearization. However, because the UT offers enough flexibility to allow information beyond mean and covariance to be incorporated into the set of sigma points, it is possible to pick a set that exploits any additional known information about the error distribution associated with an estimate.

A. Extending the Symmetric Set

Suppose a set of points is constructed to have a given mean and covariance, e.g., according to (11). If another point equal to the given mean were added to the set, then the mean of the set would be unaffected, but the remaining points would have to be scaled to maintain the given covariance. The scaled result is a different sigma set, with different higher moments, but with the same mean and covariance. As will be shown, weighting this newly added point provides a parameter for controlling some aspects of the higher moments of the distribution of sigma points without affecting the mean and covariance. By convention, let W_0 be the weight on the mean point, which is indexed as the zeroth point. Including this point and adjusting the weights so that the normality, mean, and covariance constraints are preserved, the new point distribution becomes⁸

    X_0 = x̄,                                    W_0
    X_i = x̄ + (√((n/(1 - W_0)) P_xx))_i,        W_i = (1 - W_0)/(2n)
    X_{i+n} = x̄ - (√((n/(1 - W_0)) P_xx))_i,    W_{i+n} = (1 - W_0)/(2n)    (12)

for i = 1, ..., n.

⁵If the matrix square root A of P is of the form P = AᵀA, then the sigma points are formed from the rows of A. However, if the matrix square root is of the form P = AAᵀ, the columns of A are used.

⁶The matrix square root should be calculated using numerically efficient and stable methods such as the Cholesky decomposition [25].

⁷Lefebvre recently argued for an alternative interpretation of the UT as being an example of least-squares regression [27]. Although regression analysis can be used to justify the UT's accuracy benefits over linearization, it does not provide a prescriptive framework, e.g., for deriving the extensions to higher order information described in Section IV.

⁸In the scalar case, this distribution is the same as the perturbation result described by Holtzmann [28]. However, the generalization of Holtzmann's method to multiple dimensions is accurate only under the assumption that the errors in each dimension are independent of one another.
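The symmetric set (11) and the transformed statistics (9) and (10) can be sketched in a few lines of pure Python for the diagonal-covariance sonar example (sensor values as above; a minimal illustration, not the paper's implementation):

```python
import math

# Prior in polar coordinates: mean (r, theta) and diagonal covariance,
# using the sonar example (2-cm range, 15-degree bearing standard deviation).
n = 2
xbar = [1.0, math.pi / 2]
sig = [0.02, math.radians(15.0)]   # square roots of the diagonal of P_xx

def f(x):
    """Polar-to-Cartesian transformation (7)."""
    r, th = x
    return [r * math.cos(th), r * math.sin(th)]

# Symmetric sigma-point set (11): for diagonal P_xx, the ith column of
# the matrix square root of n*P_xx is sqrt(n)*sigma_i along axis i.
points, weights = [], []
for i in range(n):
    step = [0.0, 0.0]
    step[i] = math.sqrt(n) * sig[i]
    points.append([xbar[0] + step[0], xbar[1] + step[1]])
    points.append([xbar[0] - step[0], xbar[1] - step[1]])
    weights += [1.0 / (2 * n), 1.0 / (2 * n)]

# Step 1: transform the points. Step 2: weighted mean (9).
ys = [f(p) for p in points]
ybar = [sum(w * y[j] for w, y in zip(weights, ys)) for j in range(2)]

# Step 3: weighted outer product gives the covariance (10).
Pyy = [[sum(w * (y[i] - ybar[i]) * (y[j] - ybar[j])
            for w, y in zip(weights, ys)) for j in range(2)] for i in range(2)]
# ybar[1] lands near 0.966, close to the true transformed mean and well
# below the linearized value of 1.0, with no derivatives computed.
```

Only four function evaluations are needed here, and the second-order bias correction appears automatically in the mean, which is the behavior Fig. 3 illustrates.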

Fig. 3. The mean and standard deviation ellipses for the true statistics, those calculated through linearization, and those calculated by the UT. (a) The location of the sigma points which have undergone the nonlinear transformation. (b) The results of the UT compared to linearization and to the results calculated through Monte Carlo analysis. The true mean and its dotted uncertainty ellipse, the UT mean and its solid ellipse, and the linearized mean and its dotted ellipse are shown.

The value of W_0 controls how the other points will be repositioned. If W_0 > 0, the points tend to move further from the origin. If W_0 < 0 (a valid assumption because the UT points are not a pdf and so do not require nonnegativity constraints), the points tend to be closer to the origin.

The impact of this extra point on the moments of the distribution is analyzed in detail in Appendix II. However, a more intuitive demonstration of the effect can be obtained from the example of the polar-to-Cartesian transformation. Given the fact that the assumed prior distribution (sensor error) is Gaussian in the local polar coordinate frame of the sensor, and using the analysis in Appendix II, the choice of W_0 derived there can be justified because it guarantees that some of the fourth-order moments are the same as in the true Gaussian case. Fig. 4(a) shows the points generated with the augmented set. The additional point lies at the mean calculated by the EKF. The effect is shown in more detail in Fig. 4(b). These dramatic improvements come from the
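The repositioning effect of W_0 in (12) can be read off directly from the sigma-point radii, since each non-central point sits at a distance from the mean scaled by sqrt(n / (1 - W_0)). A small sketch (values illustrative):

```python
import math

def sigma_spread(W0, n=2):
    """Scale factor applied to the non-central points of the augmented
    set (12): points sit at xbar +/- sqrt(n / (1 - W0)) times a column
    of the square root of P_xx. Valid for W0 < 1."""
    return math.sqrt(n / (1.0 - W0))

# W0 > 0 pushes the points away from the mean, W0 < 0 pulls them in,
# and W0 = 0 recovers the spread of the basic symmetric set (11).
spread_pos = sigma_spread(0.5)    # greater than sqrt(2)
spread_zero = sigma_spread(0.0)   # equal to sqrt(2)
spread_neg = sigma_spread(-1.0)   # less than sqrt(2)
```

This is the mechanism behind the statement in the text: the single weight W_0 tunes the fourth and higher moments of the point set while the mean and covariance constraints stay satisfied.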

Fig. 4. The mean and standard deviation ellipses for the true statistics, those calculated through linearization, and those calculated by the UT with the augmented set. (a) The location of the sigma points which have undergone the nonlinear transformation. (b) The results of the UT compared to linearization and to the results calculated through Monte Carlo analysis. The true mean and its dotted uncertainty ellipse, the UT mean and its solid ellipse, and the linearized mean and its dotted ellipse are shown.

fact that the extra point exploits control of the higher order moments.⁹

B. General Sigma Point Selection Framework

The extension in the previous section shows that knowledge of higher order information can be partially incorporated into the sigma point set. This concept can be generalized so that the UT can be used to propagate any higher order information about the moments.

Because no practical filter can maintain the full distribution of the state, a simpler distribution of a fixed parametric form may be heuristically assumed at time step k. If this form is chosen to be computationally or analytically tractable to work with, and if it captures the critical salient features of the true distribution, then it can be used as the basis for a good approximate solution to the nonlinear transformation problem. One possibility explored by Kushner is to assume that all distributions are Gaussian [30], i.e., that x is a Gaussian-distributed random variable with mean x̄ and covariance P_xx.

Although Gaussianity is not always a good assumption to make, in many cases it provides a good example to demonstrate how a set of sigma points can be used to capture, or match, different properties of a given distribution. This matching can be written in terms of a set of nonlinear constraints on the sigma points and their weights. For example, the symmetric set of points given in (11) matches the mean and covariance of a Gaussian and, by virtue of its symmetry, it also matches the third moment (the skew) as well.

The constraints are not always sufficient to uniquely determine the sigma point set. Therefore, the set can be refined by introducing a cost function which penalizes undesirable characteristics. For example, the set given in (12) contains some degrees of freedom: the matrix square root and the value of W_0. The analysis in Appendix II shows that W_0 affects the fourth and higher moments of the sigma point set. Although the fourth-order moments cannot be matched precisely, W_0 can be chosen to minimize the errors.

In summary, the general sigma point selection algorithm is

    min_{X, W} c(X, W)   subject to   g_i(X, W) = 0    (13)

where c(·) is the cost function and the g_i(·) are the moment-matching constraints.

Two possible uses of this approach are illustrated in Appendixes III and IV. Appendix III shows how the approach can be used to generate a sigma point set that contains the minimal number of sigma points needed to capture mean and covariance (n + 1 points). Appendix IV generates a set of points that matches the first four moments of a Gaussian exactly. This set cannot precisely match the sixth and higher order moments, but the points are chosen to minimize these errors.

Lerner recently analyzed the problem of stochastic numerical integration using exact monomials [31]. Assuming the distribution is symmetric, he gives a general selection rule which, for precision 3, gives the set of points in (12) and, for precision 5, gives the fourth-order set.

⁹Recently, Nørgaard developed the DD2 filtering algorithm, which is based on Stirling's interpolation formula [29]. He has proved that it can yield more accurate estimates than the UT with the symmetric set. However, this algorithm has not been generalized to higher order information.
