Joint Parameter and State Estimation of Noisy Discrete-Time Nonlinear Systems: A Supervisory Multi-Observer Approach

1y ago
12 Views
2 Downloads
818.69 KB
6 Pages
Last View : 17d ago
Last Download : 3m ago
Upload by : Mariam Herr
Transcription

2021 60th IEEE Conference on Decision and Control (CDC), December 13-15, 2021, Austin, Texas

Joint Parameter and State Estimation of Noisy Discrete-Time Nonlinear Systems: A Supervisory Multi-Observer Approach

T.J. Meijer, V.S. Dolk, M.S. Chong, R. Postoyan, B. de Jager, D. Nešić and W.P.M.H. Heemels

Abstract— This paper presents two schemes to jointly estimate parameters and states of discrete-time nonlinear systems in the presence of bounded disturbances and noise. The parameters are assumed to belong to a known compact set. Both schemes are based on sampling the parameter space and designing a state observer for each sample. A supervisor selects one of these observers at each time instant to produce the parameter and state estimates. In the first scheme, the parameter and state estimates are guaranteed to converge within a certain margin of their true values in finite time, assuming that a sufficiently large number of observers is used and a persistence of excitation condition is satisfied in addition to other observer design conditions. This convergence margin is constituted by a part that can be chosen arbitrarily small by the user and a part that is determined by the noise levels. The second scheme exploits the convergence properties of the parameter estimate to perform subsequent zoom-ins on the parameter subspace to achieve stricter margins for a given number of observers. The strengths of both schemes are demonstrated using a numerical example.

I. INTRODUCTION

Joint parameter and state estimation is a highly relevant problem in many applications, such as synchronization of digital twins with their physical counterparts, see, e.g., [1], and sensor or source localization (in distributed parameter systems), see, e.g., [2]-[4]. In many cases such combined estimation problems arise, even when the aim is to estimate only the parameters of a system, as a result of the full state either not being measurable and/or measurements being corrupted by noise.
A common approach to the joint parameter and state estimation problem is to augment the state with the parameters (and add constant parameter dynamics) and formulate it as a state estimation problem [5]. The state of the resulting system is then estimated using nonlinear state estimation algorithms, such as nonlinear Kalman filters or particle filters [5]; however, in general the underlying structure of the original model is lost, leading to a (highly) nonlinear state estimation problem. For example, the augmented-state approach turns joint estimation of an uncertain linear system with affine parameter dependencies into a bilinear state estimation problem. Following this path, it is typically difficult to provide convergence results [6]. Joint parameter and state estimation schemes that do provide analytical convergence results often apply only to specific classes of systems, see, e.g., [6]-[8].

1 Tomas Meijer, Michelle Chong, Bram de Jager and Maurice Heemels are with the Department of Mechanical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands. {t.j.meijer; m.s.t.chong; a.g.de.jager; m.heemels}@tue.nl
2 Victor Dolk is with ASML, De Run 6665, 5504 DT Veldhoven, The Netherlands. victor.dolk@asml.com
3 Romain Postoyan is with the Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France. romain.postoyan@univ-lorraine.fr
4 Dragan Nešić is with the Department of Electrical and Electronic Engineering, the University of Melbourne, Parkville, VIC 3010, Australia. dnesic@unimelb.edu.au
† Corresponding author: T.J. Meijer.
978-1-6654-3658-8/21/$31.00 ©2021 IEEE

Fig. 1. Supervisory observer scheme consisting of the multi-observer and the supervisor. Relevant notation is introduced in Section III. [Figure: the plant, driven by input u_k and noises v_k, w_k, feeds its output y_k to Observer 1 through Observer N; the observers' outputs (x̂_i,k, p̂_i,k, ŷ_i,k) are passed to the supervisor, which produces the selected index π_k and the estimates (x̂_k, p̂_k).]

In this paper, the joint parameter and state estimation problem for discrete-time nonlinear systems in the presence of bounded process and measurement noise is addressed in a different way.
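To make the drawback of the augmented-state route concrete: a minimal sketch (illustrative matrices, not from the paper) of augmenting a scalar-parameter linear system with constant parameter dynamics, showing how an affine dependence A(p) = A0 + p·A1 becomes bilinear in the augmented state z = (x, p):

```python
import numpy as np

# Illustrative only: augment the state of x_{k+1} = A(p) x_k with the
# constant parameter p. Even though A depends affinely on p, the augmented
# dynamics contain the product p * x, i.e., they are bilinear in z = (x, p),
# which is what makes the resulting state estimation problem nonlinear.

def augmented_step(z, A0, A1):
    """One step of z_{k+1} = F(z_k) with z = (x, p) and A(p) = A0 + p * A1."""
    x, p = z[:-1], z[-1]
    x_next = (A0 + p * A1) @ x   # bilinear term p * x
    return np.append(x_next, p)  # constant parameter dynamics p_{k+1} = p

A0 = np.array([[0.9, 0.1], [0.0, 0.8]])
A1 = np.array([[0.0, 0.0], [0.1, 0.0]])
z = np.array([1.0, -0.5, 2.0])   # x_0 = (1, -0.5), true p = 2
z = augmented_step(z, A0, A1)
print(z)
```

The parameter component of z is propagated unchanged, while the state component mixes x and p, which is why the structure of the original (linear) model is lost.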
We exploit a supervisory observer framework that was recently developed in [9] for continuous-time systems and without the consideration of disturbances and noise. It is assumed that the parameters are constant and belong to a known compact set, with no restriction on its "size". The so-called supervisory observer scheme, as depicted in Fig. 1, consists of (a) the multi-observer, a bank of multiple state observers, each designed for a parameter value sampled from the known parameter set, and (b) the supervisor, which at any given time instant selects one of the observers to provide the state and parameter estimates. Such multi-observer schemes have also proven useful for many other purposes, such as safeguarding systems against sensor attacks, see, e.g., [10], and adaptive control, see, e.g., [11]. An advantage of this sampling-based approach compared to the augmented state-space approach is that, for each parameter sample, the structure of the underlying system is preserved. This fact allows us to employ observers tailored to the specific model structure, which come with certain convergence guarantees and convenient (LMI-based) synthesis procedures, see, e.g., [12] for LPV systems or [13] for a class of nonlinear systems. The convergence properties of the individual state observers in the multi-observer are combined with a persistence of excitation (PE) condition to arrive at convergence guarantees for the supervisory observer. To be more concrete, the parameter and state estimates are guaranteed to converge within a certain margin of their true values, given that a sufficiently large number of observers is used. This sampling-based approach, which uses a static sampling policy, is rather simple to implement, but the number of samples (and, hence, the number of observers running in parallel) required to guarantee that the parameter error converges to within a given margin grows exponentially with the dimension of the parameter space. This inspired the development of a second scheme, which exploits the convergence result to iteratively zoom in by resampling from a shrinking subspace of the original parameter space. The resulting dynamic sampling policy is able to, for a given number of observers, guarantee tighter bounds on the parameter and state estimates. Alternatively, the dynamic scheme can be used to achieve a given margin of convergence using fewer observers than the static scheme.

The extension of the continuous-time results in [9] to discrete time is motivated by the fact that real-time implementation of any estimation algorithm requires discretization and that measurements become available at discrete time instances. Additionally, the discrete-time formulation enables parameter and state estimation of systems in feedback interconnection with a discrete-time control architecture such as model predictive control. The inclusion of process and measurement noise in the supervisory multi-observer framework is another major contribution, which was not addressed in [9], [14] and allows us to provide more realistic performance guarantees for the proposed estimator. However, it poses additional technical challenges, including distinguishing between the effects of noise and parameter errors on our state and output estimation errors. In fact, this is only possible to some extent and, unlike in the noiseless case, the parameter error cannot be made arbitrarily small by using sufficiently many observers. Moreover, the dynamic sampling policy has to take into account the noise levels when zooming in, requiring a careful analysis.
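A back-of-the-envelope comparison of the two policies for a scalar parameter in the noiseless case (the target margin, zooming factor and worst-case spacing used below are illustrative assumptions; the interval radius matches the case study's P = [1, 50] in Section V):

```python
import math

# With N equidistant samples on an interval of radius eps, the worst-case
# distance from the true parameter to the nearest sample is eps / N (half
# the spacing 2*eps/N). Hypothetical target margin nu and zoom factor alpha.

eps0, nu = 24.5, 0.05        # initial radius, desired parameter-error margin

# Static policy: need eps0 / N <= nu, so N grows like eps0 / nu
N_static = math.ceil(eps0 / nu)

# Dynamic policy: fix N = 10 observers and zoom until alpha^m * eps0 / N <= nu
N_dyn, alpha, m = 10, 0.8, 0
while (alpha ** m) * eps0 / N_dyn > nu:
    m += 1

print(N_static, m)   # observers needed statically vs. zoom-ins with N_dyn = 10
```

The static count scales linearly in 1/nu per parameter dimension (hence exponentially in the dimension), whereas the dynamic policy reaches the same margin with a fixed, small bank of observers after logarithmically many zoom-ins.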
The strength of our framework is demonstrated on a numerical case study in the presence of noise.

The content of the paper is organized as follows. The problem definition is given in Section II. Section III presents the discrete-time supervisory observer using a static sampling policy. In Section IV, the supervisory observer is adapted to utilize dynamic sampling. Finally, a numerical case study and conclusions are given in Sections V and VI. All proofs can be found in the full version of this paper [15].

Notation. Let ℝ := (-∞, ∞), ℝ_{≥0} := [0, ∞), ℝ_{>0} := (0, ∞), ℕ := {0, 1, ...}, ℕ_{[n,m]} := {n, n+1, ..., m} ⊂ ℕ for n, m ∈ ℕ and ℕ_{≥n} := {n, n+1, ...} ⊂ ℕ for n ∈ ℕ. Moreover, ‖·‖_p : ℝⁿ → ℝ_{≥0} with n ∈ ℕ_{≥1} denotes an arbitrary (but the same throughout the paper) p-norm on ℝⁿ, and we omit the subscript p in the following, i.e., ‖·‖ := ‖·‖_p. Let Bⁿ(ξ, r) := {x ∈ ℝⁿ : ‖x − ξ‖ ≤ r} for n ∈ ℕ_{≥1} represent the ball centered at ξ ∈ ℝⁿ of "radius" r ∈ ℝ_{≥0} and let Bⁿ_r := Bⁿ(0, r) denote such a set centered at the origin. For a sequence {x_k}_{k∈ℕ} with x_k ∈ ℝⁿ and n ∈ ℕ, we denote ‖{x_k}‖ := ‖{x_k}_{k∈ℕ}‖_p := sup_{k∈ℕ} ‖x_k‖_p, where the subscript p is again omitted for the sake of compactness. The space of all bounded sequences taking values in ℝⁿ with n ∈ ℕ is denoted ℓ_∞ := {{x_k}_{k∈ℕ} : ‖{x_k}‖ < ∞}. The notation (u, v) stands for [uᵀ vᵀ]ᵀ, where u ∈ ℝᵐ and v ∈ ℝⁿ with (n, m) ∈ ℕ². A continuous function α : ℝ_{≥0} → ℝ_{≥0} is a K-function (α ∈ K) if it is strictly increasing and α(0) = 0. If, in addition, α(r) → ∞ as r → ∞, then α is a K_∞-function (α ∈ K_∞). A continuous function β : ℝ_{≥0} × ℝ_{≥0} → ℝ_{≥0} is a KL-function (β ∈ KL) if β(·, s) ∈ K for each s ∈ ℝ_{≥0}, β(r, ·) is non-increasing and β(r, s) → 0 as s → ∞ for each r ∈ ℝ_{≥0}.

II. PROBLEM DEFINITION

Consider the discrete-time system given by

  x_{k+1} = f(x_k, p, u_k, v_k),   (1a)
  y_k = h(x_k, p, u_k, w_k),   (1b)

where x_k ∈ ℝ^{n_x}, u_k ∈ ℝ^{n_u} and y_k ∈ ℝ^{n_y} denote the state, input and output, respectively, at time instant k ∈ ℕ. In addition, the following assumptions are adopted.

Assumption 1.
The input u_k, process noise v_k and measurement noise w_k in (1) are bounded, i.e., there exist constants Δ_u, Δ_v, Δ_w ∈ ℝ_{≥0} such that for all k ∈ ℕ

  u_k ∈ B^{n_u}_{Δ_u},  v_k ∈ B^{n_v}_{Δ_v}  and  w_k ∈ B^{n_w}_{Δ_w}.   (2)

Assumption 2. The parameter vector p is constant and unknown and it belongs to a given compact set P, i.e., p ∈ P ⊂ ℝ^{n_p}.

Assumption 1 means that {u_k}_{k∈ℕ}, {v_k}_{k∈ℕ}, {w_k}_{k∈ℕ} ∈ ℓ_∞, which is a reasonable assumption in practice. It should be noted that Δ_v and Δ_w in Assumption 1 do not need to be known to implement the estimation schemes; their existence alone is sufficient. The input u_k and output y_k are known/measured, while the full state x_k, process noise v_k and measurement noise w_k are unknown. Moreover, the functions f : ℝ^{n_x} × P × ℝ^{n_u} × ℝ^{n_v} → ℝ^{n_x} and h : ℝ^{n_x} × P × ℝ^{n_u} × ℝ^{n_w} → ℝ^{n_y} are given and h is locally Lipschitz continuous. For any initial condition x_0 ∈ ℝ^{n_x}, input sequence {u_k}_{k∈ℕ} with u_k ∈ ℝ^{n_u}, process noise sequence {v_k}_{k∈ℕ} with v_k ∈ ℝ^{n_v}, measurement noise sequence {w_k}_{k∈ℕ} with w_k ∈ ℝ^{n_w} for k ∈ ℕ and parameters p ∈ P, the system (1) admits a unique solution defined for all k ∈ ℕ. Finally, the following assumption is adopted.

Assumption 3. The solutions to (1) are uniformly bounded, i.e., for all Δ_x, Δ_u, Δ_v, Δ_w ∈ ℝ_{≥0}, there exists a constant K_x = K_x(Δ_x, Δ_u, Δ_v, Δ_w) ∈ ℝ_{≥0} such that for all x_0 ∈ B^{n_x}_{Δ_x}, {u_k}_{k∈ℕ} with u_k ∈ B^{n_u}_{Δ_u}, {v_k}_{k∈ℕ} with v_k ∈ B^{n_v}_{Δ_v} and {w_k}_{k∈ℕ} with w_k ∈ B^{n_w}_{Δ_w} for any k ∈ ℕ, it holds that x_k ∈ B^{n_x}_{K_x} for all k ∈ ℕ.

The bound K_x in Assumption 3 does not need to be known to implement the proposed estimation algorithms; only its existence has to be ensured.

Our objective is to jointly estimate the parameter vector p and the state of the system (1) (within certain margins) subject to bounded process noise v_k and measurement noise w_k, given the input u_k and the measured output y_k.

III. SUPERVISORY OBSERVER: STATIC SAMPLING POLICY

The parameter and state estimation schemes presented in this paper consist of two subsystems, as shown in Fig.
1. The first subsystem is the so-called multi-observer, which is a collection of observers that operate in parallel, where each observer is designed for a different parameter vector sampled from the parameter space. The second subsystem is a supervisor. The outputs of the observers are fed to the supervisor, which selects one of the observers based on a selection criterion and outputs its state estimate and corresponding parameter sample as the estimates produced by the overall estimation scheme. In this section, the parameter samples are obtained using a static sampling policy, meaning that these samples are fixed for all times. Later, in Section IV, we consider a dynamic sampling policy, which aims to reduce the computational complexity of the estimation scheme.

A. Multi-observer

The parameter space P is sampled to produce N parameter samples p̂_i ∈ P for i ∈ N := ℕ_{[1,N]}. This sampling is performed in such a way that the maximum distance of the true parameter to the nearest sample tends to zero as N tends to infinity, i.e.,

  lim_{N→∞} max_{p∈P} min_{i∈N} ‖p̂_i − p‖ = 0.   (3)

This can be ensured, for instance, by employing a uniform sampling of the parameter space. For each p̂_i, i ∈ N, a state observer is designed, given by

  x̂_{i,k+1} = f̂(x̂_{i,k}, p̂_i, u_k, y_k),   (4a)
  ŷ_{i,k} = h(x̂_{i,k}, p̂_i, u_k, 0),   (4b)

where x̂_{i,k} ∈ ℝ^{n_x} and ŷ_{i,k} ∈ ℝ^{n_y} denote, respectively, the state and output estimate of the i-th observer at time k ∈ ℕ. The function f̂ : ℝ^{n_x} × {p̂_i}_{i∈N} × ℝ^{n_u} × ℝ^{n_y} → ℝ^{n_x} is well-designed such that the solutions to (4) are defined for all time k ∈ ℕ, any initial condition x̂_{i,0} ∈ ℝ^{n_x}, input sequence {u_k}_{k∈ℕ}, output sequence {y_k}_{k∈ℕ} and parameter sample p̂_i ∈ P, i ∈ N.

Let x̃_{i,k} := x̂_{i,k} − x_k denote the state estimation error, ỹ_{i,k} := ŷ_{i,k} − y_k the output estimation error and p̃_i := p̂_i − p the parameter estimation error of the i-th observer. Since P is compact, there exists a compact set D ⊂ ℝ^{n_p} such that p̃_i ∈ D for any p, p̂_i ∈ P. The state and output estimation errors are governed by

  x̃_{i,k+1} = F(x̃_{i,k}, x_k, p̃_i, p, u_k, v_k, w_k),   (5a)
  ỹ_{i,k} = H(x̃_{i,k}, x_k, p̃_i, p, u_k, w_k),   (5b)

where the functions F : ℝ^{n_x} × ℝ^{n_x} × D × P × ℝ^{n_u} × ℝ^{n_v} × ℝ^{n_w} → ℝ^{n_x} and H : ℝ^{n_x} × ℝ^{n_x} × D × P × ℝ^{n_u} × ℝ^{n_w} → ℝ^{n_y} are given by F(x̃, x, p̃, p, u, v, w) = f̂(x̃ + x, p̃ + p, u, h(x, p, u, w)) − f(x, p, u, v) and H(x̃, x, p̃, p, u, w) = h(x̃ + x, p̃ + p, u, 0) − h(x, p, u, w). The observers (4) are assumed to be robust with respect to the parameter error and noise in the following sense.

Assumption 4. There exist functions α_1, α_2, α_3 ∈ K_∞ and a continuous non-negative function σ : D × ℝ^{n_v} × ℝ^{n_w} × ℝ^{n_x} × ℝ^{n_u} → ℝ_{≥0} with σ(0, 0, 0, x, u) = 0 for all x ∈ ℝ^{n_x} and u ∈ ℝ^{n_u} such that there exists a function V : P × ℝ^{n_x} → ℝ_{≥0}, which satisfies, for all x̃, x ∈ ℝ^{n_x}, p, p̂ ∈ P, u ∈ ℝ^{n_u}, v ∈ ℝ^{n_v} and w ∈ ℝ^{n_w}, that V(p̂, ·) is continuous and

  α_1(‖x̃‖) ≤ V(p̂, x̃) ≤ α_2(‖x̃‖),   (6a)
  V(p̂, x̃⁺) ≤ V(p̂, x̃) − α_3(‖x̃‖) + σ(p̃, v, w, x, u),   (6b)

for x̃⁺ = F(x̃, x, p̃, p, u, v, w).

Assumption 4 implies that the error systems (5) corresponding to the observers in (4) are locally input-to-state stable (ISS) with respect to p̃_i, v_k and w_k [16]. For linear uncertain systems, Luenberger observers satisfy Assumption 4 and, in Section V, it is shown that a class of circle-criterion-based nonlinear observers also satisfies this assumption.

B. Supervisor

At every time k ∈ ℕ, the supervisor selects one observer from the multi-observer. To be able to assess the accuracy of the different observers, the supervisor computes a monitoring signal for each observer, which, for i ∈ N, is given by

  µ_{i,k} = Σ_{j=0}^{k−1} λ^{k−1−j} ‖ỹ_{i,j}‖²,   k ∈ ℕ,   (7)

where λ ∈ [0, 1) is a design parameter. The i-th monitoring signal (7) can be implemented using the difference equation

  µ_{i,k+1} = λµ_{i,k} + ‖ỹ_{i,k}‖²,   k ∈ ℕ,   (8)

with the initial condition µ_{i,0} = 0. The output errors of the state observers are assumed to satisfy the following PE condition.

Assumption 5.
For any Δ_x̃, Δ_x, Δ_u, Δ_v, Δ_w > 0, there exist a function α_ỹ ∈ K_∞ and an integer N_pe = N_pe(Δ_x̃, Δ_x, Δ_u, Δ_v, Δ_w) ∈ ℕ_{≥1} such that for all p̃_i ∈ D, i ∈ N, x̃_{i,0} ∈ B^{n_x}_{Δ_x̃}, p ∈ P, x_0 ∈ B^{n_x}_{Δ_x}, {v_k}_{k∈ℕ} with v_k ∈ B^{n_v}_{Δ_v} and {w_k}_{k∈ℕ} with w_k ∈ B^{n_w}_{Δ_w} and for some input sequence {u_k}_{k∈ℕ} with u_k ∈ B^{n_u}_{Δ_u} for k ∈ ℕ, the solutions to (1) and (5) satisfy

  Σ_{j=k−N_pe}^{k−1} ‖ỹ_{i,j}‖² ≥ α_ỹ(‖p̃_i‖),   k ∈ ℕ_{≥N_pe}.   (9)

Assumption 5 differs from the classical PE condition, see, e.g., [17], in that it considers solutions to (5b) parametrized by p̃_i and requires the sum in (9) to grow with the norm of the parameter error. This ensures that the supervisor is able to infer quantitative information about the parameter estimation error of each state observer based on its monitoring signal.

At every time instant k ∈ ℕ, the supervisor selects (one of) the observer(s) with the smallest monitoring signal to obtain the estimates of p and x_k. In the event that min_{i∈N} µ_{i,k} is not unique, any observer from this subset can be chosen, resulting in a selection criterion where the index of the selected observer π_k : ℕ → N satisfies

  π_k ∈ arg min_{i∈N} µ_{i,k},   k ∈ ℕ.   (10)

The resulting parameter estimate, state estimate and state estimation error at time k ∈ ℕ, denoted p̂_k, x̂_k and x̃_k, respectively, are defined using π_k as

  p̂_k := p̂_{π_k},  x̂_k := x̂_{π_k,k}  and  x̃_k := x̃_{π_k,k}.   (11)

C. Convergence guarantees

The parameter and state estimates (11) converge to within certain margins of their true values p and x_k, as stated in the following theorem.

Theorem 1. Consider the system (1), the multi-observer (4), the monitoring signals (7), the selection criterion (10), and the parameter estimate, state estimate and state estimation error in (11). Suppose Assumptions 1-5 hold. For any Δ_x̃, Δ_x, Δ_u, Δ_v, Δ_w ∈ ℝ_{≥0} and any margins ν_p̃, ν_x̃ ∈ ℝ_{>0}, there exist functions υ_p̃, υ_x̃, ω_p̃, ω_x̃ ∈ K_∞, a constant K_x̃ ∈ ℝ_{≥0} and sufficiently large integers N* ∈ ℕ_{≥1} and M ∈ ℕ_{≥1} such that for any N ∈ ℕ_{≥N*}, it holds for any x̃_{i,0} ∈ B^{n_x}_{Δ_x̃}, i ∈ N, p ∈ P, x_0 ∈ B^{n_x}_{Δ_x}, {v_k}_{k∈ℕ} with v_k ∈ B^{n_v}_{Δ_v}, {w_k}_{k∈ℕ} with w_k ∈ B^{n_w}_{Δ_w} and for some input sequence {u_k}_{k∈ℕ} with u_k ∈ B^{n_u}_{Δ_u} for k ∈ ℕ, which satisfies Assumption 5, that x̃_{i,k} ∈ B^{n_x}_{K_x̃} for all k ∈ ℕ, i ∈ N, and

  ‖p̃_{π_k}‖ ≤ ν_p̃ + υ_p̃(‖{v_j}‖) + ω_p̃(‖{w_j}‖),  k ∈ ℕ_{≥M},
  lim sup_{k→∞} ‖x̃_k‖ ≤ ν_x̃ + υ_x̃(‖{v_j}‖) + ω_x̃(‖{w_j}‖).   (12)

In the noiseless case, i.e., v_k = w_k = 0, the convergence margins can be made arbitrarily small since ν_p̃ and ν_x̃ can be made arbitrarily small by using sufficiently many observers. However, this is impossible in the presence of noise due to the terms in (12) depending on ‖{v_k}‖ and ‖{w_k}‖.

IV. SUPERVISORY OBSERVER: DYNAMIC SAMPLING POLICY

In this section we develop a dynamic sampling policy for joint parameter and state estimation of (1). As stated in Theorem 1, when using a sufficiently large number of observers N, the parameter estimate converges to a given margin within a finite time. We exploit this result in the dynamic sampling policy to iteratively zoom in on the parameter subspace defined by the aforementioned margins through resampling. As a result, stricter bounds on the parameter and state estimates can be guaranteed compared to the static sampling policy for a given number of observers.

A. Dynamic sampling policy

Since the parameter set P is compact, there exist p_c ∈ ℝ^{n_p} and ε_0 ∈ ℝ_{≥0} such that

  P ⊆ B^{n_p}(p_c, ε_0).   (13)

Let ν ∈ ℝ_{>0} denote the desired bound on the parameter error, which either represents the required bound on the parameter error to guarantee asymptotic convergence of the state estimation error to within a desired margin or a desired bound imposed directly on the parameter estimation error. We also introduce a design parameter α ∈ (0, 1), the so-called zooming factor, which determines the rate at which the considered parameter set shrinks.
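The supervisor machinery that the dynamic policy reuses between zoom-ins, i.e., the exponentially discounted monitoring signals (7)-(8) and the selection rule (10), can be sketched as follows (the output-error sequences and their noise scales below are invented for illustration):

```python
import numpy as np

def update_monitoring(mu, y_err, lam=0.995):
    """One step of the recursion (8): mu_{i,k+1} = lam*mu_{i,k} + ||ytilde_{i,k}||^2."""
    return lam * mu + np.sum(y_err**2, axis=1)

rng = np.random.default_rng(1)
N, ny = 3, 2                          # toy multi-observer with N = 3 observers
mu = np.zeros(N)                      # mu_{i,0} = 0
# hypothetical output-error magnitudes: observer 0 is the most accurate
scales = np.array([[0.01], [0.1], [0.2]])
for k in range(200):
    y_err = scales * rng.standard_normal((N, ny))
    mu = update_monitoring(mu, y_err)

pi_k = int(np.argmin(mu))             # selection criterion (10)
print(pi_k)
```

Because the monitoring signal discounts old errors geometrically, the argmin tracks the observer whose recent output error is smallest, which under the PE condition (9) reflects the smallest parameter error.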
The dynamic sampling policy is initialized at k = k_0 = 0 by sampling P_0 := P, using a sampling scheme which satisfies (3), to obtain N ∈ ℕ_{≥1} parameter samples p̂_{i,0}, i ∈ N. Here, N is chosen sufficiently large such that, by Theorem 1, it holds for sufficiently large M ∈ ℕ_{≥1} that ‖p̂_{π_k,0} − p‖ ≤ C for all k ∈ ℕ_{≥M} with

  C := max{ν, αε_0} + υ_p̃(‖{v_j}‖) + ω_p̃(‖{w_j}‖).   (14)

As a consequence, for k ∈ ℕ_{≥M}, either the desired margin is achieved or p ∈ P_1 := B^{n_p}(p̂_{π_k,0}, αε_0 + υ_p̃(‖{v_j}‖) + ω_p̃(‖{w_j}‖)) ∩ P_0. Both cases cannot be distinguished on-line since the true parameter is unknown. Therefore, at k = k_1 with k_1 ∈ ℕ_{≥M}, even if the desired margin has already been achieved, the set P_1 is sampled to obtain N new samples {p̂_{i,1}}_{i∈N}. This procedure is performed iteratively at every k_m, m ∈ ℕ, with

  M_d := k_{m+1} − k_m,   (15)

where M_d ∈ ℕ_{≥max{1,M}} denotes the number of time steps between subsequent zoom-ins. The shrinking parameter subset P_m, m ∈ ℕ, is defined recursively by

  P_{m+1} := D_m ∩ P_m   (16)

with P_0 = P, ε_m := αε_{m−1} = α^m ε_0 and D_m := B^{n_p}(p̂_{π_{k_{m+1}},m}, ε_{m+1} + υ_p̃(‖{v_j}‖) + ω_p̃(‖{w_j}‖)). The spaces P_m, m ∈ ℕ, are sampled in such a way that

  max_{p∈P_m} min_{i∈N} ‖p̂_{i,m} − p‖ ≤ ρ(ε_m, N)   (17)

with ρ ∈ KL and where {p̂_{i,m}}_{i∈N} denote the obtained samples. The corresponding parameter errors are denoted p̃_{i,m} := p̂_{i,m} − p, i ∈ N and m ∈ ℕ. It is worth mentioning that once the desired margin is achieved the algorithm still keeps zooming in, and it can occur that, after zooming in a certain number of times, the subset that is being sampled no longer contains the true parameter. Regardless, the true parameter still lies within the desired margin of the selected parameter estimate and the convergence guarantees provided in this section remain valid.

The dynamic sampling policy is incorporated into the multi-observer by designing state observers for each parameter sample p̂_{i,m}, i ∈ N, for the time instants k ∈ ℕ_{[k_m, k_{m+1}−1]}, m ∈ ℕ. The i-th state observer is given by

  x̂_{i,k+1} = f̂(x̂_{i,k}, p̂_{i,m}, u_k, y_k),   (18a)
  ŷ_{i,k} = h(x̂_{i,k}, p̂_{i,m}, u_k, 0),   (18b)

for k ∈ ℕ_{[k_m, k_{m+1}−1]}, m ∈ ℕ.
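A sketch of the zoom-in geometry above for a scalar parameter, ignoring the noise-dependent terms υ_p̃(‖{v_j}‖) + ω_p̃(‖{w_j}‖) for clarity; the true parameter value 17.0, the equidistant resampling rule and the nearest-sample stand-in for the supervisor's selection (10) are illustrative assumptions:

```python
# Simplified, noiseless zoom-in: at each resampling instant k_m the radius
# shrinks as eps_m = alpha^m * eps_0 and the new set is centered at the
# currently selected sample (cf. (16), with the noise terms dropped).

def zoom(eps, p_hat_selected, alpha):
    """One zoom-in: new center is the selected sample, radius shrinks by alpha."""
    return p_hat_selected, alpha * eps

def resample(center, eps, N):
    """N equidistant samples on the interval [center - eps, center + eps]."""
    step = 2 * eps / N
    return [center - eps + step / 2 + i * step for i in range(N)]

center, eps, alpha, N = 25.5, 24.5, 0.8, 10   # P = [1, 50] as in Section V
p_true = 17.0                                  # hypothetical true parameter
for m in range(5):
    samples = resample(center, eps, N)
    selected = min(samples, key=lambda s: abs(s - p_true))  # stand-in for (10)
    center, eps = zoom(eps, selected, alpha)

print(center, eps)
```

After m zoom-ins the sampled set has radius alpha^m * eps_0 while remaining (in the noiseless idealization) centered near the true parameter, which is why a fixed bank of N observers achieves ever tighter margins.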
Here, f̂ : ℝ^{n_x} × {p̂_{i,m}}_{i∈N, m∈ℕ} × ℝ^{n_u} × ℝ^{n_y} → ℝ^{n_x} is well-designed such that the solutions to (18) are defined for all k ∈ ℕ and any initial condition x̂_{i,0} ∈ ℝ^{n_x}, input sequence {u_k}_{k∈ℕ}, output sequence {y_k}_{k∈ℕ} and parameters p̂_{i,m} ∈ P, i ∈ N and m ∈ ℕ. We assume these observers satisfy Assumption 4.

The dynamic sampling policy requires the monitoring signals used by the supervisor to be redefined. The redefined monitoring signals are reset upon resampling, i.e.,

  µ_{i,k+1} = ‖ỹ_{i,k}‖²,             k ∈ {k_m}_{m∈ℕ},
  µ_{i,k+1} = λµ_{i,k} + ‖ỹ_{i,k}‖²,  k ∈ ℕ \ {k_m}_{m∈ℕ},   (19)

for i ∈ N, with λ ∈ [0, 1). As before, the supervisor selects an observer from the multi-observer (18) using the signal π_k as defined in (10). The definitions of the state estimate x̂_k and corresponding error x̃_k in (11) are unchanged; however, the parameter estimate and corresponding error are redefined as

  p̂_k := p̂_{π_k,m}  and  p̃_k := p̃_{π_k,m},  for k ∈ ℕ_{[k_m+1, k_{m+1}]}.   (20)

B. Convergence guarantees

The parameter and state estimates produced by the supervisory observer using a dynamic sampling scheme satisfy similar convergence guarantees as in the static sampling case. This is stated in the following theorem.

Theorem 2. Consider the system (1), the multi-observer (18), the monitoring signals (19), the selection criterion (10), the parameter estimate and corresponding error (20), the state estimate and corresponding error in (11) and the dynamic sampling policy (15)-(16). Suppose Assumptions 1-5 hold. For any Δ_x̃, Δ_x, Δ_u, Δ_v, Δ_w ∈ ℝ_{≥0}, any margins ν_p̃, ν_x̃ ∈ ℝ_{>0} and zooming factor α ∈ (0, 1), there exist functions υ_p̃, υ_x̃, ω_p̃, ω_x̃ ∈ K_∞, a scalar K_x̃ ∈ ℝ_{≥0} and sufficiently large integers M* ∈ ℕ_{≥1}, M̄ ∈ ℕ_{≥1} and N* ∈ ℕ_{≥1} such that for any N ∈ ℕ_{≥N*} and M_d ∈ ℕ_{≥M*}, it holds for any x̃_{i,0} ∈ B^{n_x}_{Δ_x̃}, i ∈ N, p ∈ P, x_0 ∈ B^{n_x}_{Δ_x}, {v_k}_{k∈ℕ} with v_k ∈ B^{n_v}_{Δ_v}, {w_k}_{k∈ℕ} with w_k ∈ B^{n_w}_{Δ_w} and for some input sequence {u_k}_{k∈ℕ} with u_k ∈ B^{n_u}_{Δ_u} for k ∈ ℕ, which satisfies Assumption 5, that x̃_{i,k} ∈ B^{n_x}_{K_x̃} for all k ∈ ℕ, i ∈ N, and

  ‖p̃_k‖ ≤ ν_p̃ + υ_p̃(‖{v_j}‖) + ω_p̃(‖{w_j}‖),  k ∈ ℕ_{≥M̄},
  lim sup_{k→∞} ‖x̃_k‖ ≤ ν_x̃ + υ_x̃(‖{v_j}‖) + ω_x̃(‖{w_j}‖).   (21)

Theorem 2 ensures the same guarantees as Theorem 1, but it typically requires fewer observers to do so using the dynamic sampling policy, as will be illustrated in Section V.

Remark 1. The dynamic sampling policy in this paper uses a fixed number of samples; however, an alternative policy using the DIRECT algorithm, which adds samples on-line, is proposed for the continuous-time setting in [14]. This eliminates the need to estimate the required number of observers a-priori, which can be challenging.

V. CASE STUDY

In this section, we apply the results of Theorems 1 and 2 to estimate the parameters and states of an example within the class of nonlinear systems given by

  x_{k+1} = A(p)x_k + G(p)φ(Hx_k) + B(p)(u_k + v_k),   (22a)
  y_k = Cx_k + w_k,   (22b)

where x_k ∈ ℝ^{n_x}, u_k ∈ ℝ^{n_u}, v_k ∈ ℝ^{n_v}, w_k ∈ ℝ^{n_w} and p ∈ ℝ^{n_p}. Suppose Assumptions 1-3 hold and A(p), B(p) and G(p) are continuous in p on P. The nonlinearity φ : ℝ^{n_φ} → ℝ^{n_φ} is such that φ(v) = (φ_1(v_1), ..., φ_{n_φ}(v_{n_φ})) for v = (v_1, ..., v_{n_φ}) ∈ ℝ^{n_φ} and there exist constants ℓ_i ∈ ℝ_{≥0}, i ∈ ℕ_{[1,n_φ]}, such that, for all v ∈ ℝ, we have

  0 ≤ φ_i(v)/v ≤ ℓ_i.   (23)

For p̂ ∈ P, a state observer of the form [13], [18]

  x̂_{k+1} = A(p̂)x̂_k + G(p̂)φ(Hx̂_k + K(p̂)(Cx̂_k − y_k)) + B(p̂)u_k + L(p̂)(Cx̂_k − y_k),   (24a)
  ŷ_k = Cx̂_k,   (24b)

is designed by synthesizing observer matrices K(p̂) and L(p̂) such that the following proposition applies.

Proposition 1. Consider the system (22) and state observer (24). Suppose there exist P = Pᵀ ∈ ℝ^{n_x×n_x}, M = diag(m_1, ..., m_{n_φ}) with m_i ∈ ℝ_{>0}, i ∈ ℕ_{[1,n_φ]}, and κ_x̃, κ_v, κ_w ∈ ℝ_{≥0}, such that P ≻ 0 and, for all p, p̂ ∈ P, the matrix inequality (25) holds, where (25) is a block LMI formulated in terms of P, Ā(p̂) := A(p̂) + L(p̂)C, H̄(p̂) := H + K(p̂)C, G(p̂), B(p), L(p̂), K(p̂), M, Λ := diag(ℓ_1, ..., ℓ_{n_φ}) and the scalars κ_x̃, κ_v, κ_w (see the full version [15] for the expression). Then Assumption 4 is satisfied.

The condition in (25) represents infinitely many linear matrix inequalities (LMIs) in P, PL(p̂), M, MK(p̂), κ_x̃, κ_v and κ_w, due to its dependence on p̂ and p. In order to solve (25), it either needs to be discretized or, as we will see in our case study, sometimes structure can be exploited to reduce (25) to a finite number of LMIs. If Proposition 1 and Assumption 5 apply, then Theorems 1 and 2 hold, respectively, when the static or dynamic sampling policy is used.

Consider the system (22) with the system matrices given in (26), with C = I, which are obtained by discretizing a continuous-time system, see [13], with sampling time T_s = 0.01. The nonlinearity in (22) is given by

  φ(v) = v + sin(v),   (27)

which satisfies (23) with ℓ_1 = 2. Moreover, the parameter p belongs to P := [1, 50]. This example is a variation on [13, Example 1] and [18, Example 1], where we included process and measurement noise and an additional parameter dependency in B(p). Notice that the system matrices (26) all depend affinely on the unknown parameter. If we restrict the observer matrices L(p̂) and K(p̂) to also be affine in p̂, i.e.,

  L(p̂) = L_0 + p̂L_1  and  K(p̂) = K_0 + p̂K_1,   (28)

with L_i ∈ ℝ^{n_x×n_y} and K_i ∈ ℝ^{n_φ×n_y}, i ∈ {0, 1}, the LMI in (25) becomes affine in (p, p̂) ∈ P × P. Since P × P is convex, the condition (25) is satisfied for all p, p̂ ∈ P if and only if it is satisfied at each of the (2^{n_p})² = 4 vertices [19]. We set κ_x̃ = 0.1 and minimize κ_v + 5κ_w subject to (25), for all p, p̂ ∈ {1, 50}, by means of the MATLAB toolbox YALMIP [20] together with the external solver MOSEK [21].
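The sector condition (23) for the case-study nonlinearity (27) can be checked numerically: since φ'(v) = 1 + cos(v) ∈ [0, 2] and φ(0) = 0, the ratio φ(v)/v should stay in [0, 2] (the grid below is an arbitrary illustrative choice):

```python
import numpy as np

# Sector check for phi(v) = v + sin(v), the nonlinearity (27):
# 0 <= phi(v)/v <= 2 for all v != 0, i.e., (23) holds with l_1 = 2.

phi = lambda v: v + np.sin(v)

# phi(v)/v is an even function, so a positive grid suffices
v = np.linspace(1e-6, 50.0, 100001)
ratio = phi(v) / v

print(ratio.min(), ratio.max())
```

The ratio approaches the upper bound 2 as v tends to 0 (where sin(v) ≈ v) and never leaves [0, 2], which is exactly what the circle-criterion observer design in (24)-(25) exploits.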
Restricting ourselves to a Lyapunov function which is independent of p̂ and to affine observer matrices (28) introduces conservatism compared to, for instance, sampling the parameter space and then solving the LMIs. However, it has the advantage that resampling in the dynamic sampling policy is computationally efficient, as it only requires evaluating (28) without solving LMIs on-line.

Both the static and dynamic sampling policies are implemented using N = 10 equidistant parameter samples, i.e., p̂_i = p̲ + (p̄ − p̲)/(2N) + ((p̄ − p̲)/N)(i − 1), where p̲ and p̄ denote the extrema of the set that is currently being sampled (for the dynamic scheme p̲ and p̄ will move closer together over time). For this sampling scheme, we can guarantee (17) with ρ(ε_m, N) = ε_m/N, as the distance between the true parameter and the nearest sample never exceeds half the distance between neighbouring samples. This also guarantees (3) for the static sampling since ρ(ε_0, N) → 0 as N → ∞. We simulate both the static and the dynamic schemes with design parameters λ = 0.995, N_pe = M_d = 1·10³ (which corresponds to 10 seconds) and α = 0.8. The resulting parameter estimate and norm of the state error are shown in Fig. 2, together with the shrinking parameter set and estimated noise level υ_p̃(Δ_v) + ω_p̃(Δ_w) with Δ_v = Δ_w = 0.01.

Figure 2 shows that for both schemes the parameter estimates as well as the state estimate converge within a certain margin of their true value. As can be seen in Fig. 2, the first resampling occurs after 10 seconds, which causes the parameter estimate to jump. The spikes in the parameter estimation error at the switching instances are a result of the monitoring signals being reset, which may cause the supervisor to select a "suboptimal" observer temporarily. Figure 2 also shows that the estimates do not necessarily become more accurate after individual zoom-ins, which is explained by the fact that if one parameter sample happens to be very accurate, it is not necessarily preserved during the resampling. It should be noted that the number of observers N = 10 used here is significantly less than the theoretical estimates. To be more specific, our estimates dictate

Fig. 2. Parameter estimate (top) and norm of the state estimation error (bottom) using the static (dashed) and dynamic sampling policy (solid). The transparent regions indicate the set being sampled by the dynamic sampling policy. The dotted black line indicates the true parameter and the dashed black lines are the noise levels. [Figure: top panel, parameter estimates over 0-100 s converging toward the true parameter; bottom panel, state estimation error norms for the static and dynamic schemes.]

number of observers a-priori and can be extended to the noisy case to overcome one of these drawbacks. Obtaining non-conservative estimates of the contribution of the noise on the convergence margins is another important research topic. Extending the framework to stochastic noise assumptions may improve performance at the cost of not having worst-case convergence guarantees. Finally, allowing for slowly time-varying parameters, such as in [22], in our estimation framework is an interesting future research direction.
