Adaptive Predictive Controllers Using a Growing and Pruning RBF Neural Network

Iran. J. Chem. Chem. Eng., Vol. 30, No. 2, 2011

Adaptive Predictive Controllers Using a Growing and Pruning RBF Neural Network

Jafari, Mohammad Reza; Salahshoor, Karim*
Department of Automation and Instrumentation, Petroleum University of Technology, Tehran, I.R. IRAN

ABSTRACT: An adaptive version of a growing and pruning RBF neural network has been used to predict the system output and implement Linear Model-Based Predictive Controller (LMPC) and Non-linear Model-Based Predictive Controller (NMPC) strategies. A radial-basis neural network with growing and pruning capabilities is introduced to carry out on-line model identification. An Unscented Kalman Filter (UKF) algorithm with an exponential time-varying forgetting factor has been presented to enable the neural network model to track any time-varying process dynamic changes. An adaptive NMPC has been designed based on the sequential quadratic programming technique. The paper makes use of a dynamic linearization approach to extract a linear model at each sampling time instant so as to develop an adaptive LMPC. The servo and regulatory performances of the proposed adaptive control schemes have been illustrated on a non-linear Continuous Stirred Tank Reactor (CSTR) as a benchmark problem. The simulation results demonstrate the capability of the proposed identification strategy to effectively identify a compact, accurate and transparent model for the CSTR process. It is shown that the proposed adaptive NMPC controller achieves better performance, with faster response times, for both servo and regulatory control objectives in comparison with the proposed adaptive LMPC, an adaptive generalized predictive controller based on the Recursive Least Squares (RLS) algorithm, and well-tuned PID controllers.

KEY WORDS: Neural networks, On-line identification, Adaptive control, Model-based predictive control, CSTR.

INTRODUCTION
Model Predictive Control (MPC) approaches have been recognized as the accepted standard to cope with some of the difficult control problems in the process industry [1,2]. Their ability to handle input and output constraints, time delays, non-minimum phase behaviour and multivariable systems has made them very attractive to industrial users. The core of all MPC algorithms is the moving horizon strategy. An identified process model is used to predict the future response and then the control action is optimally determined so as to obtain the desired performance over a finite time horizon. Thus, the choice of process model representation is a crucial and important issue in MPC. Most predictive controllers still use an explicit linear model of the process to be controlled. In practice, however, most industrial processes possess severe non-linear dynamics.

* To whom correspondence should be addressed. E-mail: salahshoor@put.ac.ir

This makes a linear controller less effective, or even detrimental, when the process operates over a wide range of operating conditions, leading to time-varying model structures and parameters in an unknown manner. Adaptive control strategy is an interesting idea to cope with such difficult model uncertainties by providing the ability to track variations in process dynamics. Most adaptive control techniques, however, are based upon a fixed linear process model structure whose parameters are only free to be tuned to any possible process dynamic changes. This approach may have limited success because any structural dynamic changes must be accommodated solely through tuning of the model parameters.

Neural Networks (NN) have been shown to have good approximation capability for modelling non-linear systems. A large number of predictive control schemes have been developed based on multi-layer neural network models since 1990. Classical neural networks have been used for identification tasks [3-5], often as part of adaptive predictive control schemes [6,7]. However, there is no general procedure to choose the required number of layers and neurons to achieve an accurate approximation in a given control problem. The usual practice is to use enough neurons to capture the complexity of the underlying process dynamics without having the NN overfit the training data. For non-linear black-box identification, however, there is no guarantee that a fixed number of assumed neurons can cover the process operating range. Radial Basis Function (RBF) neural networks have been popular in many control applications in recent years. This is due to their ability to approximate complex non-linear mappings directly from input-output data with a simple topological dynamic structure. Combining this network with self-generating network algorithms offers an attractive approach to build an efficient adaptive neural network which can adjust its dynamic structure complexity to varying non-linear process dynamics without requiring prior knowledge. Several self-generating network algorithms, such as the Resource Allocation Network (RAN) and minimum RAN, have been proposed in the literature for training RBF neural networks [8,9]. In recent years, different methods [10-12] have been proposed in which sequential learning algorithms are used. Huang et al. [13] proposed a simple sequential learning algorithm with network Growing And Pruning (GAP) capabilities, based on the relationship between the significance of a neuron and the required model accuracy for RBF networks, referred to as GAP-RBF.

The original GAP-RBF algorithm has been modified in this paper to enhance its performance for on-line identification of non-linear systems. The new modified GAP-RBF (MGAP-RBF) neural network is used as a generic model for on-line identification of non-linear systems. The Unscented Kalman Filter (UKF) estimation algorithm has been introduced as a new learning algorithm to recursively update the free parameters of the MGAP-RBF neural network.
An exponential forgetting factor scheme has been included in the UKF algorithm to enable its tracking feature against any possible time-varying system dynamic change. This paper proposes two indirect adaptive predictive controllers based on the MGAP-RBF neural network. An adaptive Non-linear MPC (NMPC) has been developed without restricting the process model to linear dynamics. The second proposed predictive controller is based on the popular Generalized Predictive Control (GPC) strategy. Finally, the performances of the proposed adaptive predictive controllers are illustrated on a simulated non-linear Piovoso CSTR.

The paper is organized as follows. First, the on-line non-linear system identification using the MGAP-RBF neural network is presented. Second, the proposed NMPC and GPC controllers are developed by employing the identified MGAP-RBF model. The remainder of the paper is devoted to demonstrating the servo and regulatory performances of the proposed adaptive predictive controllers on the CSTR benchmark simulation problem.

Dynamic Non-linear System Identification
GAP-RBF Algorithm
The GAP-RBF neural network is based on Gaussian RBF neural networks. The output of a Gaussian RBF network with K hidden neurons can be described as follows:

f(x_n) = \sum_{k=1}^{K} \alpha_k \phi_k(x_n)    (1)

where x_n is the input vector of the network, \alpha_k is the connecting weight of the k-th hidden neuron to the output neuron, and \phi_k(x_n) denotes the response of the k-th hidden unit to the input vector x_n, defined by the following Gaussian function:

\phi_k(x_n) = \exp\left( -\frac{\|x_n - \mu_k\|^2}{\sigma_k^2} \right)    (2)

where \mu_k and \sigma_k refer to the centre and width of the k-th hidden neuron, respectively, and \|\cdot\| denotes the Euclidean norm.
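As an illustration of Eqs. (1) and (2), a minimal sketch of the Gaussian RBF forward pass is given below (Python with NumPy; the function and array names are our own choices, not taken from the paper):

```python
import numpy as np

def rbf_output(x, centers, widths, weights):
    """Gaussian RBF network output, Eqs. (1)-(2).

    x       : (l,) input vector
    centers : (K, l) hidden-neuron centres mu_k
    widths  : (K,) hidden-neuron widths sigma_k
    weights : (K,) output weights alpha_k
    """
    # phi_k(x) = exp(-||x - mu_k||^2 / sigma_k^2)
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-sq_dist / widths ** 2)
    # f(x) = sum_k alpha_k * phi_k(x)
    return weights @ phi
```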

During the sequential learning process of GAP-RBF, a series of training samples (x_n, y_n), n = 1, 2, ..., is randomly drawn from a range X with a sampling density function P(X) and presented one-by-one to the network. Each training sample triggers one of three actions: adding a new hidden neuron, pruning the nearest hidden neuron, or adjusting the parameters of the nearest hidden neuron, based only on the significance of the nearest hidden neuron to the training sample. This is in contrast with the MRAN learning algorithm [9], in which all the neurons are checked for adding, pruning and adjusting purposes. This results in a reduction of the overall computations and thereby an increase in the learning speed. The significance of the k-th hidden neuron is defined as [13]:

E_{sig}(k) = \frac{(1.8\sigma_k)^l \, |\alpha_k|}{S(X)}    (3)

where l is the dimension of the input space (x \in \Re^l), and S(X) denotes the estimated size of the range X from which the training samples are drawn.

For both growing and pruning, it is shown in [13] that one needs to check only the nearest neuron, identified by the following Euclidean distance to the current input data x_n, for its significance:

\|x_n - \mu_{nr}\| = \min_{k=1,\dots,K} \|x_n - \mu_k\|    (4)

where \mu_{nr} is the centre of the hidden neuron nearest to x_n.

The learning process of GAP-RBF begins with no initial hidden neurons, similar to MRAN. As new observation data (x_n, y_n) are received during the training, some of them may initiate new hidden neurons based on the growing criteria. However, a newly added neuron may have an insignificant contribution to the overall performance of the whole network, in which case the neuron should not be added at all. Therefore, GAP-RBF uses the following enhanced growing criterion for each new observation (x_n, y_n) to prevent adding insignificant neurons, leading to a smooth growing process:

\|x_n - \mu_{nr}\| > \varepsilon_n, \quad |e_n| > e_{min}, \quad \frac{(1.8\kappa\|x_n - \mu_{nr}\|)^l \, |e_n|}{S(X)} > e_{min}    (5)

where x_n is the latest input received, \mu_{nr} is the centre of the hidden neuron nearest (in the Euclidean distance) to x_n, e_{min} is the desired approximation accuracy, and \varepsilon_n is a threshold to be selected appropriately. If the growing criteria (5) are satisfied for a new observation, a new significant neuron K+1 is added, with the parameters associated with the new hidden neuron taken as follows:

\alpha_{K+1} = e_n, \quad \mu_{K+1} = x_n, \quad \sigma_{K+1} = \kappa\|x_n - \mu_{nr}\|    (6)

where e_n = y_n - f(x_n).

In this case, all the other present neurons (k = 1, ..., K) remain significant and their parameters are left unchanged. Thus, no pruning check needs to be done after a new neuron is added. However, if a new observation (x_n, y_n) arrives and the growing criteria (5) are not satisfied, no new neuron is added and only the parameters of the nearest neuron (\alpha_{nr}, \mu_{nr}, \sigma_{nr}) are adjusted using the EKF or UKF learning algorithm. Then, the significance of the nearest (i.e., most recently adjusted) neuron is checked via the following pruning criterion:

E_{sig}(nr) = \frac{(1.8\sigma_{nr})^l \, |\alpha_{nr}|}{S(X)} < e_{min}    (7)

If the average contribution made by the nearest neuron over the whole range X is less than the expected accuracy e_{min}, it is taken as insignificant and should be removed. As discussed, at any time instant, only the single nearest neuron needs to be adjusted or checked for growing and pruning.
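The significance measure (3) and the nearest-neuron search (4) are straightforward to compute; a sketch follows, continuing the NumPy notation above (names are our own):

```python
import numpy as np

def nearest_neuron(x, centers):
    """Index of the hidden neuron nearest to x, Eq. (4)."""
    dists = np.linalg.norm(centers - x, axis=1)
    return int(np.argmin(dists))

def significance(k, widths, weights, l, S_X):
    """Significance of the k-th neuron, Eq. (3):
    E_sig(k) = (1.8 * sigma_k)^l * |alpha_k| / S(X)."""
    return (1.8 * widths[k]) ** l * abs(weights[k]) / S_X
```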

The complete description of the GAP-RBF learning algorithm [13] can be summarized as follows. Given an expected desired accuracy e_{min}, for each observation (x_n, y_n), where x_n \in \Re^l, do the following steps:

Compute the overall network output:

f(x_n) = \sum_{k=1}^{K} \alpha_k \exp\left( -\frac{1}{\sigma_k^2}\|x_n - \mu_k\|^2 \right)    (8)

where K is the number of hidden neurons.

Calculate the parameters required in the growth criterion:

\varepsilon_n = \max\{\varepsilon_{max}\gamma^n, \varepsilon_{min}\}, \quad 0 < \gamma < 1    (9)
e_n = y_n - f(x_n)

where \varepsilon_{min} and \varepsilon_{max} are the minimum and maximum distance thresholds, respectively.

Apply the growth criterion for adding neurons:
If |e_n| > e_{min} and \|x_n - \mu_{nr}\| > \varepsilon_n and (1.8\kappa\|x_n - \mu_{nr}\|)^l |e_n| / S(X) > e_{min},
allocate a new hidden neuron K+1 with:

\alpha_{K+1} = e_n, \quad \mu_{K+1} = x_n, \quad \sigma_{K+1} = \kappa\|x_n - \mu_{nr}\|    (10)

Else
Adjust the network parameters \alpha_{nr}, \mu_{nr}, \sigma_{nr} of the nearest neuron only, using the EKF algorithm.
Check the pruning criterion for the nearest (nr-th) hidden neuron:
If (1.8\sigma_{nr})^l |\alpha_{nr}| / S(X) < e_{min}, remove the nearest (nr-th) hidden neuron and make the necessary changes in the EKF algorithm.
Endif
Endif
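A compact sketch of one GAP-RBF sequential learning step, in the spirit of the pseudocode above, is given below. The dictionary-based network state, the seeding of the very first neuron, and the `adjust_nearest` placeholder (standing in for the EKF/UKF parameter update) are our own assumptions; `rbf_output` is the function sketched earlier.

```python
import numpy as np

def gap_rbf_step(net, x, y, e_min, eps_n, kappa, S_X, adjust_nearest):
    """One GAP-RBF learning step: growth criterion (5),
    allocation (6)/(10), pruning criterion (7)."""
    l = x.shape[0]
    if len(net["centers"]) == 0:
        # Pragmatic seed for the very first neuron (assumption:
        # the paper starts with no neurons, so f(x) = 0 and alpha = e_n = y).
        net["weights"], net["centers"] = [y], [x.copy()]
        net["widths"] = [kappa * eps_n]
        return
    C = np.array(net["centers"])
    nr = int(np.argmin(np.linalg.norm(C - x, axis=1)))
    d_nr = np.linalg.norm(x - C[nr])
    e_n = y - rbf_output(x, C, np.array(net["widths"]),
                         np.array(net["weights"]))
    grow = (abs(e_n) > e_min and d_nr > eps_n and
            (1.8 * kappa * d_nr) ** l * abs(e_n) / S_X > e_min)
    if grow:
        # Allocate a new neuron, Eq. (6).
        net["weights"].append(e_n)
        net["centers"].append(x.copy())
        net["widths"].append(kappa * d_nr)
    else:
        # Adjust only the nearest neuron (EKF/UKF update, abstracted here).
        adjust_nearest(net, nr, x, e_n)
        # Pruning check, Eq. (7).
        if (1.8 * net["widths"][nr]) ** l * abs(net["weights"][nr]) / S_X < e_min:
            for key in ("weights", "centers", "widths"):
                net[key].pop(nr)
```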
MGAP-RBF ALGORITHM
The original GAP-RBF algorithm has been modified as follows to enhance its capabilities for on-line system identification applications:
- Enhancing the smooth creation of the neurons.
- Enhancing the pruning criterion to prevent probable oscillation in the number of created neurons.
- Utilization of the UKF estimation algorithm to adjust the free network parameters.
- Utilization of a time-varying forgetting factor scheme to maintain a desired parameter tracking capability.

The proposed modifications are described in the following two sections.

a) The Modified Growing and Pruning Criteria
In order to have a smooth output response and avoid oscillation, the mechanism of adding and pruning should be allowed to change smoothly. The rate of adding or pruning of neurons can be controlled with the threshold values e_n and \varepsilon_n. Selection of these values depends on the complexity of the system, the input data used for identification and the required model accuracy. But the most important and effective factor is the Persistent Excitation (PE) property of the input data. If the input data have a sufficient degree of PE, a smooth and accurate output can be obtained with suitable adjustment of the threshold values. But if the inputs do not possess the PE property, which may occur for instance in the case of closed-loop identification, then tuning the threshold values cannot help, and hence some modifications of the growing and pruning criteria become necessary.

In on-line applications, identification usually starts without exact prior knowledge about the network structure and parameters. Thus, it is a better approach to allow the identification algorithm to adapt its modelling process with an initially higher rate of neuron growth, in order to resolve such uncertain circumstances as fast as possible in the beginning. Then, as the identification process continues and the input data become more prone to losing their richness (PE) property, it is logical to decrease the modelling sensitivity by lowering the rate of neuron growth. However, evaluating the original GAP-RBF algorithm [13] and its applications reported in [10-14] demonstrates that neurons are hardly added at the start of the modelling process. This can cause large initial identification errors, and the EKF learning algorithm may not be able to estimate the parameters properly under such an under-parameterized situation. As the rate of neuron creation in GAP-RBF can be controlled by \varepsilon_n (Eq. (9)), the following exponential time-varying pattern is proposed to make a gradual evolution of \varepsilon_n from an initial higher sensitivity \varepsilon_{min} to a final lower sensitivity \varepsilon_{max}:

\varepsilon_n = \varepsilon_{min} + (\varepsilon_{max} - \varepsilon_{min})(1 - e^{-n/\tau})    (11)

where \tau is a time-constant parameter that can be used to control the rate of evolution of \varepsilon_n.
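A one-line implementation of the threshold schedule (11), as a plain Python sketch (n is the sample index; the function name is ours):

```python
import math

def eps_schedule(n, eps_min, eps_max, tau):
    """Exponential threshold evolution, Eq. (11): starts near eps_min
    (many neurons admitted early) and saturates at eps_max
    (growth slows as identification proceeds)."""
    return eps_min + (eps_max - eps_min) * (1.0 - math.exp(-n / tau))
```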

Another problem is the probable oscillation in the number of created neurons, which can cause large errors in the identification results. This phenomenon can occur in on-line identification, especially when the number of created neurons is small and the input data have a small degree of PE. These detrimental effects can be mitigated by changing the pruning criterion (Eq. (7)) as follows:

\frac{(1.8\sigma_{nr})^l \, |\alpha_{nr}|}{S(X)} < \beta\, e_{min}    (12)

in which a new pruning factor \beta (0 < \beta < 1) has been added.

b) The UKF Learning Algorithm
The original GAP-RBF algorithm uses the Extended Kalman Filter (EKF) as its parameter adjusting algorithm. In practice, however, the use of the EKF has two well-known drawbacks:
- Linearization can produce highly unstable filters if the assumptions of local linearity are violated.
- The derivation of the Jacobian matrices is non-trivial in most applications and often leads to significant implementation difficulties.

To address these limitations, Julier & Uhlmann [11,15] developed the UKF algorithm. Let the process to be estimated and the associated observation relationship be described by the following non-linear state-space model:

x_{k+1} = f(x_k, u_k) + w_k
y_k = h(x_k) + v_k    (13)

where x_k represents the hidden states, u_k is the vector of known exogenous inputs, and y_k represents the vector of noisy measured outputs. The random variables w_k and v_k represent the process and measurement noises, respectively. Instead of linearizing these non-linear model equations using Jacobian matrices as in the EKF, the UKF uses a "deterministic sampling" approach to calculate the mean and covariance estimates of the Gaussian random state variables (x_k) with a minimal set of 2L+1 sample points (L is the state dimension), called sigma points. The results are accurate to the third order (Taylor series expansion) for Gaussian inputs for all non-linearities, whereas the linearization approach of the EKF gives only first-order accuracy.

The UKF algorithm can be implemented by the following steps:

1. Initialize with some initial guesses for the state estimate (x_0) and the error covariance matrix (P_0), defined as (E[\cdot] denotes the expected value):

\hat{x}_0 = E[x_0]
P_0 = E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T]    (14)

For k \in \{1, \dots, \infty\}:

2. Calculate the sigma points:

\chi_{k-1} = \left[ \hat{x}_{k-1}, \;\; \hat{x}_{k-1} + \gamma\sqrt{P_{k-1}}, \;\; \hat{x}_{k-1} - \gamma\sqrt{P_{k-1}} \right]    (15)

where \lambda = \alpha^2(L + \kappa) - L and \gamma = \sqrt{L + \lambda} are scaling parameters.

3. Time update equations:

\chi^*_{k|k-1} = f[\chi_{k-1}, u_{k-1}]    (16)
\hat{x}^-_k = \sum_{i=0}^{2L} W^{(m)}_i \chi^*_{i,k|k-1}    (17)
P^-_k = \sum_{i=0}^{2L} W^{(c)}_i \left( \chi^*_{i,k|k-1} - \hat{x}^-_k \right)\left( \chi^*_{i,k|k-1} - \hat{x}^-_k \right)^T + R^w    (18)
\chi_{k|k-1} = \left[ \hat{x}^-_k, \;\; \hat{x}^-_k + \gamma\sqrt{P^-_k}, \;\; \hat{x}^-_k - \gamma\sqrt{P^-_k} \right]    (19)
\Upsilon_{k|k-1} = h[\chi_{k|k-1}]    (20)
\hat{y}^-_k = \sum_{i=0}^{2L} W^{(m)}_i \Upsilon_{i,k|k-1}    (21)

where \{W^{(m)}_i\} and \{W^{(c)}_i\} are sets of scalar weights and R^w is the process noise covariance.

4. Measurement update equations:

P_{y_k y_k} = \sum_{i=0}^{2L} W^{(c)}_i \left( \Upsilon_{i,k|k-1} - \hat{y}^-_k \right)\left( \Upsilon_{i,k|k-1} - \hat{y}^-_k \right)^T + R^v    (22)
P_{x_k y_k} = \sum_{i=0}^{2L} W^{(c)}_i \left( \chi_{i,k|k-1} - \hat{x}^-_k \right)\left( \Upsilon_{i,k|k-1} - \hat{y}^-_k \right)^T    (23)
K_k = P_{x_k y_k} P^{-1}_{y_k y_k}    (24)
\hat{x}_k = \hat{x}^-_k + K_k (y_k - \hat{y}^-_k)    (25)
P_k = P^-_k - K_k P_{y_k y_k} K^T_k    (26)

where R^v is the measurement noise covariance.
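For concreteness, a minimal NumPy sketch of one UKF cycle (Eqs. (15)-(26)) is given below. The function names and the default scaling constants are our own choices rather than values prescribed by the paper; in the identification context here, the "state" would be the free parameter vector of the nearest neuron.

```python
import numpy as np

def ukf_step(x_hat, P, u, y, f, h, Rw, Rv, alpha=1e-3, beta=2.0, kappa=0.0):
    """One UKF predict/update cycle, Eqs. (15)-(26).
    f(x, u) -> next state; h(x) -> measurement.
    Rw, Rv: process / measurement noise covariances."""
    L = x_hat.shape[0]
    lam = alpha ** 2 * (L + kappa) - L
    gamma = np.sqrt(L + lam)

    # Scalar weight sets W^(m), W^(c).
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1 - alpha ** 2 + beta)

    def sigma_points(m, C):
        # Eqs. (15)/(19): [m, m + gamma*sqrt(C), m - gamma*sqrt(C)].
        S = np.linalg.cholesky(C)
        return np.column_stack([m, m[:, None] + gamma * S,
                                m[:, None] - gamma * S])

    # Time update, Eqs. (16)-(21).
    X = sigma_points(x_hat, P)
    X_star = np.column_stack([f(X[:, i], u) for i in range(2 * L + 1)])
    x_pred = X_star @ Wm                                   # Eq. (17)
    dX = X_star - x_pred[:, None]
    P_pred = dX @ np.diag(Wc) @ dX.T + Rw                  # Eq. (18)
    X2 = sigma_points(x_pred, P_pred)                      # Eq. (19)
    Y = np.column_stack([np.atleast_1d(h(X2[:, i]))
                         for i in range(2 * L + 1)])       # Eq. (20)
    y_pred = Y @ Wm                                        # Eq. (21)

    # Measurement update, Eqs. (22)-(26).
    dY = Y - y_pred[:, None]
    dX2 = X2 - x_pred[:, None]
    Pyy = dY @ np.diag(Wc) @ dY.T + Rv                     # Eq. (22)
    Pxy = dX2 @ np.diag(Wc) @ dY.T                         # Eq. (23)
    K = Pxy @ np.linalg.inv(Pyy)                           # Eq. (24)
    x_new = x_pred + K @ (np.atleast_1d(y) - y_pred)       # Eq. (25)
    P_new = P_pred - K @ Pyy @ K.T                         # Eq. (26)
    return x_new, P_new
```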

In on-line identification, the estimation learning algorithm should be fast enough to adapt the identification model to any possible time-varying dynamic changes in the process. The covariance matrix can be initialized with a large value. This option, however, causes rapid fluctuations in the initial neural network parameter estimates and hence endangers the estimator convergence. Conversely, choosing a small initial covariance matrix makes the estimator adaptation very slow. On the other hand, when the process dynamics change, some of the previous estimation information loses its accuracy as far as the new process dynamics are concerned. Thus, there should be a means of draining off old information at a controlled rate. One useful way of realizing the desired approach is to modify the covariance matrix update relationship (Eq. (26)) as follows:

P_k = \left( P^-_k - K_k P_{y_k y_k} K^T_k \right) / \eta_k    (27)

where \eta_k behaves like the forgetting factor in the usual Recursive Least Squares (RLS) algorithm and undergoes the following time-varying evolution:

\eta_k = \eta_{k-1} + (1 - \eta_{k-1})(1 - e^{-t/\delta}), \quad 0 < \eta_k \le 1    (28)

where t is the recursive time interval spent in the UKF learning algorithm estimating the GAP-RBF neural network free parameters with a fixed structure. Thus, t is reset to zero whenever any network structural change, i.e., neuron creation or pruning, occurs. This scheme maintains a desired parameter adaptation capability in the UKF algorithm whenever the process dynamics undergo a time-varying change: \eta_k starts with a lower initial value to accelerate the parameter estimation, and then changes exponentially, with a desired time constant \delta, toward a higher final value to assure the estimator convergence property.
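A sketch of the forgetting-factor modification, as it would slot into the UKF cycle above (the bookkeeping variable `t_since_change` is our own naming for the reset interval t):

```python
import math

def forgetting_factor(eta_prev, t_since_change, delta):
    """Time-varying forgetting factor, Eq. (28). t_since_change is
    reset to zero whenever a neuron is created or pruned."""
    return eta_prev + (1.0 - eta_prev) * (1.0 - math.exp(-t_since_change / delta))

# Modified covariance update, Eq. (27): dividing by eta_k (< 1 early on)
# inflates P, discounting old information at a controlled rate:
#     P_new = (P_pred - K @ Pyy @ K.T) / eta_k
```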

ADAPTIVE NEURAL-BASED MODEL PREDICTIVE CONTROLLERS
In this section, two adaptive versions of neural-based MPC controllers are proposed. Both controllers utilize the MGAP-RBF neural network as the generic non-linear model of the process to be controlled, in order to predict its response with reasonable accuracy over a finite time horizon. One standard model structure that has been used for non-linear
