Neural Networks For Control - Oklahoma State University-Stillwater


Neural Networks for Control

Martin T. Hagan
School of Electrical & Computer Engineering
Oklahoma State University
mhagan@ieee.org

Howard B. Demuth
Electrical Engineering Department
University of Idaho
hdemuth@uidaho.edu

Abstract

The purpose of this tutorial is to provide a quick overview of neural networks and to explain how they can be used in control systems. We introduce the multilayer perceptron neural network and describe how it can be used for function approximation. The backpropagation algorithm (including its variations) is the principal procedure for training multilayer perceptrons; it is briefly described here. Care must be taken, when training perceptron networks, to ensure that they do not overfit the training data and then fail to generalize well in new situations. Several techniques for improving generalization are discussed. The tutorial also presents several control architectures, such as model reference adaptive control, model predictive control, and internal model control, in which multilayer perceptron neural networks can be used as basic building blocks.

1. Introduction

In this tutorial we want to give a brief introduction to neural networks and their application in control systems. The field of neural networks covers a very broad area. It would be impossible in a short time to discuss all types of neural networks. Instead, we will concentrate on the most common neural network architecture, the multilayer perceptron. We will describe the basics of this architecture, discuss its capabilities, and show how it has been used in several different control system configurations. (For introductions to other types of networks, the reader is referred to [HDB96], [Bish95] and [Hayk99].)

For the purposes of this tutorial we will look at neural networks as function approximators. As shown in Figure 1, we have some unknown function that we wish to approximate.
We want to adjust the parameters of the network so that it will produce the same response as the unknown function, if the same input is applied to both systems.

For our applications, the unknown function may correspond to a system we are trying to control, in which case the neural network will be the identified plant model. The unknown function could also represent the inverse of a system we are trying to control, in which case the neural network can be used to implement the controller. At the end of this tutorial we will present several control architectures demonstrating a variety of uses for function approximator neural networks.

Figure 1 Neural Network as Function Approximator (the unknown function and the network receive the same input; the error between the unknown function's output and the predicted output drives the adaptation)

In the next section we will present the multilayer perceptron neural network, and will demonstrate how it can be used as a function approximator.

2. Multilayer Perceptron Architecture

2.1 Neuron Model

The multilayer perceptron neural network is built up of simple components. We will begin with a single-input neuron, which we will then extend to multiple inputs. We will next stack these neurons together to produce layers. Finally, we will cascade the layers together to form the network.

2.1.1 Single-Input Neuron

A single-input neuron is shown in Figure 2. The scalar input p is multiplied by the scalar weight w to form wp, one of the terms that is sent to the summer. The other input, 1, is multiplied by a bias b and then passed to the summer. The summer output n, often referred to as the net input, goes into a transfer function f, which produces the scalar neuron output a. (Some authors use the term "activation function"

rather than transfer function and "offset" rather than bias.)

Figure 2 Single-Input Neuron

The neuron output is calculated as

a = f(wp + b).    (1)

Note that w and b are both adjustable scalar parameters of the neuron. Typically the transfer function is chosen by the designer, and then the parameters w and b will be adjusted by some learning rule so that the neuron input/output relationship meets some specific goal.

The transfer function in Figure 2 may be a linear or a nonlinear function of n. A particular transfer function is chosen to satisfy some specification of the problem that the neuron is attempting to solve. One of the most commonly used functions is the log-sigmoid transfer function, which is shown in Figure 3.

2.1.2 Multiple-Input Neuron

Typically, a neuron has more than one input. A neuron with R inputs is shown in Figure 4. The individual inputs p_1, p_2, ..., p_R are each weighted by corresponding elements w_{1,1}, w_{1,2}, ..., w_{1,R} of the weight matrix W.

Figure 4 Multiple-Input Neuron

The neuron has a bias b, which is summed with the weighted inputs to form the net input n:

n = w_{1,1} p_1 + w_{1,2} p_2 + ... + w_{1,R} p_R + b.    (2)

This expression can be written in matrix form:

n = Wp + b,    (3)

where the matrix W for the single neuron case has only one row.

Now the neuron output can be written as

a = f(Wp + b).

We have adopted a particular convention in assigning the indices of the elements of the weight matrix. The first index indicates the particular neuron destination for that weight. The second index indicates the source of the signal fed to the neuron.
Thus, the indices in w_{1,2} say that this weight represents the connection to the first (and only) neuron from the second source.

Figure 3 Log-Sigmoid Transfer Function (a = logsig(n), with a approaching 0 as n goes to minus infinity and 1 as n goes to plus infinity)

This transfer function takes the input (which may have any value between plus and minus infinity) and squashes the output into the range 0 to 1, according to the expression:

a = 1 / (1 + e^{-n}).    (4)

The log-sigmoid transfer function is commonly used in multilayer networks that are trained using the backpropagation algorithm, in part because this function is differentiable.

We would like to draw networks with several neurons, each having several inputs. Further, we would like to have more than one layer of neurons. You can imagine how complex such a network might appear if all the lines were drawn. It would take a lot of ink, could hardly be read, and the mass of detail might obscure the main features. Thus, we will use an abbreviated notation. A multiple-input neuron using this notation is shown in Figure 5.
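Eq. (4) is a one-line computation in most languages. As a minimal Python sketch (the function name logsig simply mirrors the notation of Figure 3):

```python
import math

def logsig(n):
    """Log-sigmoid transfer function of Eq. (4): squashes any real net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-n))
```

Here logsig(0.0) returns exactly 0.5, the midpoint of the squashing range, and the output approaches 0 and 1 as n goes to minus and plus infinity.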

Figure 5 Neuron with R Inputs, Abbreviated Notation

As shown in Figure 5, the input vector p is represented by the solid vertical bar at the left. The dimensions of p are displayed below the variable as R x 1, indicating that the input is a single vector of R elements. These inputs go to the weight matrix W, which has R columns but only one row in this single neuron case. A constant 1 enters the neuron as an input and is multiplied by a scalar bias b. The net input to the transfer function f is n, which is the sum of the bias b and the product Wp. The neuron's output a is a scalar in this case. If we had more than one neuron, the network output would be a vector.

Note that the number of inputs to a network is set by the external specifications of the problem. If, for instance, you want to design a neural network that is to predict kite-flying conditions and the inputs are air temperature, wind velocity and humidity, then there would be three inputs to the network.

2.2. Network Architectures

Commonly one neuron, even with many inputs, may not be sufficient. We might need five or ten, operating in parallel, in what we will call a "layer." This concept of a layer is discussed below.

2.2.1 A Layer of Neurons

A single-layer network of S neurons is shown in Figure 6. Note that each of the R inputs is connected to each of the neurons and that the weight matrix now has S rows.

Figure 6 Layer of S Neurons

Each element of the input vector p is connected to each neuron through the weight matrix W. Each neuron has a bias b_i, a summer, a transfer function f and an output a_i. Taken together, the outputs form the output vector a.

It is common for the number of inputs to a layer to be different from the number of neurons (i.e., R is not equal to S). You might ask if all the neurons in a layer must have the same transfer function. The answer is no; you can define a single (composite) layer of neurons having different transfer functions by combining two of the networks shown above in parallel. Both networks would have the same inputs, and each network would create some of the outputs.

The input vector elements enter the network through the weight matrix W:

W = [ w_{1,1} w_{1,2} ... w_{1,R} ;
      w_{2,1} w_{2,2} ... w_{2,R} ;
      ...
      w_{S,1} w_{S,2} ... w_{S,R} ].    (5)

As noted previously, the row indices of the elements of matrix W indicate the destination neuron associated with that weight, while the column indices indicate the source of the input for that weight. Thus, the indices in w_{3,2} say that this weight represents the connection to the third neuron from the second source.

The layer includes the weight matrix, the summers, the bias vector b, the transfer function boxes and the output vector a. Some authors refer to the inputs as another layer, but we will not do that here.

Fortunately, the S-neuron, R-input, one-layer network also can be drawn in abbreviated notation, as shown in Figure 7.

Here again, the symbols below the variables tell you that for this layer, p is a vector of length R, W is an S x R matrix, and a and b are vectors of length S. As defined previously, the layer includes the weight matrix, the summation and multiplication operations, the bias vector b, the transfer function boxes and the output vector.
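The layer computation a = f(Wp + b) can be sketched directly from this notation. In the sketch below, W is stored as S rows of R weights; the helper names are illustrative, not from the tutorial:

```python
import math

def logsig(n):
    """Log-sigmoid transfer function of Eq. (4)."""
    return 1.0 / (1.0 + math.exp(-n))

def layer(W, b, p, f):
    """a = f(Wp + b) for one layer: W is an S x R list of rows, b has length S, p has length R."""
    return [f(sum(w * x for w, x in zip(row, p)) + bi) for row, bi in zip(W, b)]
```

For example, a two-neuron, three-input layer whose net inputs are zero returns [0.5, 0.5] when f is the log-sigmoid, since logsig(0) = 0.5.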

Figure 7 Layer of S Neurons, Abbreviated Notation

2.2.2 Multiple Layers of Neurons

Now consider a network with several layers. Each layer has its own weight matrix W, its own bias vector b, a net input vector n and an output vector a. We need to introduce some additional notation to distinguish between these layers. We will use superscripts to identify the layers. Specifically, we append the number of the layer as a superscript to the names for each of these variables. Thus, the weight matrix for the first layer is written as W^1, and the weight matrix for the second layer is written as W^2. This notation is used in the three-layer network shown in Figure 8.

As shown, there are R inputs, S^1 neurons in the first layer, S^2 neurons in the second layer, etc. As noted, different layers can have different numbers of neurons.

A layer whose output is the network output is called an output layer. The other layers are called hidden layers. The network shown in Figure 8 has an output layer (layer 3) and two hidden layers (layers 1 and 2).

The outputs of layers one and two are the inputs for layers two and three. Thus layer 2 can be viewed as a one-layer network with R = S^1 inputs, S = S^2 neurons, and an S^2 x S^1 weight matrix W^2. The input to layer 2 is a^1, and the output is a^2.

Figure 8 Three-Layer Network: a^1 = f^1(W^1 p + b^1), a^2 = f^2(W^2 a^1 + b^2), a^3 = f^3(W^3 a^2 + b^3), so that a^3 = f^3(W^3 f^2(W^2 f^1(W^1 p + b^1) + b^2) + b^3)

3. Approximation Capabilities of Multilayer Networks

Two-layer networks, with sigmoid transfer functions in the hidden layer and linear transfer functions in the output layer, are universal approximators. A simple example can demonstrate the power of this network for approximation.

Consider the two-layer, 1-2-1 network shown in Figure 9.
For this example the transfer function for the first layer is log-sigmoid and the transfer function for the second layer is linear. In other words,

f^1(n) = 1 / (1 + e^{-n})  and  f^2(n) = n.

Figure 9 Example Function Approximation Network (1-2-1 network: a^1 = logsig(W^1 p + b^1), a^2 = purelin(W^2 a^1 + b^2))

Suppose that the nominal values of the weights and biases for this network are

w^1_{1,1} = 10,  w^1_{2,1} = 10,  b^1_1 = -10,  b^1_2 = 10,
w^2_{1,1} = 1,  w^2_{1,2} = 1,  b^2 = 0.

The network response for these parameters is shown in Figure 10, which plots the network output a^2 as the input p is varied over the range [-2, 2].

Notice that the response consists of two steps, one for each of the log-sigmoid neurons in the first layer. By adjusting the network parameters we can change the shape and location of each step, as we will see in the following discussion.

The centers of the steps occur where the net input to a neuron in the first layer is zero:

n^1_1 = w^1_{1,1} p + b^1_1 = 0  implies  p = -b^1_1 / w^1_{1,1} = -(-10)/10 = 1,    (7)

n^1_2 = w^1_{2,1} p + b^1_2 = 0  implies  p = -b^1_2 / w^1_{2,1} = -10/10 = -1.    (8)

The steepness of each step can be adjusted by changing the network weights.

Figure 10 Nominal Response of Network of Figure 9

Figure 11 illustrates the effects of parameter changes on the network response. The nominal response is repeated from Figure 10. The other curves correspond to the network response when one parameter at a time is varied over the following ranges:

-1 <= w^2_{1,1} <= 1,  -1 <= w^2_{1,2} <= 1,  0 <= b^2 <= 20,  -1 <= b^1_1 <= 1.    (9)

Figure 11 Effect of Parameter Changes on Network Response

Figure 11 (a) shows how the network biases in the first (hidden) layer can be used to locate the position of the steps. Figure 11 (b) illustrates how the weights determine the slope of the steps. The bias in the second (output) layer shifts the entire network response up or down, as can be seen in Figure 11 (d).

From this example we can see how flexible the multilayer network is. It would appear that we could use such networks to approximate almost any function, if we had a sufficient number of neurons in the hidden layer. In fact, it has been shown that two-layer networks, with sigmoid transfer functions in the hidden layer and linear transfer functions in the output layer, can approximate virtually any function of interest to any degree of accuracy, provided sufficiently many hidden units are available (see [HoSt89]).

4. Training Multilayer Networks

Now that we know multilayer networks are universal approximators, the next step is to determine a procedure for selecting the network parameters (weights and biases) which will best approximate a given function. The procedure for selecting the parameters for a given problem is called training the network. In this section we will outline a training procedure called backpropagation, which is based on gradient descent. (More efficient algorithms than gradient descent are often used in neural network training. The reader is referred to [HDB96] for discussions of these other algorithms.)

As we discussed earlier, for multilayer networks the output of one layer becomes the input to the following layer (see Figure 8). The equations that describe this operation are

a^{m+1} = f^{m+1}(W^{m+1} a^m + b^{m+1})  for m = 0, 1, ..., M-1,    (10)

where M is the number of layers in the network. The neurons in the first layer receive external inputs:

a^0 = p,    (11)

which provides the starting point for Eq. (10). The outputs of the neurons in the last layer are considered the network outputs:

a = a^M.    (12)

4.1. Performance Index

The backpropagation algorithm for multilayer networks is a gradient descent optimization procedure in which we minimize a mean square error performance index. The algorithm is provided with a set of examples of proper network behavior:

{p_1, t_1}, {p_2, t_2}, ..., {p_Q, t_Q},    (13)

where p_q is an input to the network, and t_q is the corresponding target output. As each input is applied to the network, the network output is compared to the target. The algorithm should adjust the network parameters in order to minimize the sum squared error:

F(x) = Sum_{q=1}^{Q} e_q^2 = Sum_{q=1}^{Q} (t_q - a_q)^2,    (14)

where x is a vector containing all of the network weights and biases. If the network has multiple outputs this generalizes to

F(x) = Sum_{q=1}^{Q} e_q^T e_q = Sum_{q=1}^{Q} (t_q - a_q)^T (t_q - a_q).    (15)

Using a stochastic approximation, we will replace the sum squared error by the error on the latest target:

F^(x) = (t(k) - a(k))^T (t(k) - a(k)) = e^T(k) e(k),    (16)

where the expectation of the squared error has been replaced by the squared error at iteration k.

The steepest descent algorithm for the approximate mean square error is

w^m_{i,j}(k+1) = w^m_{i,j}(k) - alpha dF^/dw^m_{i,j},    (17)

b^m_i(k+1) = b^m_i(k) - alpha dF^/db^m_i,    (18)

where alpha is the learning rate.

4.2. Chain Rule

For a single-layer linear network these partial derivatives in Eq. (17) and Eq. (18) are conveniently computed, since the error can be written as an explicit linear function of the network weights. For the multilayer network the error is not an explicit function of the weights in the hidden layers, therefore these derivatives are not computed so easily.

Because the error is an indirect function of the weights in the hidden layers, we will use the chain rule of calculus to calculate the derivatives in Eq. (17) and Eq. (18):

dF^/dw^m_{i,j} = (dF^/dn^m_i) (dn^m_i/dw^m_{i,j}),    (19)

dF^/db^m_i = (dF^/dn^m_i) (dn^m_i/db^m_i).    (20)

The second term in each of these equations can be easily computed, since the net input to layer m is an explicit function of the weights and bias in that layer:

n^m_i = Sum_{j=1}^{S^{m-1}} w^m_{i,j} a^{m-1}_j + b^m_i.    (21)

Therefore

dn^m_i/dw^m_{i,j} = a^{m-1}_j,   dn^m_i/db^m_i = 1.    (22)

If we now define

s^m_i = dF^/dn^m_i    (23)

(the sensitivity of F^ to changes in the ith element of the net input at layer m), then Eq. (19) and Eq. (20) can be simplified to

dF^/dw^m_{i,j} = s^m_i a^{m-1}_j,    (24)

dF^/db^m_i = s^m_i.    (25)

We can now express the approximate steepest descent algorithm as

w^m_{i,j}(k+1) = w^m_{i,j}(k) - alpha s^m_i a^{m-1}_j,    (26)

b^m_i(k+1) = b^m_i(k) - alpha s^m_i.    (27)

In matrix form this becomes:

W^m(k+1) = W^m(k) - alpha s^m (a^{m-1})^T,    (28)

b^m(k+1) = b^m(k) - alpha s^m,    (29)

where the individual elements of s^m are given by Eq. (23).

4.3. Backpropagating the Sensitivities

It now remains for us to compute the sensitivities s^m, which requires another application of the chain rule. It is this process that gives us the term backpropagation, because it describes a recurrence relationship in which the sensitivity at layer m is computed from the sensitivity at layer m+1:

s^M = -2 F'^M(n^M) (t - a),    (30)

s^m = F'^m(n^m) (W^{m+1})^T s^{m+1},  m = M-1, ..., 2, 1,    (31)

where

F'^m(n^m) = diag( f'^m(n^m_1), f'^m(n^m_2), ..., f'^m(n^m_{S^m}) ).    (32)

(See [HDB96], Chapter 11 for a derivation of this result.)

4.4. Variations of Backpropagation

In some ways it is unfortunate that the algorithm we usually refer to as backpropagation, given by Eq. (28) and Eq. (29), is in fact simply a steepest descent algorithm. There are many other optimization algorithms that can use the backpropagation procedure, in which derivatives are processed from the last layer of the network to the first (as given in Eq. (31)). For example, conjugate gradient and quasi-Newton algorithms ([Shan90], [Scal85], [Char92]) are generally more efficient than steepest descent algorithms, and yet they can use the same backpropagation procedure to compute the necessary derivatives. The Levenberg-Marquardt algorithm is very efficient for training small to medium-size networks, and it uses a backpropagation procedure that is very similar to the one given by Eq. (31) (see [HaMe94]).

We should emphasize that all of the algorithms that we will describe in this chapter use the backpropagation procedure, in which derivatives are processed from the last layer of the network to the first. For this reason they could all be called "backpropagation" algorithms. The differences between the algorithms occur in the way in which the resulting derivatives are used to update the weights.

4.5. Generalization (Interpolation & Extrapolation)

We now know that multilayer networks are universal approximators, but we have not discussed how to select the number of neurons and the number of layers necessary to achieve an accurate approximation in a given problem. We have also not discussed how the training data set should be selected. The trick is to use enough neurons to capture the complexity of the underlying function without having the network overfit the training data, in which case it will not generalize to new situations. We also need to have sufficient training data to adequately represent the underlying function.

To illustrate the problems we can have in network training, consider the following general example. Assume that the training data is generated by the following equation:

t_q = g(p_q) + e_q,

where p_q is the system input, g(.) is the underlying function we wish to approximate, e_q is measurement noise, and t_q is the system output (network target).

Figure 12 Example of Overfitting a) and Good Fit b)
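The steepest-descent backpropagation of Eqs. (26)-(31) can be sketched end to end. This is a minimal sketch, not the authors' implementation: the 1-10-1 network size, the learning rate, and the target function sin(p) are illustrative assumptions.

```python
import math
import random

def logsig(n):
    return 1.0 / (1.0 + math.exp(-n))

# 1-S-1 network: log-sigmoid hidden layer, linear output layer.
random.seed(0)
S = 10
W1 = [random.uniform(-0.5, 0.5) for _ in range(S)]
b1 = [random.uniform(-0.5, 0.5) for _ in range(S)]
W2 = [random.uniform(-0.5, 0.5) for _ in range(S)]
b2 = 0.0

def forward(p):
    a1 = [logsig(w * p + b) for w, b in zip(W1, b1)]   # a1 = f1(W1 p + b1)
    a2 = sum(w * a for w, a in zip(W2, a1)) + b2       # a2 = W2 a1 + b2 (linear)
    return a1, a2

def train_step(p, t, alpha=0.05):
    global b2
    a1, a2 = forward(p)
    s2 = -2.0 * (t - a2)                  # Eq. (30); the output layer is linear, so f' = 1
    s1 = [a * (1.0 - a) * W2[i] * s2      # Eq. (31); the log-sigmoid derivative is a(1 - a)
          for i, a in enumerate(a1)]
    for i in range(S):                    # Eqs. (26)-(27): steepest descent updates
        W2[i] -= alpha * s2 * a1[i]
        W1[i] -= alpha * s1[i] * p
        b1[i] -= alpha * s1[i]
    b2 -= alpha * s2

# Approximate g(p) = sin(p) over [-2, 2] from 21 samples.
data = [(-2.0 + 0.2 * q, math.sin(-2.0 + 0.2 * q)) for q in range(21)]
for epoch in range(2000):
    for p, t in data:
        train_step(p, t)

mse = sum((t - forward(p)[1]) ** 2 for p, t in data) / len(data)
```

After training, the mean squared error over the samples is small; in practice the more efficient algorithms cited above (conjugate gradient, Levenberg-Marquardt) would replace this plain gradient descent.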

Figure 12 shows an example of the underlying function g(.) (thick line), training data target values t_q (large circles), network responses for the training inputs a_q (small circles with embedded crosses), and total trained network response (thin line).

In the example shown in Figure 12 a), a large network was trained to minimize squared error (Eq. (14)) over the 15 points in the training set. We can see that the network response exactly matches the target values for each training point. However, the total network response has failed to capture the underlying function. There are two major problems. First, the network has overfit on the training data. The network response is too complex, because the network has too many independent parameters (61) and they have not been constrained in any way. The second problem is that there is no training data for values of p greater than 0. Neural networks (and all other data-based approximation techniques) cannot be expected to extrapolate accurately. If the network receives an input which is outside of the range covered in the training data, then the network response will always be suspect.

While there is little we can do to improve the network performance outside the range of the training data, we can improve its ability to interpolate between data points. Improved generalization can be obtained through a variety of techniques. In one method, called early stopping, we place a portion of the training data into a validation data set. The performance of the network on the validation set is monitored during training. During the early stages of training the validation error will come down.
When overfitting begins, the validation error will begin to increase, and at this point the training is stopped.

Another technique to improve network generalization is called regularization. With this method the performance index is modified to include a term which penalizes network complexity. The most common penalty term is the sum of squares of the network weights:

F(x) = Sum_{q=1}^{Q} e_q^T e_q + rho Sum_{k,i,j} (w^k_{i,j})^2.    (34)

This performance index forces the weights to be small, which produces a smoother network response. The trick with this method is to choose the correct regularization parameter rho. If the value is too large, then the network response will be too smooth and will not accurately approximate the underlying function. If the value is too small, then the network will overfit. There are a number of methods for selecting the optimal rho. One of the most successful is Bayesian regularization ([MacK92] and [FoHa97]).

Figure 12 b) shows the network response when the network is trained with Bayesian regularization. Notice that the network response no longer exactly matches the training data points, but the overall network response more closely matches the underlying function over the range of the training data.

Even with Bayesian regularization, the network response is not accurate outside the range of the training data. As we mentioned earlier, we cannot expect the network to extrapolate accurately. If we want the network to respond accurately throughout the range [-3, 3], then we need to provide training data throughout this range. This can be more problematic in multi-input cases, as shown in Figure 13. On the top graph we have the underlying function. On the bottom graph we have the neural network approximation. The training inputs were provided over the entire range of each input, but only for cases where the first input was greater than the second input. We can see that the network approximation is good for cases within the training set, but is poor for all cases where the second input is larger than the first input.

Figure 13 Two-Input Example of Poor Network Extrapolation

A complete discussion of generalization and overfitting is beyond the scope of this tutorial. The interested reader is referred to [HDB96], [Hayk99], [MacK92] or [FoHa97].

In the next section we will describe how multilayer networks can be used in neurocontrol applications.

5. Control System Applications

Neural networks have been applied very successfully in the identification and control of dynamic systems. The universal approximation capabilities of the multilayer perceptron have made it a popular choice for modeling nonlinear systems and for implementing general-purpose nonlinear controllers. In the remainder of this tutorial we will introduce some of the more popular neural network architectures for system identification and control.

5.1. Fixed Stabilizing Controllers

Fixed stabilizing controllers (see Figure 14) have been proposed in [Kawa90], [KrCa90], and [Mill87]. This scheme has been applied to the control of robot arm trajectory, where a proportional controller with gain was used as the stabilizing feedback controller. From Figure 14 we can see that the total input that enters the plant is the sum of the feedback control signal and the feedforward control signal, which is calculated from the inverse dynamics model (neural network). That model uses the desired trajectory as the input and the feedback control as an error signal. As the NN training advances, that input will converge to zero. The neural network controller will learn to take over from the feedback controller.

The advantage of this architecture is that we can start with a stable system, even though the neural network has not been adequately trained. A similar (although more complex) control architecture, in which stabilizing controllers are used in parallel with neural network controllers, is described in [SaSl92].

Figure 14 Stabilizing Controller (NN inverse dynamics model in the feedforward path, fixed stabilizing controller in the feedback path)

5.2. Adaptive Inverse Control

Figure 15 shows a structure for the Model Reference Adaptive Inverse Control proposed in [WiWa96].
The adaptive algorithm receives the error between the plant output and the reference model output. The controller parameters are updated to minimize that tracking error. The basic model reference adaptive control approach can be affected by sensor noise and plant disturbances. An alternative which allows cancellation of the noise and disturbances includes a neural network plant model in parallel with the plant. That model will be trained to receive the same inputs as the plant and to produce the same output. The difference between the outputs will be interpreted as the effect of the noise and disturbances at the plant output. That signal will enter an inverse plant model to generate a filtered noise and disturbance signal that is subtracted from the plant input. The idea is to cancel the disturbance and the noise present in the plant.

5.3. Nonlinear Internal Model Control

Nonlinear Internal Model Control (NIMC), shown in Figure 16, consists of a neural network controller, a neural network plant model, and a robustness filter with a single tuning parameter [NaHe92]. The neural network controller is generally trained to represent the inverse of the plant, if the inverse exists. The error between the output of the neural network plant model and the measurement of plant output is used as the feedback input to the robustness filter, which then feeds into the neural network controller. The NN plant model and the NN controller (if it is an inverse plant model) can be trained off-line, using data collected from plant operations. The robustness filter is a first order filter whose time constant is selected to ensure closed loop stability.
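The NIMC signal flow can be sketched as a simulation loop. Everything concrete below is an assumption for illustration: the plant and its "neural network" model are stand-in linear functions, the controller is an exact inverse of the model, and the robustness filter is a discrete first-order lag with parameter alpha.

```python
# Toy NIMC loop (Figure 16). The NN controller and NN plant model are replaced
# by simple stand-in functions; only the structure of the loop follows the text.
def simulate_nimc(reference, steps=50, alpha=0.5):
    plant_model = lambda u: 0.8 * u          # stand-in "NN plant model"
    controller  = lambda r: r / 0.8          # stand-in inverse-model controller
    plant       = lambda u: 0.8 * u + 0.05   # true plant differs from the model
    filtered = 0.0                           # robustness filter state
    y = 0.0
    for _ in range(steps):
        u = controller(reference - filtered) # filter output feeds the controller
        y = plant(u)
        error = y - plant_model(u)           # plant vs. model mismatch
        filtered += alpha * (error - filtered)  # first-order robustness filter
    return y
```

Because the model error is fed back through the filter, the loop removes the constant plant-model mismatch and the output settles at the reference, which is the point of the internal model structure.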

Figure 15 Adaptive Inverse Control System

Figure 16 Nonlinear Internal Model Control

5.4. Model Predictive Control

Model Predictive Control (MPC), shown in Figure 18, optimizes the plant response over a specified time horizon [HuSb92]. This architecture requires a neural network plant model, a neural network controller, a performance function to evaluate system responses, and an optimization procedure to select the best control input.

The optimization procedure can be computationally expensive. It requires a multi-step ahead calculation, in which the neural network model is used to predict the plant response. The neural network controller learns to produce the input selected by the optimization process. When training is complete, the optimization step can be completely replaced by the neural network controller.

5.5. Model Reference Control or Neural Adaptive Control

As with other techniques, the Model Reference Adaptive Control (MRAC) configuration [NaPa90] uses two neural networks: a controller network and a model network. (See Figure 17.) The model network can be trained off-line using historical plant measurements. The controller is adaptively trained to force
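The multi-step optimization at the heart of the MPC scheme can be sketched as a receding-horizon search. This is a minimal sketch under stated assumptions: a stand-in one-step model replaces the neural network plant model, the cost is quadratic tracking error, and a crude grid search stands in for a proper optimizer.

```python
# Toy receding-horizon MPC step: try candidate control inputs, roll the model
# forward over the horizon, and keep the input with the lowest tracking cost.
def mpc_step(model, y0, reference, horizon=5, candidates=None):
    if candidates is None:
        candidates = [i / 10.0 for i in range(-20, 21)]  # coarse grid over u
    best_u, best_cost = None, float("inf")
    for u in candidates:
        y, cost = y0, 0.0
        for _ in range(horizon):
            y = model(y, u)                  # multi-step-ahead prediction
            cost += (reference - y) ** 2     # quadratic tracking cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Stand-in "plant model": y_{k+1} = 0.5 y_k + u (a trained NN model would be used instead).
model = lambda y, u: 0.5 * y + u
```

In the architecture described above, a neural network controller would then be trained to reproduce the input chosen by this search, so that the expensive optimization is eventually bypassed.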
