IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 53, NO. 4, AUGUST 2006

An Algorithm for Extracting Fuzzy Rules Based on RBF Neural Network

Wen Li and Yoichi Hori, Fellow, IEEE

Abstract—A four-layer fuzzy-neural network structure and algorithms for extracting fuzzy rules from numeric data, based on the functional equivalence between radial basis function (RBF) networks and a simplified class of fuzzy inference systems, are proposed. The RBF neural network not only expresses the architecture of fuzzy systems clearly but also maintains the explanative characteristic of their linguistic meaning. The fuzzy partition algorithm of the input space, the inference algorithm, and the parameter tuning algorithm are also discussed. Simulation examples are given to illustrate the validity of the proposed algorithms.

Index Terms—Explanative characteristic, fuzzy rules, radial basis function (RBF) neural network.

Manuscript received August 30, 2004; revised October 18, 2004. Abstract published on the Internet May 18, 2006. This work was supported by the Science and Technology Fund of the Ministry of Education of China. W. Li was with the University of Tokyo, Tokyo 153-8505, Japan. She is now with the Electrical Engineering School, Dalian Jiaotong University, Dalian 116028, China (e-mail: lw6017@vip.sina.com). Y. Hori is with the Information and Electronics Division (Electrical Control System Engineering), Institute of Industrial Science, University of Tokyo, Tokyo 153-8505, Japan (e-mail: hori@iis.u-tokyo.ac.jp; y.hori@ieee.org). Digital Object Identifier 10.1109/TIE.2006.878305

I. INTRODUCTION

Essentially, system modeling is the task of building models from a combination of a priori knowledge and empirical data. When a complex system is to be modeled, usually the only available information is a collection of empirical data, which are inherently imprecise and incomplete because they are obtained by observing the system behavior or measuring some of its states. Several types of modeling cope with imprecise and incomplete data, such as fuzzy modeling and rough modeling. Fuzzy modeling based on numerical data, first explored systematically by Takagi and Sugeno [7], has found numerous successful applications in complex system modeling.

Considerable work on hybrids of fuzzy inference and neural networks has been done to integrate the excellent learning capability of neural networks with fuzzy inference systems. The resulting neuro-fuzzy modeling approaches combine the benefits of these two powerful paradigms and provide a powerful framework for extracting good-quality fuzzy rules from numerical data [2], [8], [9].

The error backpropagation (BP) neural network is widely used because its learning scheme is transparent and easy to understand. However, its classification capability degrades for patterns far from the training set or for new patterns. Radial basis function networks (RBFNs) have therefore recently been widely adopted for extracting fuzzy rules and for modeling fuzzy inference systems, because they possess a simple structure, good local approximation performance, particular resolvability, and functional equivalence with a simplified class of fuzzy inference systems [1]. This functional equivalence makes it possible to combine the features of the two systems, and a powerful type of neuro-fuzzy system has been built on it [2]. However, a fuzzy system that has been trained using learning algorithms may lose its interpretability or transparency, which is one of the most important features of fuzzy systems.

In this paper, an RBFN structure that can efficiently represent the interpretability of fuzzy systems is proposed based on an analysis of RBFNs. Learning algorithms for extracting fuzzy rules from this RBFN are then discussed in detail.

II. RBFN CONSTRUCTION AND CLASSIFICATION MECHANISM

The RBFN is a feedforward network built on the theory of functional approximation. Network learning is equivalent to searching for the surface in a multidimensional space that best matches the training data. The activation function in each node of the hidden layers forms a basis function of the matched surface, from which the name of the network originates.
It is known that a BP network is a typical global approximation network, whose output is determined by all the neurons of the network. Compared with the BP network, the RBFN is a local approximation network, i.e., the network output is determined by a few neurons lying in a certain local area of the input space. Although the size of an RBFN is larger than that of a BP network, its performance characteristics, such as learning speed, approximation ability, pattern recognition, and classification, are better than those of the BP network.

A. General RBFN Structure

A three-layer RBFN with N inputs, L nodes (neurons) in the hidden layer, and M neurons in the output layer is shown in Fig. 1. Although RBFNs are feedforward network models by structure, the way their parameters are initialized differs from the BP model, in which parameters are initialized randomly. The parameters of the RBFN, such as the centers and widths of the receptive fields, are determined according to the distribution of the sample data [3]. Radial basis functions (RBFs) are adopted as the activation functions of the hidden-layer nodes, and three RBFs are commonly used [4].

Fig. 1. RBF network structure.

In this paper, the Gaussian function is chosen as the activation function of the hidden-layer nodes:

$$\varphi_l(X) = \exp\!\left(-\frac{\|X - C_l\|^2}{\sigma_l^2}\right), \qquad l = 1, \ldots, L \tag{1}$$

where $X$ is an $N$-dimensional input vector, $C_l$ is a vector of the same dimension as $X$ (the center of the Gaussian function), $L$ is the number of nodes in the hidden layer, and $\sigma_l$, a scalar quantity, is the width of the Gaussian function. If the transfer functions of the output-layer nodes are linear, the output of each output node is

$$y_j(X) = \sum_{l=1}^{L} w_{lj}\,\varphi_l(X), \qquad j = 1, \ldots, M. \tag{2}$$

The normalized output is

$$y_j(X) = \frac{\sum_{l=1}^{L} w_{lj}\,\varphi_l(X)}{\sum_{l=1}^{L} \varphi_l(X)} \tag{3}$$

where $w_{lj}$ denotes the weight between the $j$th output and the $l$th node in the hidden layer. The performance index function $E$ is expressed as

$$E = \frac{1}{2}\left(y^t - y\right)^2 \tag{4}$$

where $y$ is the output of the network and $y^t$ is the target value.

The structural form of the RBFN shown in Fig. 1 is a fully connected network. The next section discusses other network forms.

B. RBFNs for Representing the Interpretability of Fuzzy Systems

It has been proved that the RBFN is functionally equivalent to a simplified class of fuzzy inference systems [1]; the only remaining difference between the two systems is interpretability, which is what makes fuzzy systems easy to understand [5]. Representing a fuzzy system with a general RBFN therefore weakens the outstanding interpretability of fuzzy systems, so most RBFNs used to extract fuzzy rules or to implement fuzzy inference in practice are transformed versions of the fully connected form. These transformed RBFNs improve the interpretability of networks used to express fuzzy systems. Two such transformed RBFNs are shown in Fig. 2 [5], [10].

Fig. 2. Two kinds of network structures.

The two networks have the same number of input variables. For the structure in Fig. 2(a), the number of nodes in hidden layer 1 equals the sum of the fuzzy partitions of all input variables, and each node linked to an input variable denotes a fuzzy subset of that variable, which clearly describes the fuzzy partition of the input space. Hidden layer 2 implements the fuzzy inference algorithm, and its number of nodes equals the number of fuzzy rules, i.e., each node is associated with one fuzzy rule. The overall outputs are produced by the output layer. For the structure in Fig. 2(b), the number of nodes in hidden layer 1 equals the product of the number of premises and the number of rules; the N nodes connected to a node in hidden layer 2 form the premise part of a fuzzy rule. The components of the fuzzy rules can therefore be described clearly with the two hidden layers. Hidden layer 2 is again the fuzzy inference layer, and the output layer has the same function as in Fig. 2(a).

From this comparison, it can be observed that the structure in Fig. 2(a) is not only simple but also capable of clearly describing the fuzzy partitions of the input space. However, the more input variables there are, the more difficult it becomes to express the fuzzy system and the worse the interpretability. With regard to expressing fuzzy partitions of the input space, the structure in Fig. 2(b) is less clear than that in Fig. 2(a), but it has greater representational power for the interpretability of fuzzy systems, which makes the described fuzzy systems more comprehensible than conventional neural systems.
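Before turning to the proposed structure, the computation in (1)–(3) can be made concrete with a short sketch of the fully connected RBFN of Fig. 1. This is only a minimal illustration under assumptions made here (NumPy arrays, the shown shapes, and toy dimensions); the function name and example values are not taken from the paper.

```python
import numpy as np

def rbfn_forward(X, C, sigma, W, normalized=True):
    """Forward pass of the fully connected RBFN of Fig. 1.

    X     : (N,)   input vector
    C     : (L, N) Gaussian centers, one per hidden node, cf. (1)
    sigma : (L,)   Gaussian widths
    W     : (L, M) weights between hidden and output layers
    """
    # (1): phi_l(X) = exp(-||X - C_l||^2 / sigma_l^2)
    dist2 = np.sum((C - X) ** 2, axis=1)
    phi = np.exp(-dist2 / sigma ** 2)

    # (2): y_j = sum_l w_lj * phi_l(X); (3): divide by sum_l phi_l(X)
    y = phi @ W
    if normalized:
        y = y / np.sum(phi)
    return y

# toy usage: N = 2 inputs, L = 3 hidden nodes, M = 1 output
rng = np.random.default_rng(0)
C = rng.uniform(-1, 1, size=(3, 2))
sigma = np.full(3, 0.5)
W = rng.normal(size=(3, 1))
print(rbfn_forward(np.array([0.2, -0.3]), C, sigma, W))
```

With normalized=True the sketch returns the normalized output of (3); with normalized=False it returns the plain weighted sum of (2).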
C. New Network Structure

From the analysis and comparison of the structures and properties of the two networks in Fig. 2, it can be observed that hidden layer 1 of the structure in Fig. 2(b) is simply an extension of hidden layer 1 of the structure in Fig. 2(a).

Based on this idea, a new RBFN structure is proposed for the purpose of improving the interpretability of fuzzy systems. Fig. 3 shows the new network structure, which integrates the properties of the two networks in Fig. 2. The new structure can not only represent the fuzzy partitions of the input space clearly but can also give a formal description of fuzzy systems intuitively. Compared with the structure in Fig. 2(a), the representational power of the new structure for interpretability is greatly improved, while the ability to clearly express fuzzy partitions is maintained. Because the weights between hidden layer 1 and hidden layer 2 are all 1, the behavior of the running network is not changed in nature. Although the structure outwardly increases the number of memory units, the parameter set of hidden layer 2 is in fact equal to that of hidden layer 1. Therefore, fast network execution and a small number of memory units can be obtained with a suitable learning algorithm and storage method. The network structure and the method of selecting the initial parameters have been discussed in [6].

Fig. 3. New RBF neural network.

III. LEARNING ALGORITHM DESIGN

Let $X = (x_1, x_2, \ldots, x_N)$ denote the $N$-dimensional input space, where $x_i$, $i = 1, 2, \ldots, N$, is an input variable, and let $Y = (y_1, y_2, \ldots, y_M)$ denote the $M$-dimensional output space, where $y_j$, $j = 1, 2, \ldots, M$, is an output variable. According to the network structure, the learning algorithm is composed of a fuzzy partition algorithm of the input space, a fuzzy inference algorithm, and an inference output algorithm, which are given in the following subsections.

A. Fuzzy Partition Algorithm of Input Space

The input layer and hidden layer 1 of the network form the fuzzy partition part, and the corresponding algorithm implements the fuzzy partition of the input space. Each input node $x_i$ is connected to $s_i$ nodes in hidden layer 1, where $s_i$ denotes the number of fuzzy partitions of variable $x_i$. In this paper, $s = (s_1, s_2, \ldots, s_N)$ denotes the numbers of fuzzy partitions of all input variables, $c_{ik_i}$, $k_i = 1, 2, \ldots, s_i$, denotes the weight from the $i$th input node to the $k_i$th node in hidden layer 1, and $k_i$ denotes the $k_i$th fuzzy partition of the $i$th input variable. The fuzzy partition labels of the $N$ input variables are denoted by $k = (k_1, k_2, \ldots, k_N)$, where each $k_i$ ranges over $1, \ldots, s_i$.

The Gaussian function (a kind of RBF) is adopted as the transfer function of the nodes in hidden layer 1; hence, the output of these nodes can be written as

$$f_{i,k_i}(x_i) = e^{-(x_i - c_{ik_i})^2 / \sigma_{ik_i}^2} \tag{5}$$

where $c_{ik_i}$ and $\sigma_{ik_i}$ are the center and width of the Gaussian function, respectively. The initial value of $c_{ik_i}$ is determined by the initial clustering centers [6].

In order to determine the parameter $\sigma_{ik_i}$, the concept of overlap degree is introduced. The overlap degree is the degree to which two fuzzy subsets overlap, measured by the maximum membership degree of the intersection produced by the two subsets. In the example shown in Fig. 4, the overlap degree is 0.5.

Fig. 4. Definition of overlap degree.

In fuzzy control, the overlap degree is an important factor that affects control performance. Generally, the overlap degree should be around 0.5; a value that is too large or too small may result in an unexpected control effect.
Considering this requirement, a formula for determining the initial width of the Gaussian function is deduced. Let the distance between two adjacent clustering centers be $d = \|c_i - c_{i+1}\|$, and let the RBF corresponding to clustering center $c_i$ be $f_i(x_i) = e^{-(x_i - c_i)^2/\sigma_i^2}$. In selecting the width $\sigma_i$, the overlap degree of adjacent fuzzy subsets should remain around 0.5. Therefore, when $x_i = c_i \pm d/2$,

$$0.3679 \le f_i(x_i) \le 0.7788 \tag{6}$$

i.e., $1/4 \le (x_i - c_i)^2/\sigma_i^2 = (d/2)^2/\sigma_i^2 \le 1$ (since $e^{-1} \approx 0.3679$ and $e^{-1/4} \approx 0.7788$). Thus, for the RBFN shown in Fig. 3, the width $\sigma_i$ should be selected as

$$\sigma_i = \frac{\|c_i - c_{i+1}\|}{\gamma}, \qquad 1 \le \gamma \le 2 \tag{7}$$

where $\gamma$ is the overlap coefficient.

Usually, a clustering center corresponds to a fuzzy subset that uses an RBF as its membership function. Taking Fig. 5 as an example, there are four clustering centers $c_1 = -3.5$, $c_2 = 0$, $c_3 = 1$, and $c_4 = 3$; each corresponds to an RBF with $\sigma_i = \|c_i - c_{i+1}\|/\gamma$, $\gamma = 1.5$, and together they define the fuzzy subsets (fuzzy partitions) over the region $[-4, 4]$.

Fig. 5. Fuzzy partition when membership functions are symmetric.

As shown in Fig. 5, because different intervals between adjacent clustering centers yield different widths $\sigma_{i-1}$ and $\sigma_i$ from (7), condition (6) cannot always be satisfied. In order to keep the overlap degree of adjacent fuzzy subsets at about 0.5, the membership functions should be reconstructed as

$$f_i(x_i) = \begin{cases} e^{-(x_i - c_i)^2/\sigma_{il}^2}, & x_i \le c_i \\ e^{-(x_i - c_i)^2/\sigma_{ir}^2}, & x_i > c_i \end{cases} \tag{8}$$

where

$$\sigma_{il} = \frac{\|c_i - c_{i-1}\|}{\gamma} \tag{9}$$

$$\sigma_{ir} = \frac{\|c_{i+1} - c_i\|}{\gamma}. \tag{10}$$

Equation (8) is the mathematical description of a nonsymmetric membership function. As long as the membership functions are defined by (8) and the widths $\sigma_{il}$ and $\sigma_{ir}$ are selected by (9) and (10), the overlap degree of two adjacent fuzzy subsets satisfies (6), no matter how different the neighboring intervals between adjacent clustering centers are. Fig. 6 shows a fuzzy partition under the same conditions as in Fig. 5, except that the membership functions are described by (8).

Fig. 6. Fuzzy partition when membership functions are nonsymmetric.

With the RBF reconstructed by (8), (5) is rewritten as

$$f_{i,k_i}(x_i) = \begin{cases} e^{-(x_i - c_{ik_i})^2/\sigma_{il,k_i}^2}, & x_i \le c_{ik_i} \\ e^{-(x_i - c_{ik_i})^2/\sigma_{ir,k_i}^2}, & x_i > c_{ik_i}. \end{cases} \tag{11}$$

The input layer and hidden layer 1 thus clearly describe the fuzzy partition status of each dimension of the input space for a control system, i.e., the definition of the fuzzy subsets (fuzzy linguistic values) of each input variable. Supposing the crisp input is $x^0 = (x_1^0, x_2^0, \ldots, x_N^0)$, the membership degree of each variable in the various fuzzy subsets can be calculated using (11). Hence, the fuzzification of the input variables is completed.

B. Forward Inference Algorithm

The inference task of the fuzzy system is implemented by hidden layer 2 and hidden layer 3. The $L$ node groups denote $L$ rules. In each group, there are $N$ input nodes $P_{ik_i}^l$, $i = 1, 2, \ldots, N$, $k_i \in \{1, 2, \ldots, s_i\}$, which correspond to the $N$ premises of the $l$th rule. The notation $P_{ik_i}^l$ denotes that the $i$th premise of the $l$th rule takes the $k_i$th linguistic value $A_{ik_i}^l$. When there is a crisp input $x^0 = (x_1^0, x_2^0, \ldots, x_N^0)$, $P_{ik_i}^l$ is the membership degree of $x_i^0$ in $A_{ik_i}^l$. In other words, for the outputs of hidden layer 2, $P_{ik_i}^l$, $l = 1, \ldots, L$, the following equation holds:

$$P_{ik_i}^l\!\left(x_i^0\right) = e^{-\left(x_i^0 - c_{i,k_i}\right)^2 / \sigma_{i,k_i}^2}, \qquad l = 1, \ldots, L. \tag{12}$$

The transfer function of each node $R_l$ in hidden layer 3 (the inference layer) is determined by the operation chosen for the fuzzy implication relation. In this paper, the product operation is selected as the transfer function of node $R_l$; thus, the output of node $R_l$ is

$$R_l = \prod_{i=1}^{N} P_{ik_i}^l. \tag{13}$$

The $M$ system outputs $y_j$, $j = 1, \ldots, M$, are composed of the $M$ consequents of a fuzzy rule. Using the weight $v_{lj}$ between the inference layer and the output layer to denote the $j$th consequent of the $l$th rule, the initial $v_{lj}$ is determined by the initial cluster centers of the sample data [6]. The output of the network may be represented in the general form

$$y_j = f(v_{lj}, R_l), \qquad j = 1, \ldots, M. \tag{14}$$
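The fuzzy partition and forward inference just described can be sketched in code: the widths follow (9) and (10), the asymmetric memberships follow (11), and the premise degrees and rule firing strengths follow (12) and (13). The function names, the array layout, and the handling of the two boundary subsets (which the paper does not spell out) are assumptions of this example, not prescriptions of the paper.

```python
import numpy as np

def make_partition(centers, gamma=1.5):
    """Left/right widths for the asymmetric memberships of (8)-(11).

    centers : sorted 1-D sequence of clustering centers of one input variable.
    The boundary subsets reuse the width of their single neighbor; the paper
    does not spell this case out, so that choice is an assumption here.
    """
    c = np.asarray(centers, dtype=float)
    gaps = np.abs(np.diff(c))                      # |c_i - c_{i+1}|
    sigma_l = np.empty_like(c)                     # (9):  |c_i - c_{i-1}| / gamma
    sigma_r = np.empty_like(c)                     # (10): |c_{i+1} - c_i| / gamma
    sigma_l[1:] = gaps / gamma
    sigma_r[:-1] = gaps / gamma
    sigma_l[0], sigma_r[-1] = sigma_r[0], sigma_l[-1]
    return c, sigma_l, sigma_r

def membership(x, c, sigma_l, sigma_r):
    """(11): asymmetric Gaussian membership of scalar x in every fuzzy subset."""
    sig = np.where(x <= c, sigma_l, sigma_r)
    return np.exp(-(x - c) ** 2 / sig ** 2)

def firing_strength(x0, partitions, rules):
    """(12)-(13): product of premise membership degrees for each rule.

    partitions : list of (c, sigma_l, sigma_r) tuples, one per input variable
    rules      : (L, N) int array; rules[l, i] is the index k_i of the fuzzy
                 subset used by the i-th premise of rule l
    """
    R = np.ones(len(rules))
    for i, (c, sl, sr) in enumerate(partitions):
        mu = membership(x0[i], c, sl, sr)          # memberships of x0_i
        R *= mu[rules[:, i]]                       # pick the premise subset per rule
    return R

# usage with the four centers of Fig. 5 on one variable and a second toy variable
p1 = make_partition([-3.5, 0.0, 1.0, 3.0])
p2 = make_partition([0.0, 2.0])
rules = np.array([[0, 0], [1, 1], [3, 1]])         # three illustrative rules
print(firing_strength(np.array([0.5, 1.2]), [p1, p2], rules))
```

For the four centers of Fig. 5 with $\gamma = 1.5$, the interior widths are the ones that define the partition of Fig. 6; only the outermost widths depend on the boundary assumption noted above.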

Summarizing the preceding discussion, if input $x^0$, singleton fuzzification, the product inference mechanism, and center-average defuzzification are used, then the output of the network is

$$y_j^0 = \frac{\sum_{l=1}^{L} v_{lj} R_l}{\sum_{l=1}^{L} R_l} = \frac{\sum_{l=1}^{L} v_{lj} \prod_{i=1}^{N} P_{ik_i}^l\!\left(x_i^0\right)}{\sum_{l=1}^{L} \prod_{i=1}^{N} P_{ik_i}^l\!\left(x_i^0\right)}. \tag{15}$$

IV. NETWORK MODIFYING ALGORITHM AND STEPS

A. Modifying Algorithm of Network Parameters

With the network output $y_j^0$ obtained from (15), the overall error of the network output can be calculated by the performance index function

$$E = \frac{1}{2} \sum_{j=1}^{M} \left(y_j^t - y_j^0\right)^2. \tag{16}$$

The network parameters are modified via a gradient-descent technique. The modifying algorithms of the weights in each layer are derived as follows.

The expression for modifying the weights between hidden layer 3 and the output layer is

$$\Delta v_{lj} = -\eta \frac{\partial E}{\partial v_{lj}} = -\eta \frac{\partial E}{\partial y_j} \frac{\partial y_j}{\partial v_{lj}} \tag{17}$$

where $0 < \eta < 1$ is the learning rate, and the partial derivative of $E$ with respect to $y_j$, for a specific $j$ ($j$th output), is

$$\frac{\partial E}{\partial y_j} = -\left(y_j^t - y_j\right). \tag{18}$$

Let

$$\delta_j = -\frac{\partial E}{\partial y_j}. \tag{19}$$

Because $y_j = \left(\sum_{l=1}^{L} R_l v_{lj}\right) / \left(\sum_{l=1}^{L} R_l\right)$, the following holds for a specific $l$:

$$\frac{\partial y_j}{\partial v_{lj}} = \frac{R_l}{\sum_{t=1}^{L} R_t}. \tag{20}$$

From (18)–(20), (17) becomes

$$\Delta v_{lj} = \eta\, \delta_j \frac{R_l}{\sum_{t=1}^{L} R_t}. \tag{21}$$

The outputs of the nodes in hidden layer 3 are computed by (13). All weights between hidden layer 2 and hidden layer 3 are 1 and are not modified. The nodes in hidden layer 2 are computed by (12).

The modifying formula for the weights between the input layer and hidden layer 1 is derived as

$$\Delta c_{i,k_i} = -\eta \frac{\partial E}{\partial c_{i,k_i}} = -\eta \sum_{j=1}^{M} \frac{\partial E}{\partial y_j} \frac{\partial y_j}{\partial R_l} \frac{\partial R_l}{\partial P_{i,k_i}^l} \frac{\partial P_{i,k_i}^l}{\partial c_{i,k_i}}. \tag{22}$$

Each output $y_j$ is related to all the $R_l$. Hence,

$$\frac{\partial y_j}{\partial R_l} = \frac{\partial}{\partial R_l}\!\left(\frac{\sum_{t=1}^{L} v_{tj} R_t}{\sum_{t=1}^{L} R_t}\right) = \frac{v_{lj} \sum_{t=1}^{L} R_t - \sum_{t=1}^{L} v_{tj} R_t}{\left(\sum_{t=1}^{L} R_t\right)^2} = \frac{v_{lj} - y_j}{\sum_{t=1}^{L} R_t}. \tag{23}$$

For a specific $l$,

$$\frac{\partial R_l}{\partial P_{i,k_i}^l} = \frac{\partial}{\partial P_{i,k_i}^l} \prod_{i=1}^{N} P_{i,k_i}^l. \tag{24}$$

For a specific $i$, (24) becomes

$$\frac{\partial R_l}{\partial P_{i,k_i}^l} = \prod_{\substack{i' = 1 \\ i' \ne i}}^{N} P_{i',k_{i'}}^l \tag{25}$$

and, for $\partial P_{i,k_i}^l / \partial c_{i,k_i}$ with specific $i$ and $k_i$,

$$\frac{\partial P_{i,k_i}^l}{\partial c_{i,k_i}} = \frac{\partial}{\partial c_{i,k_i}}\, e^{-\left(x_i^0 - c_{i,k_i}\right)^2 / \sigma_{i,k_i}^2} = \frac{2\left(x_i^0 - c_{i,k_i}\right)}{\sigma_{i,k_i}^2}\, P_{i,k_i}^l. \tag{26}$$

Considering (19), (23), (25), and (26), (22) becomes

$$\Delta c_{i,k_i} = \eta \sum_{j=1}^{M} \delta_j \frac{v_{lj} - y_j}{\sum_{t=1}^{L} R_t} \prod_{\substack{i' = 1 \\ i' \ne i}}^{N} P_{i',k_{i'}}^l \,\frac{2\left(x_i^0 - c_{i,k_i}\right)}{\sigma_{i,k_i}^2}\, P_{i,k_i}^l = \eta\, \delta_l \prod_{i=1}^{N} P_{i,k_i}^l \,\frac{2\left(x_i^0 - c_{i,k_i}\right)}{\sigma_{i,k_i}^2} = \eta\, \delta_l R_l \frac{2\left(x_i^0 - c_{i,k_i}\right)}{\sigma_{i,k_i}^2} \tag{27}$$

where

$$\delta_l = \sum_{j=1}^{M} \delta_j \frac{v_{lj} - y_j}{\sum_{t=1}^{L} R_t}. \tag{28}$$

Similarly, the following equation can be derived:

$$\Delta \sigma_{i,k_i} = -\eta \frac{\partial E}{\partial \sigma_{i,k_i}} = -\eta \sum_{j=1}^{M} \frac{\partial E}{\partial y_j} \frac{\partial y_j}{\partial R_l} \frac{\partial R_l}{\partial P_{i,k_i}^l} \frac{\partial P_{i,k_i}^l}{\partial \sigma_{i,k_i}} = \eta\, \delta_l R_l \frac{2\left(x_i^0 - c_{i,k_i}\right)^2}{\sigma_{i,k_i}^3}. \tag{29}$$

B. Modifying Steps of Network Parameters

According to the preceding subsection, the modifying steps of the network parameters can be summarized as follows.

For $v_{lj}$:

Step 1) compute $\delta_j$ by (18) and (19);
Step 2) compute $\partial y_j / \partial v_{lj}$ by (20);
Step 3) compute $\Delta v_{lj}$ by (21);
Step 4) update $v_{lj}$ by

$$v_{lj}(t+1) = v_{lj}(t) + \Delta v_{lj}(t). \tag{30}$$

For $c_{i,k_i}$:

Step 1) compute $\delta_l$ by (28);
Step 2) compute $\Delta c_{i,k_i}$ by (27);
Step 3) update $c_{i,k_i}$ by

$$c_{ik_i}(t+1) = c_{ik_i}(t) + \Delta c_{ik_i}(t). \tag{31}$$

For $\sigma_{i,k_i}$, the modifying step is obtained from (29), and $\sigma_{i,k_i}$ is updated by

$$\sigma_{ik_i}(t+1) = \sigma_{ik_i}(t) + \Delta \sigma_{ik_i}(t). \tag{32}$$

It should be noted that $\sigma_{i,k_i}$ should continue to satisfy (7).
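One full update cycle, combining the forward pass of (12), (13), and (15) with the modifying steps (18)–(21) and (27)–(32), can be sketched as a single training step. The sketch below assumes symmetric Gaussian memberships as written in (12), an equal number of partitions per input variable so that the centers and widths fit rectangular arrays, and a fixed rule table; these layout choices, the function name, and the toy values in the usage are assumptions of the example rather than prescriptions of the paper.

```python
import numpy as np

def train_step(x0, y_t, c, sigma, v, rules, eta=0.1):
    """One gradient-descent update of the network parameters (in place).

    x0    : (N,)        crisp input sample
    y_t   : (M,)        target outputs
    c     : (N, S)      centers c_{i,k}   (S partitions per input, assumed equal)
    sigma : (N, S)      widths sigma_{i,k}; symmetric Gaussians per (12) assumed
    v     : (L, M)      consequent weights v_{lj}
    rules : (L, N) int  rules[l, i] = k_i, the partition index of premise i of rule l
    """
    L, N = rules.shape
    idx = np.arange(N)

    # (12): premise membership degrees P^l_{i,k_i}
    cp = c[idx, rules]                 # (L, N) centers of the premises
    sp = sigma[idx, rules]             # (L, N) widths of the premises
    P = np.exp(-(x0 - cp) ** 2 / sp ** 2)

    # (13), (15): firing strengths and center-average output
    R = np.prod(P, axis=1)             # (L,)
    sR = np.sum(R)
    y0 = (R @ v) / sR                  # (M,)

    # (18), (19): delta_j = y^t_j - y^0_j ;  (28): delta_l
    delta_j = y_t - y0
    delta_l = (delta_j * (v - y0)).sum(axis=1) / sR   # (L,)

    # (21), (30): consequent weights
    v += eta * np.outer(R / sR, delta_j)

    # (27), (31): centers ;  (29), (32): widths
    dc = eta * (delta_l * R)[:, None] * 2 * (x0 - cp) / sp ** 2
    ds = eta * (delta_l * R)[:, None] * 2 * (x0 - cp) ** 2 / sp ** 3
    np.add.at(c, (idx, rules), dc)     # scatter per-rule updates onto shared subsets
    np.add.at(sigma, (idx, rules), ds)
    return y0

# toy usage: 2 inputs with 3 partitions each, 4 rules, 1 output
rng = np.random.default_rng(0)
c = np.array([[0.0, 0.5, 1.0], [0.0, 0.5, 1.0]])
sigma = np.full((2, 3), 0.3)
v = rng.normal(size=(4, 1))
rules = np.array([[0, 0], [1, 1], [2, 1], [2, 2]])
print(train_step(np.array([0.4, 0.7]), np.array([0.25]), c, sigma, v, rules))
```

As the paper notes after (32), $\sigma_{i,k_i}$ should continue to satisfy (7); a clamp on sigma at the end of the step could enforce this.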

V. EXAMPLE AND CONCLUSION

In order to validate the algorithm, a simulation is carried out with the function

$$y = \frac{64 - 81\left((x_1 - 0.6)^2 + (x_2 - 0.5)^2\right)}{9} - 0.5.$$

After training with 1000 samples, four groups of curves of the network output are shown in Figs. 7–10, and the corresponding data are given in Tables I–IV, in which $x_1$, $x_2$, $y^t$, and $y^0$ denote input 1, input 2, the target output, and the network output, respectively. Each group has 100 test points. The average relative error is less than 5%.

Fig. 7. Simulation curve using group 1 data.
Fig. 8. Simulation curve using group 2 data.
Fig. 9. Simulation curve using group 3 data.
Fig. 10. Simulation curve using group 4 data.

TABLE I. Simulation result for group 1.
TABLE II. Simulation result for group 2.
TABLE III. Simulation result for group 3.
TABLE IV. Simulation result for group 4.

As shown in the figures and tables, the algorithm for extracting fuzzy rules based on the RBFN is effective.

REFERENCES

[1] J. S. R. Jang and C. T. Sun, "Functional equivalence between radial basis functions and fuzzy inference systems," IEEE Trans. Neural Netw., vol. 4, no. 1, pp. 156–158, Jan. 1993.
[2] J. S. R. Jang and C. T. Sun, "Neuro-fuzzy modeling and control," Proc. IEEE, vol. 83, no. 3, pp. 378–406, Mar. 1995.
[3] Q. Zhao and Z. Bao, "On the classification mechanism of a radial basis function network," J. China Inst. Commun., vol. 17, no. 2, pp. 86–93, Mar. 1996.
[4] B.-T. Miao and F.-L. Chen, "Applications of radius basis function neural networks in scattered data interpolation," J. China Univ. Sci. Technol., vol. 31, no. 2, pp. 135–142, Apr. 2001.
[5] Y. Jin and B. Sendhoff, "Extracting interpretable fuzzy rules from RBF networks," Neural Process. Lett., vol. 17, no. 2, pp. 149–164, Apr. 2003.
[6] H. Sun and W. Li, "A method of selection initial cluster centers for cluster neural networks," J. Syst. Simul., vol. 16, no. 4, pp. 775–777, Apr. 2004.
[7] T. Takagi and M. Sugeno, "Fuzzy identification of systems and its application to modeling and control," IEEE Trans. Syst., Man, Cybern., vol. SMC-15, no. 1, pp. 116–132, Jan./Feb. 1985.
[8] L. X. Wang and J. Mendel, "Generating fuzzy rules by learning from examples," IEEE Trans. Syst., Man, Cybern., vol. 22, no. 6, pp. 1414–1427, Nov./Dec. 1992.
[9] D. A. Linkens and M.-Y. Chen, "Input selection and partition validation for fuzzy modeling using neural network," Fuzzy Sets Syst., vol. 107, no. 3, pp. 299–308, Nov. 1999.
[10] X. Lin, "Fast extracting fuzzy if–then rules based on RBF networks," Syst. Eng.—Theory Methodology Appl., vol. 10, no. 2, pp. 145–149, 2001.

Wen Li received the B.S. degree in industry automation and the M.S. degree in railway traction electrification and automation from Dalian Jiaotong University, Dalian, China, in 1982 and 1992, respectively, and the Eng.D. degree in control theory and control engineering from Harbin Institute of Technology, Harbin, China, in 1999.

Since 1982, she has been with the Department of Electrical Engineering, Dalian Jiaotong University, where she has been a Professor since 2001. She was a Visiting Researcher with the University of Tokyo, Tokyo, Japan, from October 2004 to October 2005. Her research interests include intelligent control, fuzzy systems modeling, and control theory and its industrial applications.

Prof. Li is a member of the China Railway Society and the China Electrotechnical Society.

Yoichi Hori (S'81–M'83–SM'00–F'05) received the B.S., M.S., and Ph.D. degrees in electrical engineering from the University of Tokyo, Tokyo, Japan, in 1978, 1980, and 1983, respectively.

In 1983, he joined the Department of Electrical Engineering, University of Tokyo, as a Research Associate and was later promoted to Assistant Professor, Associate Professor, and, in 2000, Professor. In 2002, he joined the Institute of Industrial Science, University of Tokyo, as a Professor in the Information and Electronics Division (Electrical Control System Engineering). During 1991–1992, he was a Visiting Researcher with the University of California, Berkeley. His research interests include control theory and its industrial applications to motion control, mechatronics, robotics, and electric vehicles.

Prof. Hori served as the Treasurer of the IEEE Japan Council and Tokyo Section during 2001–2002. He is currently the Vice President of the Institute of Electrical Engineers of Japan (IEE-Japan) Industry Applications Society. He is a member of IEE-Japan, the Japan Society of Mechanical Engineers, the Society of Instrument and Control Engineers, the Robotic Society of Japan, and the Society of Automotive Engineers of Japan.
