Journal of Theoretical and Applied Information Technology
10th March 2013. Vol. 49 No. 1
2005 - 2013 JATIT & LLS. All rights reserved.
ISSN: 1992-8645   www.jatit.org   E-ISSN: 1817-3195

ESTABLISHING STRUCTURE FOR ARTIFICIAL NEURAL NETWORKS BASED ON FRACTAL

1 Yang Zong-chang
School of Information and Electronical Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
E-mail: 1 yzc233@163.com

ABSTRACT

The artificial neural network (ANN) is a widely used mathematical model composed of interconnected simple artificial neurons, and it has been applied in a wide variety of applications. However, determining the number of neurons in the hidden layers is an important part of deciding the overall neural network architecture, and many rule-of-thumb methods for choosing it have been suggested. In this study, to address the puzzling problem of establishing a structure for artificial neural networks (ANN), two concepts called the fractal dimension of connection complexity (FDCC) and the fractal dimension of expectation complexity (FDEC) are introduced from a macroscopic view. A criterion reference for establishing the ANN structure based on the two proposed concepts is then presented: the FDCC might not be lower than the FDEC, and when the FDCC is equal or approximate to the FDEC, the ANN structure might be an optimal one. The proposed criterion is examined with good results.

Keywords: Artificial Neural Networks, Structure Establishing, Connection Complexity, Fractal Dimension.

1. INTRODUCTION

An artificial neural network (ANN), also called a neural network, is a widely used mathematical model composed of an interconnected group of simple artificial neurons (also called nodes, processing elements or units) connected together to form a network that mimics a biological neural network.
An ANN uses a connectionist approach to computation in processing information, and it is used with algorithms designed to change the strength of the connections in the network to yield a desired signal flow. In most cases, an artificial neural network can be seen as an adaptive system that alters its structural weights during a learning step. ANNs are widely used to model complex relationships between inputs and outputs, and complex global behavior can be determined by the connections between the processing elements and the element parameters in the network.

Since its renaissance in the early 1980s, artificial neural network research has received a great deal of attention from science and technology circles around the world [1-7]. Besides the attention given to ANNs, fairly good performance has also been reported for their nonlinear learning capability [1-7]. However, determining the number of neurons in the hidden layers [8-18] is a very important part of deciding the overall neural network architecture for many practical problems employing neural networks [19-20], and how to determine the structure, especially of the hidden layers, remains a puzzling problem. Many methods [8-18] for determining the appropriate number of neurons to use in the hidden layers have been introduced with varied degrees of success, such as: a method for estimating the number of hidden neurons based on a decision-tree algorithm in [9]; a network structure equation based on an error function in [11]; a hidden-layer training algorithm based on an energy-space approaching strategy in [12]; an algorithm using an incremental training procedure in [15]; guidelines based on a geometrical interpretation of the multilayer perceptron (MLP) for selecting the architecture of the MLP in [16]; employing the singular value decomposition to estimate the number of hidden neurons in a feed-forward neural network in [17]; and some rule-of-thumb methods in [18].
Among the proposed solutions to this problem, some focus on special training procedures that need a large amount of computation and are inconvenient for engineering applicability, while the rule-of-thumb methods fall short of generality.

A fractal can be viewed as a mathematical set that usually has a fractal dimension exceeding its topological dimension, and the dimension may be a fractional value falling between two integers. Fractals are typically self-similar patterns, where self-similar indicates that a fractal may appear exactly the same when measured at various scales.

Addressing the important and puzzling problem of determining the number of neurons in the hidden layers when deciding the overall neural network architecture, a fractal-based approach is investigated in this study from a more macroscopic view.

2. FRACTAL AND FRACTAL DIMENSION

In the well-known problem of the length of the British coastline, the author of [21], Mandelbrot, discussed the research published by Richardson. Richardson had observed and found the famous formula

    L(r) = K r^(1 - D_f)                                              (1)

The deeper meaning of the exponent D_f was not specified by Richardson. In [21], Mandelbrot discussed self-similar curves, which have fractional dimensions between 1 and 2. This concept provides a new vision for describing many objects around us that have structure on many scales; familiar examples include coastlines, plant distributions, rivers, architecture, etc.

By taking the logarithm of Eq. (1) and performing the necessary mathematical operations, we get

    D_f = log(L(r)) / log(1/r)                                        (2)

Simply speaking, fractals are statistically self-similar, where self-similar means that a fractal may appear exactly the same when measured at various scales.

Inspired by this, we present a fractal-based solution for determining the number of neurons in the hidden layers of an ANN in the following section.

3. TWO CONCEPTS OF FRACTAL DIMENSION FOR ANN

The Mapping Neural Network Existence Theorem [22] states that any given continuous function Φ: I^n -> R^m, Y = Φ(X) (I = [0, 1]), where X and Y are vectors with n and m components respectively, can be implemented by a 3-layer neural network with n inputs, one hidden layer of 2n + 1 neurons, and m outputs, namely the structure (n, 2n + 1, m). The case holds under the ideal condition for I = [0, 1]. The Mapping Neural Network Existence Theorem [22] indicates the inherent mapping property of an ANN [23, 24].

Suppose an ANN with N inputs and M outputs. By ignoring the hidden layers and specific structures, we can get a simplified topological structure of the neural network in which only 2 layers, the input layer and the output layer, are considered. Its expected implementation function can be seen as one kind of "mapping" function, illustrated in Fig. 1, where A_N denotes the input layer/unit and A_M the output layer/unit:

    A_N --(Mapping)--> A_M

Fig. 1 Simplified topological structure of neural networks

The fractal dimension is a mathematical concept that measures the geometrical complexity of an object. In what follows, we propose two conceptions based on the fractal dimension for the ANN.

Definition 1: We define Ω_E, called the "Expectation Complexity" of the neural network in the simplified topological structure (Fig. 1), by

    Ω_E = (1 + ρ_E^k) S_E    (k > 0)                                  (3)

where S_N = Size(A_N) denotes the size of the input layer and S_M = Size(A_M) that of the output layer. The ratio

    ρ_E = Max(S_N, S_M) / Min(S_N, S_M)

indicates the mapping complexity that is expected to be implemented; S_E = S(N, M) = Size(A_N, A_M), the size of both the input and output layers, denotes the structural complexity; and k > 0 is a user-defined parameter.

In the simplified topological structure (Fig. 1), the Expectation Complexity Ω_E is a measurement of the ANN with only the 2 layers of the simplified topological structure considered. We take its scale of measurement γ_E by 1/γ_E = (1 + 1/ρ_E), in consideration of the possibility that ρ_E = 1, for which log(γ_E) = 0 if γ_E = ρ_E were taken.
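As a side illustration (not part of the paper), Eq. (1) can be checked numerically: given measured lengths L(r) at several scales r, a log-log least-squares fit recovers D_f. The helper below is a minimal sketch using synthetic data; the function name and the chosen values of K and D_f are illustrative assumptions.

```python
import math

def richardson_dimension(scales, lengths):
    """Estimate D_f from Richardson's law L(r) = K * r**(1 - D_f).

    Taking logs gives log L(r) = log K + (1 - D_f) * log r, so a
    least-squares fit of log L against log r has slope (1 - D_f)."""
    xs = [math.log(r) for r in scales]
    ys = [math.log(L) for L in lengths]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 - slope

# Synthetic measurements generated from the law with D_f = 1.25, K = 10:
scales = [1.0, 0.5, 0.25, 0.125]
lengths = [10.0 * r ** (1 - 1.25) for r in scales]
print(richardson_dimension(scales, lengths))  # ~1.25
```

With noisy real measurements the same fit applies unchanged; the regression simply absorbs the prefactor K into the intercept.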

Then we define the so-called "Fractal Dimension of Expectation Complexity" by

    D_E = log(Ω_E) / log(1/γ_E)                                       (4)

The typical structure of a multi-layer fully connected neural network consists of the input layer, the hidden layers and the output layer (Fig. 2).

Fig. 2 Structure of multi-layer neural networks (input layer X, hidden layer H, output layer Y)

Suppose N inputs in the input layer (unit), M outputs in the output layer (unit), and l hidden layers (units) H_i (i = 1..l). There are l + 2 layers in the ANN structure (A_N, H_1, H_2, ..., H_l, A_M) (Fig. 3). The number of neurons in each layer is (L_1, L_2, ..., L_{l+1}, L_{l+2}), where L_1 = size(A_N) = N, L_{l+2} = size(A_M) = M, and so on.

Fig. 3 Topological structure of multi-layer neural networks

Fig. 3 shows that the topological structure has l + 1 connected sub fractal structures, from (A_N, H_1) to (H_l, A_M). We then define its so-called Connection Complexity as follows.

Definition 2: Define Ω_C, called the "Connection Complexity", for the neural network with the multi-layer connected topological structure (Fig. 3), with the connection complexity of each sub-structure denoted Ω_i (1 ≤ i ≤ l + 1), by

    Ω_C = {Ω_1, Ω_2, ..., Ω_{l+1}}                                    (5)

In this case, the connection complexity Ω_C is measured on the l + 2 layers of the ANN (A_N, H_1, H_2, ..., H_l, A_M), which contains l + 1 connected sub-structures (Fig. 3). According to Eq. (3), we have

    Ω_i = (1 + ρ_i^k) S(H_i, H_{i+1})    (1 ≤ i ≤ l + 1)              (6)

where ρ_i = Σ size(H_j) (H_j ∈ Ω_i) / (N + M), the summed sizes of the hidden layers belonging to sub-structure Ω_i divided by (N + M), and S(H_i, H_{i+1}) = size(H_i, H_{i+1}).

Finally, we define D_C, called the "Fractal Dimension of Connection Complexity", as the sum of the fractal dimensions over all the sub-structures:

    D_C = Σ_{i=1..l+1} log(Ω_i) / log(1/γ_i)                          (7)

where, in Eq. (7), γ_i is taken by 1/γ_i = (1 + 1/ρ_i), in consideration of the possibility that ρ_i = 1, for which log(γ_i) = 0 if γ_i = ρ_i were taken.

4. DETERMINING OF ANN PARAMETERS

Based on the two presented concepts of fractal dimension for the ANN in Section 3, namely the fractal dimension of expectation complexity (D_E) (Eq. (4)) and the fractal dimension of connection complexity (D_C) (Eq. (7)), we propose a criterion reference for establishing the ANN structure as follows.

To establish ANN structures, the fractal dimension of connection complexity (D_C) might not be smaller than the fractal dimension of expectation complexity (D_E):

    D_C ≥ D_E                                                         (8)

When the fractal dimension of connection complexity (D_C) is equal or approximate to the fractal dimension of expectation complexity (D_E), i.e., D_C = D_E or D_C ≈ D_E, the established structure might be an optimal one.
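As an illustration (not from the paper), Definitions 1 and 2 can be turned into a short numeric sketch. The extracted formulas are ambiguous in places, so the code assumes one reading: S_E = N + M, Ω = (1 + ρ^k)·S with k used as an exponent, and ρ_i taken as the summed sizes of the hidden layers inside sub-structure i divided by (N + M). With k = 2 this reading reproduces the values reported later in the paper, e.g. D_E = 12.8228 for N = 3, M = 1 and D_C = 11.1700 for the structure (3, 2, 2, 3, 1) in Table 5; the function names are my own.

```python
import math

def expectation_dimension(N, M, k=2):
    """D_E of Eq.(3)-(4), reading S_E as N + M (an assumption that
    matches the reported D_E = 12.8228 for N = 3, M = 1)."""
    rho = max(N, M) / min(N, M)           # rho_E = Max(S_N, S_M)/Min(S_N, S_M)
    omega = (1 + rho ** k) * (N + M)      # Omega_E = (1 + rho_E**k) * S_E
    return math.log(omega) / math.log(1 + 1 / rho)   # 1/gamma_E = 1 + 1/rho_E

def connection_dimension(layers, k=2):
    """D_C of Eq.(5)-(7) for layer sizes (N, H_1, ..., H_l, M).

    Each adjacent pair of layers is one sub-structure.  rho_i is read
    here as the summed sizes of the hidden layers inside the pair
    divided by (N + M), and S(H_i, H_{i+1}) as the pair's total size."""
    N, M = layers[0], layers[-1]
    hidden = set(range(1, len(layers) - 1))   # indices of hidden layers
    D_C = 0.0
    for i, (a, b) in enumerate(zip(layers, layers[1:])):
        rho = sum(layers[j] for j in (i, i + 1) if j in hidden) / (N + M)
        omega = (1 + rho ** k) * (a + b)      # Eq.(6)
        D_C += math.log(omega) / math.log(1 + 1 / rho)   # one term of Eq.(7)
    return D_C

# Table 5's example: D_E = 12.8228 for (N, M) = (3, 1), and
# D_C = 11.1700 for the structure (3, 2, 2, 3, 1):
print(round(expectation_dimension(3, 1), 4))            # 12.8228
print(round(connection_dimension((3, 2, 2, 3, 1)), 4))  # 11.17
```

The criterion of Eq. (8) then compares the two numbers: here D_C < D_E, so under this reading (3, 2, 2, 3, 1) narrowly fails the feasibility side of the criterion, consistent with the negative entry for it in Table 5.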

5. EXPERIMENTAL RESULTS

The presented criterion reference for establishing the ANN structure is applied to the following tests.

In the references [9-13], with specific cases, the authors reported their choices for establishing the structures of artificial neural networks based on their presented methods. Our solutions, obtained with the proposed fractal-based criterion for the ANNs given in [9-13], are compared with their choices in Table 1 - Table 5, respectively, where k = 2 is taken in Eq. (3) and Eq. (6), (N, M) denotes an ANN with N nodes in the input layer and M nodes in the output layer, H denotes the hidden-layer size to be determined, and D denotes the difference D_C - D_E.

TABLE 1: Experimental Result (3-Layer ANN) Comparing With That Of The Reference Of [9]
ANN structure: (N = 8, H, M = 7)
  Size(H):  1        2        3        4        5
  D:        -4.0916  -3.5181  -2.9701  -2.4153  -1.8425
  Size(H):  6        7        8        9        10
  D:        -1.2463  -0.6237  0.0266   0.7051   1.4119
  Size(H*) (H*, our optimal): 8
  Our suboptimal, size(H*): 7 or 9
  The choice (result) in the reference [9]: 8 or 7

TABLE 2: Experimental Result (3-Layer ANN) Comparing With That Of The Reference Of [10]
ANN structure: (N = 4, H, M = 2)
  Size(H):  1        2        3        4        5
  D:        -6.9686  -5.9439  -4.7459  -3.3609  -1.7960
  Size(H):  6        7        8        9        10
  D:        -0.0665  1.8103   3.8178   5.9417   8.1694
  Size(H*) (H*, our optimal): 7
  Our suboptimal, size(H*): 6 or 8
  The choice (result) in the reference [10]: 9

TABLE 3: Experimental Result (3-Layer ANN) Comparing With That Of The Reference Of [11]
ANN structure: (N = 2, H, M = 1)
  Size(H):  1        2        3        4        5
  D:        -5.2344  -3.1643  -0.3569  3.0501   6.9294
  Size(H):  6        7        8        9        10
  D:        11.1876  15.7592  20.5976  25.6681  30.9446
  Size(H*) (H*, our optimal): 4
  Our suboptimal, size(H*): 3 or 5
  The choice (result) in the reference [11]: 5

TABLE 4: Experimental Result (3-Layer ANN) Comparing With That Of The Reference Of [12]
ANN structure: (N = 2, H, M = 3)
  Size(H):  1        2        3        4        5
  D:        -4.0274  -2.8298  -1.3633  0.3712   2.3493
  Size(H):  6        7        8        9        10
  D:        4.5408   6.9172   9.4542   12.1322  14.9352
  Size(H*) (H*, our optimal): 4
  Our suboptimal, size(H*): 3
  The choice (result) in the reference [12]: 3

TABLE 5: Example Of Establishing Structure For Multi-Layer ANN [11]
The input-layer size N = 3, the output-layer size M = 1, D_E = 12.8228
  l = 1: our optimal structure (3, 7, 1), D = 3.0750
  l = 2: our optimal structure (3, 4, 3, 1), D = 0.5541
  l = 3: our optimal structures (3, 2, 2, 4, 1) (D_C = 13.8049, D = 0.9822) or (3, 2, 2, 3, 1) (D_C = 11.1700, D = -1.6528)
  The structure chosen in the reference [11]: (3, 2, 2, 3, 1)

The results in Table 1 - Table 5 show that the presented fractal-based criterion for establishing the structure of an ANN yields satisfying results, which agree well with the solutions reported in the references [9-13].

6. CONCLUSION AND FUTURE STUDY

The artificial neural network is a well-known computational model that is composed of an interconnected group of simple artificial neurons and tries to simulate some properties of biological neural networks with the aim of solving particular tasks.

In the artificial intelligence field, artificial neural networks have been applied successfully to a wide variety of fields. However, determining the number of neurons in the hidden layers is a very important part of deciding the overall neural network architecture for many practical problems employing neural networks.

The fractal is a classical mathematical concept: fractals are typically self-similar patterns, where self-similar indicates that a fractal may appear exactly the same at varied scales. Fractal patterns with various degrees of self-similarity have been rendered or found in nature, science and technology.

Addressing the important and puzzling problem of determining the number of neurons in the hidden layers of an ANN, in this study we introduce a fractal-based approach from a more macroscopic view.

Regarding the ANN structure, two concepts called the fractal dimension of connection complexity (FDCC) and the fractal dimension of expectation complexity (FDEC) are proposed. A criterion reference for establishing the ANN structure is then presented: the FDCC might not be lower than the FDEC, and when the FDCC is equal or approximate to the FDEC, the ANN structure might be an optimal one.

The proposed approach is examined, and the results show that the presented fractal-based criterion reference for establishing the structures of ANNs yields satisfying results, which agree well with the optimal solutions obtained by employing other different methods.

Further extending and improving the proposed fractal-based approach for establishing the structures of ANNs remains part of our future study.

ACKNOWLEDGMENT

The project was supported by the Scientific Research Fund of Hunan Provincial Education Department (09C399) and the research fund of Hunan University of Science and Technology (E50811).

REFERENCES:

[1] LILIANA and T. A. Napitupulu, "Artificial Neural Network Application in Gross Domestic Product Forecasting: an Indonesia Case", Journal of Theoretical and Applied Information Technology, Vol. 45, No. 2, 2012, pp. 410-415.
[2] X. F. Li and J. P. Xu, "The Improvement of BP Artificial Neural Network Algorithm and Its Application", International Conference on E-Business and E-Government (ICEE), May 07-09, 2010, pp. 2568-2571.
[3] W. J. Shi, X. Z. Wang, D. Q. Zhang, F. Wang, and M. Y. Ma, "A Novel FOCAL Technique based on BP-ANN", International Journal for Light and Electron Optics, Vol. 117, No. 4, 2006, pp. 145-150.
[4] X. Ma, "Interest Degree Analysis Based on Browsing Behaviours", Journal of Theoretical and Applied Information Technology, Vol. 45, No. 2, 2012, pp. 587-592.
[5] L. Li, Z.-W. Liu, X.-Y. Wang, and J.-P. Xu, "Remote Monitoring and Intelligent Analysis Platform for Water Quality in Lake", Journal of Theoretical and Applied Information Technology, Vol. 47, No. 2, 2013, pp. 594-597.
[6] L. Gao, Y. H. Zhang, M. Zhang, L. M. Shao, and J. X. Xie, "A Multi-step Prediction Model Based on Interpolation and Adaptive Time Delay Neural Network for Time Series", Journal of Theoretical and Applied Information Technology, Vol. 47, No. 2, 2013, pp. 870-874.
[7] L. Zhao and Y. X. Mao, "Flaw Identification of Metal Material in Eddy Current Testing using Neural Network Optimized by Particle Swarm Optimization", Journal of Theoretical and Applied Information Technology, Vol. 47, No. 1, 2013, pp. 261-265.
[8] M. Ettaouil, M. Lazaar, and Y. Ghanou, "Architecture Optimization Model for the Multilayer Perceptron and Clustering", Journal of Theoretical and Applied Information Technology, Vol. 47, No. 1, 2013, pp. 064-072.
[9] H.-C. Yuan, "A Novel Method for Estimating the Number of Hidden Neurons of the Feedforward Neural Networks", Journal of Chinese Computer Systems, Vol. 24, No. 4, 2003, pp. 658-660.
[10] S.-L. Pang, "Study on Multilayer Perceptron Credit Scoring Model", Acta Scientiarum Naturalium Universitatis Sunyatseni, Vol. 42, No. 4, 2003, pp. 119-122.
[11] O.-G. Liu, "Research on A Structure of Multi-Layer Forward Artificial Neural Network", Journal of Natural Science of Hunan Normal University of China, Vol. 27, No. 1, 2004, pp. 27-30.
[12] R.-Y. Cui, "A Hidden Layer Training Algorithm for Three-Layered Feedforward Neural Networks Based on Energy Space Approaching Strategy", Jisuanji Yanjiu yu Fazhan / Computer Research and Development, Vol. 40, No. 7, 2003, pp. 908-912.
[13] X.-M. Li, "A New Method to Determine Hidden Node Number in Neural Network", Journal of Jishou University of China (Natural Science Edition), Vol. 23, No. 1, 2002, pp. 90-91.
[14] H.-X. Zhou, "An improved algorithm on hidden nodes in multi-layer feed-forward neural networks", Journal of Zhejiang Normal University, Vol. 25, No. 3, 2003, pp. 269-271.

[15] D. R. Liu, "A new learning algorithm for feedforward neural networks", Intelligent Control, Sept. 2001, pp. 39-44.
[16] C. Xiang, S. Q. Ding, and T. H. Lee, "Geometrical interpretation and architecture selection of MLP", IEEE Transactions on Neural Networks, Vol. 16, No. 1, 2005, pp. 84-96.
[17] E. J. Teoh, K. C. Tan, and C. Xiang, "Estimating the number of hidden neurons in a feedforward network using the singular value decomposition", IEEE Transactions on Neural Networks, Vol. 17, No. 6, 2006, pp. 1623-1629.
[18] J. Heaton, Introduction to Neural Networks with Java, Chesterfield: Heaton Research Inc., 2005, pp. 125-154.
[19] R. Sikora, T. Chady, P. Baniukiewicz, M. Caryk, and B. Piekarczyk, "The Choice of Optimal Structure of Artificial Neural Network Classifier Intended for Classification of Welding Flaws", AIP Conference Proceedings, February 22, 2010, pp. 631-638.
[20] J. Dobes, L. Posisil, and V. Panko, "Selecting an optimal structure of artificial neural networks for characterizing RF ...", ... Symposium on Circuits and Systems, 1-4 Aug. 2010, pp. 1206-1209.
[21] B. Mandelbrot, "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension", Science, Vol. 156, No. 3775, 1967, pp. 636-638.
[22] R. Hecht-Nielsen, "Kolmogorov's mapping neural network existence theorem", In Proceedings of the IEEE First Annual International Conference on Neural Networks, 1987.
