Artificial Neural Networks - Uni Ulm

Statistical Data Mining – Artificial Neural Networks
Professor Dr. Gholamreza Nakhaeizadeh

Overview: rise and fall of neural networks, input function of a neuron, activation function of a neuron, output, feed-forward networks, feedback networks, learning process, coding and decoding methods, Perceptron, Backpropagation, ANN weaknesses.

Literature used (1): Principles of Data Mining (David J. Hand, Heikki Mannila, Padhraic Smyth); Pang-Ning Tan, Michael Steinbach, Vipin Kumar; Jiawei Han and Micheline Kamber.

Literature used (2) / pdf:
http://en.wikipedia.org/wiki/Feedforward_neural_network
http://www.doc.ic.ac.uk/ nd/surprise
-code.com/vb/scripts/ShowCode.asp?lngWId 5&txtCodeId 378
http://download-uk.oracle.com/docs/html/B13915 02/i olap docs/html/B13915 02/i rel home.htm
http://www.doc.gold.ac.uk/ u/ steve/pdcn.pdf
www.kdnuggets.com
The Data Warehouse Toolkit by Ralph Kimball (John Wiley and Sons, 1996)
Building the Data Warehouse by William Inmon (John Wiley and Sons, 1996)
Studenmund, A. H. (2006). Using Econometrics, A Practical Guide. Pearson International Edition

Data Mining Algorithms
Machine Learning: rule-based induction, decision trees, neural networks, conceptual clustering, ...
Statistics: discriminant analysis, cluster analysis, regression analysis, logistic regression, ...
Database Technology: association rules, ...

Artificial Neural Networks are modelled on biological neural networks: they process and exchange information, they have a learning capability, and the network changes (its weights) during the learning phase. The basic processing unit is the neuron.

Biological neuron (diagram by Hans & Cassidy, courtesy of Gale).

Biological neuron (figure).

Rise and fall of Neural Networks: it was shown that single-layer perceptrons do not have the ability to perform certain classification tasks, and many researchers lost their interest in neural networks. Backpropagation was rediscovered in the 1980s by David Parker.

Input function of a neuron: the inputs O1, O2, O3, ... reach neuron j over connections with weights w1j, w2j, w3j, ... Each input value is multiplied by its weight before entering the neuron; these weighted inputs are then added together and generate the input function

Σ (i = 1..m) Oi wij

Oi: input to neuron j coming from neuron i of the previous layer.

Activation function of a neuron: the activation can be binary (active, not active) or continuous, with activation states ranging between a lower and an upper bound.

Example of a usual activation function, the sigmoid (logistic) function:

Oj = 1 / (1 + e^(-a Zj)),  with the net input  Zj = Σ (i = 1..m) Oi wij + Ɵj,  where Ɵj is the bias.

Oi is the output of unit i of the previous layer, Oj is the output of unit j, and a is a selected constant.

Neuron j with inputs O1, O2, ..., Om, weights w1j, w2j, ..., wmj and bias Ɵj produces the output

Oj = 1 / (1 + e^(-a Zj)),  with  Zj = Σ (i = 1..m) Oi wij + Ɵj.
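A minimal sketch of this computation (illustrative function name; the slope parameter a and the logistic activation are those defined above):

import math

def neuron_output(inputs, weights, bias, a=1.0):
    # net input Zj = sum_i Oi * wij + theta_j
    z = sum(o * w for o, w in zip(inputs, weights)) + bias
    # logistic activation Oj = 1 / (1 + exp(-a * Zj))
    return 1.0 / (1.0 + math.exp(-a * z))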

Why layered ANNs: single neurons with the input and activation functions above are combined into networks of layers.

Feed-forward networks (FFN): feed-forward networks (perceptrons) are the most widely used ANN models. Signals move in only one direction; each unit receives its inputs only from the previous levels, and there are no loops in the network.

Feedback networks (FBN): signals can also travel backwards, so loops are allowed; feedback networks are more complicated than FFN.

Learning Process – Supervised Learning. Example data (credit rating):

No.  Income 2000  Car  Gender  Credit Rating
1    no           yes  F       bad
2    no state     no   F       bad
3    no state     yes  M       good
4    no           yes  M       bad
5    yes          yes  M       good
6    yes          yes  F       good
7    no state     yes  F       good
8    yes          no   F       good
9    no state     no   M       bad
10   no           no   F       bad

Supervised learning: for each training example the input is presented to the network, the activation and the output are computed, and the computed output is compared with the desired (correct) output.

Supervised learning requires a training set and a test set (Ripley (1996)); the accuracy of the trained network is measured on the test set.

Learning Rule: how should we calculate the weight change for the weights w1, w2, w3, w4, w5?

new weight = old weight + weight change
w(t+1) = w(t) + Δw(t)

Learning Rule: for each training tuple, the weights of the whole network are modified in a manner that minimizes the mean squared error (the difference between the predicted and the observed target value):

Min: E = ½ Σ (i = 1..n) (yi – ŷi)²

Using gradient descent, the rule is applied to the weights of the output and hidden neurons:

Wj ← Wj – ß ∂E(W) / ∂Wj

ß: learning rate, between 0 and 1.
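A minimal sketch of one gradient-descent step for a single weight vector (illustrative names; the linear prediction ŷ = Σj wj xj is only an assumption used to make the gradient concrete):

def gradient_descent_step(w, x, y, beta=0.1):
    # predicted output of a linear unit: y_hat = sum_j w_j * x_j
    y_hat = sum(wj * xj for wj, xj in zip(w, x))
    # for E = 0.5 * (y - y_hat)^2 the gradient is dE/dw_j = -(y - y_hat) * x_j
    # gradient descent: W_j <- W_j - beta * dE/dW_j
    return [wj + beta * (y - y_hat) * xj for wj, xj in zip(w, x)]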

Disadvantages of Perceptron learning: a small gradient produces only a small weight change (the learning is slow); a big gradient produces a big weight change (the weights cavort). Source: http://www.teco.edu/

Learning methods can be categorized by how and when the weights are updated.

Stopping the training: the training is continued until the error is sufficiently small; it is sometimes necessary to stop the training earlier, using a criterion that decides when the ANN should stop training.

Coding: the values of the input and output neurons lie in fixed intervals, for example [0, 1] or [-1, 1].

Coding: transforming real-valued or categorical data into input values for the network. Decoding: vice versa, transforming the network outputs back to the original scale.

Coding with a linear transformation function: the original values 10, 25, 30, 50 are mapped to the interval [0, 1].

Xn(old): original value (n = 1, 2, ..., N); Xn(new): new, transformed value
Xmax(old): maximal original value; Xmin(old): minimal original value

Xn(new) = A · Xn(old) + B
with A = (Xmax(new) – Xmin(new)) / (Xmax(old) – Xmin(old)) and B = Xmin(new) – A · Xmin(old)

For example, 25 is mapped to 0.375 and 30 to 0.50.
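A minimal sketch of this coding function (illustrative function name):

def code_value(x_old, old_min, old_max, new_min=0.0, new_max=1.0):
    # Xn(new) = A * Xn(old) + B
    a = (new_max - new_min) / (old_max - old_min)
    b = new_min - a * old_min
    return a * x_old + b

# e.g. code_value(25, 10, 50) -> 0.375 and code_value(30, 10, 50) -> 0.5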

The coded values are then used as the values of the input neurons.

Network type: Perceptron. A simple ANN architecture that consists only of input and output nodes (no hidden layer): the input layer holds x1, x2, x3 with weights w1, w2, w3, and y is the output.

Computing the output y in the Perceptron:

ŷ = 1   if w1 x1 + w2 x2 + w3 x3 – t > 0
ŷ = –1  if w1 x1 + w2 x2 + w3 x3 – t < 0

Generally: ŷ = sign(w1 x1 + w2 x2 + ... + wj xj + ... + wm xm – t)

t: bias factor.
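A minimal sketch of this output computation (illustrative function name; mapping the tie s = 0 to –1 is an assumption the slides do not specify):

def perceptron_output(x, w, t):
    # y_hat = sign(sum_j w_j * x_j - t), returning +1 or -1
    s = sum(wj * xj for wj, xj in zip(w, x)) - t
    return 1 if s > 0 else -1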

Perceptron learning: the training data consist of the tuples (xi1, xi2, ..., xim) with observed outputs yi (i = 1, ..., n). The weights are updated iteratively:

wj(k+1) = wj(k) + ß (yi – ŷi(k)) xij

wj(k+1): new weight of neuron j in iteration k+1
wj(k): old weight of neuron j in iteration k
ß: learning rate
yi: observed output of the tuple i
ŷi(k): calculated output of the tuple i in iteration k
xij: value of attribute j of the tuple i

The above learning rule is based on the gradient descent method, minimizing Min: E = ½ Σ (i = 1..n) (yi – ŷi)².

Derivation of the Perceptron learning rule:

Min: E = ½ Σ (i = 1..n) (yi – ŷi)²; for tuple i: E = ½ (yi – ŷi)²

ŷi = Σ (j = 1..m) wij xij, so ∂ŷi / ∂wij = xij

∂E / ∂wj = – (yi – ŷi) ∂ŷi / ∂wj = – (yi – ŷi) xij   (1)

Wj ← Wj – ß ∂E(W) / ∂Wj   (2)

(1) and (2) lead to: wj(k+1) = wj(k) + ß (yi – ŷi(k)) xij

The relation w1 x1 + w2 x2 + ... + wj xj + ... + wm xm is linear in w and x. For linearly separable classification problems, the learning algorithm

wj(k+1) = wj(k) + ß (yi – ŷi(k)) xij

converges if the learning rate ß is sufficiently small. If the classes are not linearly separable, the algorithm does not converge and an alternative such as Backpropagation is needed.
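A minimal training-loop sketch of this learning rule (illustrative names; keeping the threshold t fixed is an assumption):

def train_perceptron(X, y, beta=0.1, epochs=20, t=0.0):
    # X: list of tuples (xi1, ..., xim); y: list of observed outputs (+1 / -1)
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi)) - t
            y_hat = 1 if s > 0 else -1
            # wj(k+1) = wj(k) + beta * (yi - y_hat) * xij
            w = [wj + beta * (yi - y_hat) * xj for wj, xj in zip(w, xi)]
    return w

# e.g. the linearly separable OR problem:
# train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [-1, 1, 1, 1])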

Backpropagation (backprop, BP) is widely used, especially in practice. The simple Perceptron cannot handle classes that are not linearly separable, but BP can. In a multi-layer network (inputs x1, x2, x3 with weights w1, w2, w3, hidden layer, output), the errors between the calculated and the observed outputs are propagated backwards through the network, hence the name Backpropagation.

Backpropagation algorithm, learning steps: initialize the weights with small random values, e.g. in the interval [-1, 1] or [-0.5, 0.5].

Net input of the neuron j:  Zj = Σ (i = 1..p) Xij wij + Ɵj   (1)

Calculation of the output function: using a logistic function we get

Oj = 1 / (1 + e^(-a Zj))   (2)

This function is differentiable and nonlinear. Repeat (1) and (2) until the output layer is reached and included.
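A minimal forward-pass sketch for steps (1) and (2) (illustrative structure; each layer is assumed to be a list of (weights, bias) pairs, one pair per neuron):

import math

def forward(layers, x, a=1.0):
    out = list(x)
    for layer in layers:
        # (1) Zj = sum_i Xi * wij + theta_j, (2) Oj = 1 / (1 + exp(-a * Zj))
        out = [1.0 / (1.0 + math.exp(-a * (sum(o * w for o, w in zip(out, ws)) + theta)))
               for ws, theta in layer]
    return out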

Backpropagate the error. δj denotes the error of the unit j, δj = – ∂E / ∂Zj, so that

∂E / ∂wij = ∂E / ∂Zj · ∂Zj / ∂wij = – δj Xij

Case A: neuron j belongs to the output layer. The contribution of j to E is E = ½ (Tj – Oj)², where Tj is the target value of the neuron j, so

δj = – ∂E / ∂Zj = (Tj – Oj) ∂Oj / ∂Zj

and for a = 1:

δj = (Tj – Oj) (1 – Oj) Oj

Therefore, regarding the gradient descent learning rule: ΔWij = ß δj Xij

Case B: neuron j belongs to a hidden layer. The output Oj of the hidden neuron j is connected by the weights Wj1, Wj2, Wj3 to the neurons N1, N2, N3 (with errors δ1, δ2, δ3) in the next layer:

Ej = δj = Oj (1 – Oj) Σk δk Wjk

Oj: output of the neuron j
δk: error of the neuron k in the next layer
Wjk: weight of the connection from the neuron j to a neuron k in the next layer

Gradient descent learning rule: ΔWij = ß δj Xij
Bias updating for neuron j: Ɵj ← Ɵj + ß Ej
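A minimal single-example sketch of these formulas for one hidden layer (illustrative data structures, a = 1; an interpretation of the slides, not the lecture's own code):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(x, target, W_h, b_h, W_o, b_o, beta=0.5):
    # W_h[j]/b_h[j]: weights and bias of hidden neuron j; W_o[k]/b_o[k]: of output neuron k
    O_h = [sigmoid(sum(wi * xi for wi, xi in zip(wj, x)) + tj) for wj, tj in zip(W_h, b_h)]
    O_o = [sigmoid(sum(w * oj for w, oj in zip(wk, O_h)) + tk) for wk, tk in zip(W_o, b_o)]
    # output layer: delta_k = (T_k - O_k) * O_k * (1 - O_k)
    d_o = [(t - o) * o * (1 - o) for t, o in zip(target, O_o)]
    # hidden layer: delta_j = O_j * (1 - O_j) * sum_k delta_k * W_jk
    d_h = [oj * (1 - oj) * sum(dk * W_o[k][j] for k, dk in enumerate(d_o))
           for j, oj in enumerate(O_h)]
    # weight update dW_ij = beta * delta_j * X_ij and bias update theta_j += beta * delta_j
    W_h = [[w + beta * dj * xi for w, xi in zip(wj, x)] for wj, dj in zip(W_h, d_h)]
    b_h = [tj + beta * dj for tj, dj in zip(b_h, d_h)]
    W_o = [[w + beta * dk * oj for w, oj in zip(wk, O_h)] for wk, dk in zip(W_o, d_o)]
    b_o = [tk + beta * dk for tk, dk in zip(b_o, d_o)]
    return W_h, b_h, W_o, b_o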

Example (source: Han et al. (2006)): a small network with the input units 1, 2, 3, the hidden units 4, 5 and the output unit 6. For the input tuple X = (1, 0, 1) and the given weights and biases (e.g. Ɵ5 = 0.2, Ɵ6 = 0.1), the forward pass gives:

Unit j   Net input Zj                                      Output Oj
4        0.2 + 0 – 0.5 – 0.4 = –0.7                        1 / (1 + e^0.7) = 0.332
5        –0.3 + 0 + 0.2 + 0.2 = 0.1                        1 / (1 + e^-0.1) = 0.525
6        (–0.3)(0.332) + (–0.2)(0.525) + 0.1 = –0.105      1 / (1 + e^0.105) = 0.474
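These numbers can be checked with a few lines (the weights that multiply x2 = 0 do not affect the result; their values 0.4 and 0.1 are taken from Han et al. (2006) and are an assumption here):

import math
sig = lambda z: 1.0 / (1.0 + math.exp(-z))
x1, x2, x3 = 1, 0, 1
Z4 = 0.2 * x1 + 0.4 * x2 - 0.5 * x3 - 0.4   # = -0.7
Z5 = -0.3 * x1 + 0.1 * x2 + 0.2 * x3 + 0.2  # =  0.1
O4, O5 = sig(Z4), sig(Z5)                   # 0.332, 0.525
Z6 = -0.3 * O4 - 0.2 * O5 + 0.1             # = -0.105
O6 = sig(Z6)                                # 0.474
print(round(O4, 3), round(O5, 3), round(O6, 3))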

Rise and fall of Neural Networks: in the 70's and 80's, it was shown that multilevel perceptrons don't have these shortcomings. Paul J. Werbos invented the back-propagation in 1974, which has the ability to perform classification tasks beyond simple Perceptrons.
