Artificial Neural Networks Lecture Notes


Artificial Neural Networks
Lecture Notes
Stephen Lucci, PhD
Part 11

About this file:

- This is the printer-friendly version of the file "lecture11.htm". In case the page is not properly displayed, use IE 5 or higher.
- Since this is an offline page, each file "lecturenn.htm" (for instance "lecture11.htm") is accompanied by an "imgnn" folder (for instance "img11") containing the images which form part of the notes. So, to see the images, each HTML file must be kept in the same directory (folder) as its corresponding "imgnn" folder.
- If you have trouble reading the contents of this file, or in case of transcription errors, ... [contact information cut off in transcription].
- Credits: The background image is from tomy/tutorial1/tutorial1.html (edited) at the University of Sydney Neuroanatomy web page. Mathematics symbol images are from metamath.org's "GIF images for Math Symbols" web page. Some images are scans from R. Rojas, Neural Networks (Springer-Verlag, 1996), as well as from other books to be credited in a future revision of this file. Some image credits may be given where noted; the remainder are native to this file.

Contents

- Associative Memory Networks
  - A Taxonomy of Associative Memories
  - An Example of Associative Recall
  - Hebbian Learning
  - Hebb Rule for Pattern Association
  - Character Recognition Example
  - Autoassociative Nets
  - Application and Examples
  - Storage Capacity
- Genetic Algorithms
  - GA's vs. Other Stochastic Methods
  - The Metropolis Algorithm
  - Bit-Based Descent Methods
  - Genetic Algorithms
  - Neural Nets and GA

Associative Memory Networks

- Remembering something: associating an idea or thought with a sensory cue.
- "Human memory connects items (ideas, sensations, etc.) that are similar, that are contrary, that occur in close proximity, or that occur in close succession." - Aristotle
- An input stimulus which is similar to the stimulus for the association will invoke the associated response pattern. For example:
  - A woman's perfume on an elevator.
  - A song on the radio.
  - An old photograph.
- An associative memory net may serve as a highly simplified model of human memory.
- These associative memory units should not be confused with content-addressable memory units.

A Taxonomy of Associative Memories

(Note: the superscripts of x and y are all i; that is, each input x^(i) is paired with an output y^(i).)

- Heteroassociative network
  Maps input vectors x^(1), x^(2), ..., x^(n), in n-dimensional space, to output vectors y^(1), y^(2), ..., y^(m), in m-dimensional space.

  If x^(i) ≠ y^(i), the association is heteroassociative.

- Autoassociative network
  A type of heteroassociative network in which each vector is associated with itself; i.e., y^(i) = x^(i) for i = 1, ..., n. Features correction of noisy input vectors.

- Pattern recognition network
  A type of heteroassociative network in which each vector x^(i) is associated with a scalar y^(i). [illegible - remainder cut off in photocopy]

An Example of Associative Recall

To the left is a binarized version of the letter "T". The middle picture is the same "T", but with the bottom half replaced by noise: pixels have been assigned the value 1 with probability 0.5.

  Upper half: the cue.
  Bottom half: has to be recalled from memory.

The pattern on the right is obtained from the original "T" by adding 20% noise: each pixel is inverted with probability 0.2. Here the whole memory is available, but in an imperfectly recalled form (a "hazy" or inaccurate memory of some scene).

(Compare/contrast the following with database searches.)

In each case, when part of the pattern of data is presented in the form of a sensory cue, the rest of the pattern (memory) is associated with it. Alternatively, we may be offered an imperfect version of the ... [illegible - remainder cut off in photocopy]
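The two kinds of corrupted cue just described are easy to generate. A minimal sketch (mine, not from the notes), assuming patterns are stored as bipolar NumPy arrays of shape (rows, cols):

```python
import numpy as np

rng = np.random.default_rng(0)

def half_cue(pattern):
    """Replace the bottom half of a bipolar image with random pixels,
    each set to +1 with probability 0.5 (the 'cue' case above)."""
    noisy = pattern.copy()
    half = pattern.shape[0] // 2
    noisy[half:] = rng.choice([-1, 1], size=noisy[half:].shape)
    return noisy

def invert_noise(pattern, p=0.2):
    """Invert each pixel independently with probability p
    (the '20% noise' case above)."""
    flips = rng.random(pattern.shape) < p
    return np.where(flips, -pattern, pattern)
```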

Artificial Neural Networks Part 11Stephen Lucci, PhDInput x iOutput yjweight updatewij xi yjHebb Rule for Pattern AssociationIt can be used with patterns that are represented as either binary or bipolar vectors.llTraining Vector Pairs :Testing Input Vector (which may or may not be the same as one of thetraining input vectors.)In this simple form of Hebbian Learning, one generally employs outer productcalculations instead.Page 4 of 19

Architecture of a Heteroassociative Neural Net

A simple example (from Fausett's text):

  Heteroassociative network.
  Input vectors: 4 components.
  Output vectors: 2 components.

The input vectors are not mutually orthogonal (i.e., their dot products are ≠ 0), in which case the response will include a portion of each of the target values - this is cross-talk.

Note: the target values are chosen to be related to the input vectors in a simple manner. The cross-talk between the first and second input vectors does not pose any difficulties (since these target values ... [illegible - cut off in photocopy]

The training is accomplished by the Hebb rule:

  w_ij(new) = w_ij(old) + s_i · t_j,   i.e., Δw_ij = s_i · t_j  (learning rate = 1).
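In matrix form, this training is just a sum of outer products. A sketch under assumed data - the actual training vectors appear only in figures in the original, so the bipolar pairs below are illustrative placeholders with the stated shapes (4-component inputs, 2-component targets):

```python
import numpy as np

# Placeholder bipolar training pairs (not the ones from Fausett's text).
S = np.array([[ 1, -1, -1, -1],
              [ 1,  1, -1, -1],
              [-1, -1, -1,  1],
              [-1, -1,  1,  1]])
T = np.array([[ 1, -1],
              [ 1, -1],
              [-1,  1],
              [-1,  1]])

# Hebb rule: W = sum over pairs of outer(s, t), i.e. S^T T.
W = S.T @ T
print(W)
```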

Testing (cont'd): [the worked testing examples appear only as figures in the original notes]

We can employ vector-matrix notation to illustrate the testing process: the response to an input vector x is y = f(x·W), with the activation function f applied componentwise.
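A minimal sketch of this testing step (mine; it assumes bipolar vectors and a step activation that returns 0 on a tie, which is what produces the ambiguous (0, 0) response discussed below):

```python
import numpy as np

def recall(x, W):
    """Compute the response y = f(x W) with the bipolar activation
    f(v) = 1 if v > 0, -1 if v < 0, and 0 if v == 0 (ambiguous)."""
    y_in = np.asarray(x) @ W
    return np.sign(y_in).astype(int)
```

With S, T, and W from the training sketch above, recall(S[0], W) returns array([ 1, -1]), matching T[0].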

- A bipolar representation would be preferable: it is more robust in the presence of noise.

- The weight matrix obtained from the previous examples would be:

  [weight matrix shown as a figure in the original notes]

  Given an input vector with two "mistakes", trouble remains; e.g.,

    (-1, 1, 1, -1) · W = (0, 0) → (0, 0).

  However, the net can respond correctly when given an input vector with two components missing rather than wrong; e.g., x = (0, 1, 0, -1), formed from s = (1, 1, -1, -1) with the first and third components missing:

    (0, 1, 0, -1) · W = (6, -6) → (1, -1), which is the ... [illegible - cut off in photocopy]

Character Recognition Example

(Example 3.9) A heteroassociative net for associating letters from different fonts.

A heteroassociative neural net was trained using the Hebb rule (outer products) to associate three vector pairs. The x vectors have 63 components, the y vectors 15. The vectors represent patterns: a pattern is converted to a vector representation suitable for processing as follows. The #'s are replaced by 1's and the dots by -1's, reading across each row (starting with the top row). The pattern shown becomes the vector

  (-1, 1, -1,   1, -1, 1,   1, 1, 1,   1, -1, 1,   1, -1, 1).

The extra spaces between the vector components, which separate the different rows of the original pattern for ease of reading, are not necessary for the network.

The figure below shows the vector pairs in their original two-dimensional form. (A code sketch of the conversion follows.)
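The row-major conversion just described is mechanical. A small sketch (mine, not from the notes); the letter rows below are reconstructed from the 15-component vector quoted above, so treat them as an assumption:

```python
def pattern_to_vector(rows):
    """Convert a '#'/'.' pattern to a bipolar vector, reading across
    each row starting with the top row: '#' -> 1, '.' -> -1."""
    return [1 if ch == '#' else -1 for row in rows for ch in row]

# Rows inferred from the vector quoted above (a 3-wide letter "A"):
letter = [".#.",
          "#.#",
          "###",
          "#.#",
          "#.#"]
print(pattern_to_vector(letter))
# -> [-1, 1, -1, 1, -1, 1, 1, 1, 1, 1, -1, 1, 1, -1, 1]
```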

After training, the net was used with input patterns that were noisy versions of the training input patterns. The results are shown in Figures 3.4 and 3.5 (below). The noise took the form of turning pixels "on" that should have been "off", and vice versa. These are denoted as follows:

  @  Pixel is now "on", but this is a mistake (noise).
  O  Pixel is now "off", but this is a mistake (noise).

Figure 3.5 (below) shows that the neural net can recognize the small letters that are stored in it, even when given input patterns representing the large training patterns with 30% noise.

Autoassociative Nets

- For an autoassociative net, the training input and target output vectors are identical.
- The process of training is often called storing the vectors, which may be binary or bipolar.
- A stored vector can be retrieved from distorted or partial (noisy) input if the input is sufficiently close to it.
- The performance of the net is judged by its ability to reproduce a stored pattern from noisy input; performance is generally better for bipolar vectors than for binary vectors.

Architecture of an Autoassociative Neural Net

It is common for the weights on the diagonal (those which connect an input pattern component to the corresponding component in the output pattern) to be set to zero.
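A minimal sketch of autoassociative storage with a zeroed diagonal (mine, not from the notes), including recall from an input with one component missing:

```python
import numpy as np

def store(patterns):
    """Autoassociative Hebb storage: W = sum of outer products p p^T,
    with the diagonal zeroed as described above."""
    P = np.asarray(patterns)
    W = P.T @ P
    np.fill_diagonal(W, 0)
    return W

s = np.array([1, 1, -1, -1])
W = store([s])

noisy = np.array([0, 1, -1, -1])        # first component missing
print(np.where(noisy @ W >= 0, 1, -1))  # -> [ 1  1 -1 -1], recovering s
```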

Application and Examples of Autoassociative Nets

[The worked examples for this section appear only as figures in the original notes.]


Storage Capacity

[The storage capacity discussion appears only as figures in the original notes.]
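Since the figures are lost, here is a rough stand-in experiment (mine, not from the notes) that probes capacity empirically: it keeps storing random bipolar vectors in an n-unit autoassociative net (Hebb rule, zero diagonal) until one-step recall of a stored vector first fails.

```python
import numpy as np

rng = np.random.default_rng(1)

def capacity(n):
    """Store random bipolar vectors until one-step recall of some
    already-stored vector fails; return how many were held reliably."""
    stored, W = [], np.zeros((n, n))
    while True:
        v = rng.choice([-1, 1], size=n)
        W += np.outer(v, v)
        np.fill_diagonal(W, 0)
        stored.append(v)
        if any(not np.array_equal(np.where(s @ W >= 0, 1, -1), s)
               for s in stored):
            return len(stored) - 1

print(capacity(100))  # typically a small fraction of n
```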


