Neural Network Toolbox 7


Neural Network Toolbox 7
User's Guide

Mark Hudson Beale
Martin T. Hagan
Howard B. Demuth

How to Contact MathWorks

Web: www.mathworks.com/contact_TS.html (Technical Support)
E-mail: service@mathworks.com, info@mathworks.com
  Product enhancement suggestions
  Bug reports
  Documentation error reports
  Order status, license renewals, passcodes
  Sales, pricing, and general information
Phone: 508-647-7000
Fax: 508-647-7001

The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098

For contact information about worldwide offices, see the MathWorks Web site.

Neural Network Toolbox User's Guide
COPYRIGHT 1992-2010 by The MathWorks, Inc.

The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.

FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement, and only those rights specified in this Agreement, shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.

Trademarks
MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be trademarks or registered trademarks of their respective holders.

Patents
MathWorks products are protected by one or more U.S. patents. Please see www.mathworks.com/patents for more information.

Revision History

June 1992        First printing
April 1993       Second printing
January 1997     Third printing
July 1997        Fourth printing
January 1998     Fifth printing     Revised for Version 3 (Release 11)
September 2000   Sixth printing     Revised for Version 4 (Release 12)
June 2001        Seventh printing   Minor revisions (Release 12.1)
July 2002        Online only        Minor revisions (Release 13)
January 2003     Online only        Minor revisions (Release 13SP1)
June 2004        Online only        Revised for Version 4.0.3 (Release 14)
October 2004     Online only        Revised for Version 4.0.4 (Release 14SP1)
October 2004     Eighth printing    Revised for Version 4.0.4
March 2005       Online only        Revised for Version 4.0.5 (Release 14SP2)
March 2006       Online only        Revised for Version 5.0 (Release 2006a)
September 2006   Ninth printing     Minor revisions (Release 2006b)
March 2007       Online only        Minor revisions (Release 2007a)
September 2007   Online only        Revised for Version 5.1 (Release 2007b)
March 2008       Online only        Revised for Version 6.0 (Release 2008a)
October 2008     Online only        Revised for Version 6.0.1 (Release 2008b)
March 2009       Online only        Revised for Version 6.0.2 (Release 2009a)
September 2009   Online only        Revised for Version 6.0.3 (Release 2009b)
March 2010       Online only        Revised for Version 6.0.3 (Release 2010a)
September 2010   Online only        Revised for Version 7.0 (Release 2010b)

Acknowledgments

The authors would like to thank the following people:

Joe Hicklin of MathWorks for getting Howard into neural network research years ago at the University of Idaho, for encouraging Howard and Mark to write the toolbox, for providing crucial help in getting the first toolbox Version 1.0 out the door, for continuing to help with the toolbox in many ways, and for being such a good friend.

Roy Lurie of MathWorks for his continued enthusiasm for the possibilities for Neural Network Toolbox software.

Mary Ann Freeman of MathWorks for general support and for her leadership of a great team of people we enjoy working with.

Rakesh Kumar of MathWorks for cheerfully providing technical and practical help, encouragement, ideas, and always going the extra mile for us.

Alan LaFleur of MathWorks for facilitating our documentation work.

Stephen Vanreusel of MathWorks for help with testing.

Dan Doherty of MathWorks for marketing support and ideas.

Orlando De Jesús of Oklahoma State University for his excellent work in developing and programming the dynamic training algorithms described in Chapter 4, "Dynamic Networks," and in programming the neural network controllers described in Chapter 5, "Control Systems."

Permissions

Martin T. Hagan, Howard B. Demuth, and Mark Hudson Beale for permission to include various problems, demonstrations, and other material from Neural Network Design, January, 1996.

Neural Network Toolbox Design Book

The developers of the Neural Network Toolbox software have written a textbook, Neural Network Design (Hagan, Demuth, and Beale, ISBN 0-9717321-0-8). The book presents the theory of neural networks, discusses their design and application, and makes considerable use of the MATLAB environment and Neural Network Toolbox software. Demonstration programs from the book are used in various chapters of this user's guide. (You can find all the book demonstration programs in the Neural Network Toolbox software by typing nnd.)

This book can be obtained from John Stovall at (303) 492-3648, or by e-mail at John.Stovall@colorado.edu.

The Neural Network Design textbook includes:

- An Instructor's Manual for those who adopt the book for a class
- Transparency Masters for class use

If you are teaching a class and want an Instructor's Manual (with solutions to the book exercises), contact John Stovall at (303) 492-3648, or by e-mail at John.Stovall@colorado.edu.

To look at sample chapters of the book and to obtain Transparency Masters, go directly to the Neural Network Design page at http://hagan.okstate.edu/nnd.html. From this link, you can obtain sample book chapters in PDF format and download the Transparency Masters by clicking Transparency Masters (3.6 MB). The Transparency Masters are available in PowerPoint or PDF format.
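As a quick sketch of how the book demonstrations are reached from MATLAB (the nnd command is documented above; the individual demo name below is an assumption based on the book's chapter demos and may differ in your installation):

```matlab
% Open the Neural Network Design demonstration menu that ships
% with Neural Network Toolbox; from the menu you can browse and
% launch the demonstrations chapter by chapter.
nnd

% Individual demonstrations can also be launched by name once you
% know them from the menu, e.g. (hypothetical example name):
% nnd2n1    % one-input neuron demonstration from Chapter 2
```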

Contents

Chapter 1: Getting Started
  Product Overview
  Using the Toolbox and Its Documentation
  Automatic Script Generation
  Neural Network Toolbox Applications
  Neural Network Design Steps
  Fitting a Function
    Defining a Problem
    Using the Neural Network Fitting Tool
    Using Command-Line Functions
  Recognizing Patterns
    Defining a Problem
    Using the Neural Network Pattern Recognition Tool
    Using Command-Line Functions
  Clustering Data
    Defining a Problem
    Using the Neural Network Clustering Tool
    Using Command-Line Functions
  Time Series Prediction
    Defining a Problem
    Using the Neural Network Time Series Tool
    Using Command-Line Functions
  Sample Data Sets

Chapter 2: Network Objects, Data and Training Styles
  Introduction
  Neuron Model
    Simple Neuron
    Transfer Functions
    Neuron with Vector Input
  Network Architectures
    A Layer of Neurons
    Multiple Layers of Neurons
    Input and Output Processing Functions
  Introduction to the Network Object
  Configuration
  Data Structures
    Simulation with Concurrent Inputs in a Static Network
    Simulation with Sequential Inputs in a Dynamic Network
    Simulation with Concurrent Inputs in a Dynamic Network
  Training Styles
    Incremental Training with adapt
    Batch Training
    Training Feedback

Chapter 3: Multilayer Networks and Backpropagation Training
  Introduction
  Multilayer Neural Network Architecture
    Feedforward Network
  Collect and Prepare the Data
    Preprocessing and Postprocessing
    Dividing the Data
  Create, Configure and Initialize the Network
    Other Related Architectures
    Initializing Weights (init)
  Train the Network
    Training Algorithms
    Efficiency and Memory Reduction
    Generalization
    Training Example
  Post-Training Analysis (Network Validation)
  Improving Results
  Use the Network
  Automatic Code Generation
  Limitations and Cautions

Chapter 4: Dynamic Networks
  Introduction
    Examples of Dynamic Networks
    Applications of Dynamic Networks
    Dynamic Network Structures
    Dynamic Network Training
  Focused Time-Delay Neural Network (timedelaynet)
  Preparing Data (preparets)
  Distributed Time-Delay Neural Network (newdtdnn)
  NARX Network (narxnet, closeloop)
  Layer-Recurrent Network (layerrecurrentnet)
  Training Custom Networks
  Multiple Sequences, Time Series Utilities and Error Weighting
    Multiple Sequences
    Time Series Utilities
    Error Weighting

Chapter 5: Control Systems
  Introduction
  NN Predictive Control
    System Identification
    Predictive Control
    Using the NN Predictive Controller Block
  NARMA-L2 (Feedback Linearization) Control
    Identification of the NARMA-L2 Model
    NARMA-L2 Controller
    Using the NARMA-L2 Controller Block
  Model Reference Control
    Using the Model Reference Controller Block
  Importing and Exporting
    Importing and Exporting Networks
    Importing and Exporting Training Data

Chapter 6: Radial Basis Networks
  Introduction
    Important Radial Basis Functions
  Radial Basis Functions
    Neuron Model
    Network Architecture
    Exact Design (newrbe)
    More Efficient Design (newrb)
    Demonstrations
  Probabilistic Neural Networks
    Network Architecture
    Design (newpnn)
  Generalized Regression Networks
    Network Architecture
    Design (newgrnn)

Chapter 7: Self-Organizing and Learning Vector Quantization Nets
  Introduction
    Important Self-Organizing and LVQ Functions
  Competitive Learning
    Architecture
    Creating a Competitive Neural Network (newc)
    Kohonen Learning Rule (learnk)
    Bias Learning Rule (learncon)
    Training
    Graphical Example
  Self-Organizing Feature Maps
    Topologies (gridtop, hextop, randtop)
    Distance Functions (dist, linkdist, mandist, boxdist)
    Architecture
    Creating a Self-Organizing Map Neural Network (newsom)
    Training (learnsomb)
    Examples
  Learning Vector Quantization Networks
    Architecture
    Creating an LVQ Network (newlvq)
    LVQ1 Learning Rule (learnlv1)
    Training
    Supplemental LVQ2.1 Learning Rule (learnlv2)

Chapter 8: Adaptive Filters and Adaptive Training
  Introduction
    Important Adaptive Functions
  Linear Neuron Model
  Adaptive Linear Network Architecture
    Single ADALINE (newlin)
  Least Mean Square Error
  LMS Algorithm (learnwh)
  Adaptive Filtering (adapt)
    Tapped Delay Line
    Adaptive Filter
    Adaptive Filter Example
    Prediction Example
    Noise Cancellation Example
    Multiple Neuron Adaptive Filters

Chapter 9: Advanced Topics
  Custom Networks
    Custom Network
    Network Definition
    Network Behavior
  Additional Toolbox Functions
  Speed and Memory Comparison for Training Multilayer Networks
    Summary
  Improving Generalization
    Early Stopping
    Index Data Division (divideind)
    Random Data Division (dividerand)
    Block Data Division (divideblock)
    Interleaved Data Division (divideint)
    Regularization
    Summary and Discussion of Early Stopping and Regularization
