
Neural Network Toolbox
User's Guide

R2012b

Mark Hudson Beale
Martin T. Hagan
Howard B. Demuth

How to Contact MathWorks

Web                  www.mathworks.com
Newsgroup            comp.soft-sys.matlab
Technical Support    www.mathworks.com/contact_TS.html

suggest@mathworks.com     Product enhancement suggestions
bugs@mathworks.com        Bug reports
doc@mathworks.com         Documentation error reports
service@mathworks.com     Order status, license renewals, passcodes
info@mathworks.com        Sales, pricing, and general information

508-647-7000 (Phone)
508-647-7001 (Fax)

The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098

For contact information about worldwide offices, see the MathWorks Web site.

Neural Network Toolbox User's Guide
COPYRIGHT 1992–2012 by The MathWorks, Inc.

The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.

FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.

Trademarks
MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be trademarks or registered trademarks of their respective holders.

Patents
MathWorks products are protected by one or more U.S. patents. Please see www.mathworks.com/patents for more information.

Revision History

June 1992        First printing
April 1993       Second printing
January 1997     Third printing
July 1997        Fourth printing
January 1998     Fifth printing      Revised for Version 3 (Release 11)
September 2000   Sixth printing      Revised for Version 4 (Release 12)
June 2001        Seventh printing    Minor revisions (Release 12.1)
July 2002        Online only         Minor revisions (Release 13)
January 2003     Online only         Minor revisions (Release 13SP1)
June 2004        Online only         Revised for Version 4.0.3 (Release 14)
October 2004     Online only         Revised for Version 4.0.4 (Release 14SP1)
October 2004     Eighth printing     Revised for Version 4.0.4
March 2005       Online only         Revised for Version 4.0.5 (Release 14SP2)
March 2006       Online only         Revised for Version 5.0 (Release 2006a)
September 2006   Ninth printing      Minor revisions (Release 2006b)
March 2007       Online only         Minor revisions (Release 2007a)
September 2007   Online only         Revised for Version 5.1 (Release 2007b)
March 2008       Online only         Revised for Version 6.0 (Release 2008a)
October 2008     Online only         Revised for Version 6.0.1 (Release 2008b)
March 2009       Online only         Revised for Version 6.0.2 (Release 2009a)
September 2009   Online only         Revised for Version 6.0.3 (Release 2009b)
March 2010       Online only         Revised for Version 6.0.4 (Release 2010a)
September 2010   Online only         Revised for Version 7.0 (Release 2010b)
April 2011       Online only         Revised for Version 7.0.1 (Release 2011a)
September 2011   Online only         Revised for Version 7.0.2 (Release 2011b)
March 2012       Online only         Revised for Version 7.0.3 (Release 2012a)
September 2012   Online only         Revised for Version 8.0 (Release 2012b)

Contents

Neural Network Toolbox Design Book

1  Network Objects, Data, and Training Styles
   Introduction
   Neuron Model
      Simple Neuron
      Transfer Functions
      Neuron with Vector Input
   Network Architectures
      One Layer of Neurons
      Multiple Layers of Neurons
      Input and Output Processing Functions
   Network Object
   Configuration
   Data Structures
      Simulation with Concurrent Inputs in a Static Network
      Simulation with Sequential Inputs in a Dynamic Network
      Simulation with Concurrent Inputs in a Dynamic Network
   Training Styles (Adapt and Train)
      Incremental Training with adapt
      Batch Training
      Training Feedback

2  Multilayer Networks and Backpropagation Training
   Multilayer Networks and Backpropagation Training
   Multilayer Neural Network Architecture
      Neuron Model (logsig, tansig, purelin)
      Feedforward Network
   Collect and Prepare the Data
      Preprocessing and Postprocessing
      Dividing the Data
   Create, Configure, and Initialize the Network
      Other Related Architectures
      Initializing Weights (init)
   Train the Network
      Training Algorithms
      Efficiency and Memory Reduction
      Generalization
      Training Example
   Post-Training Analysis (Network Validation)
      Improving Results
   Use the Network
   Automatic Code Generation
   Limitations and Cautions

3  Dynamic Networks
   Introduction
      Examples of Dynamic Networks
      Applications of Dynamic Networks
      Dynamic Network Structures
      Dynamic Network Training
   Focused Time-Delay Neural Network (timedelaynet)
   Preparing Data (preparets)
   Distributed Time-Delay Neural Network (distdelaynet)
   NARX Network (narxnet, closeloop)
   Layer-Recurrent Network (layrecnet)
   Training Custom Networks
   Multiple Sequences, Time-Series Utilities, and Error Weighting
      Multiple Sequences
      Time-Series Utilities
      Error Weighting

4  Control Systems
   Introduction to System Control
   NN Predictive Control
      System Identification
      Predictive Control
      Use the NN Predictive Controller Block
   NARMA-L2 (Feedback Linearization) Control
      Identification of the NARMA-L2 Model
      NARMA-L2 Controller
      Use the NARMA-L2 Controller Block
   Model Reference Control
      Use the Model Reference Controller Block
   Import and Export
      Import and Export Networks
      Import and Export Training Data

5  Radial Basis Networks
   Introduction
      Important Radial Basis Functions
   Radial Basis Functions
      Neuron Model
      Network Architecture
      Exact Design (newrbe)
      More Efficient Design (newrb)
      Examples
   Probabilistic Neural Networks
      Network Architecture
      Design (newpnn)
   Generalized Regression Networks
      Network Architecture
      Design (newgrnn)

6  Self-Organizing and Learning Vector Quantization Nets
   Introduction
      Important Self-Organizing and LVQ Functions
   Competitive Learning
      Architecture
      Creating a Competitive Neural Network (competlayer)
      Kohonen Learning Rule (learnk)
      Bias Learning Rule (learncon)
      Training
      Graphical Example
   Self-Organizing Feature Maps
      Topologies (gridtop, hextop, randtop)
      Distance Functions (dist, linkdist, mandist, boxdist)
      Architecture
      Create a Self-Organizing Map Neural Network (selforgmap)
      Training (learnsomb)
      Examples
   Learning Vector Quantization Networks
      Architecture
      Creating an LVQ Network
      LVQ1 Learning Rule (learnlv1)
      Training
      Supplemental LVQ2.1 Learning Rule (learnlv2)

7  Adaptive Filters and Adaptive Training
   Introduction
      Important Adaptive Functions
   Linear Neuron Model
   Adaptive Linear Network Architecture
      Single ADALINE (linearlayer)
   Least Mean Square Error
   LMS Algorithm (learnwh)
   Adaptive Filtering (adapt)
      Tapped Delay Line
      Adaptive Filter
      Adaptive Filter Example
      Prediction Example
      Noise Cancelation Example
      Multiple Neuron Adaptive Filters

8  Advanced Topics
   Parallel and GPU Computing
      Modes of Parallelism
      Distributed Computing
      Single GPU Computing
      Distributed GPU Computing
      Parallel Time Series
      Parallel Availability, Fallbacks, and Feedback
   Speed and Memory Optimizations
      Memory Reduction
      Fast Elliot Sigmoid
   Multilayer Training Speed and Memory
      SIN Data Set
      PARITY Data Set
      ENGINE Data Set
      CANCER Data Set
      CHOLESTEROL Data Set
      DIABETES Data Set
      Summary
   Improving Generalization
      Early Stopping
      Index Data Division (divideind)
      Random Data Division (dividerand)
      Block Data Division (divideblock)
      Interleaved Data Division (divideint)
      Regularization
      Summary and Discussion of Early Stopping and Regularization
      Posttraining Analysis (postreg)
   Custom Networks
      Custom Network
      Network Definition
      Network Behavior
   Additional Toolbox Functions
   Custom Functions

9  Historical Networks
   Introduction
   Perceptron Networks
      Neuron Model
      Perceptron Architecture
      Create a Perceptron
      Perceptron Learning Rule (learnp)
      Training (train)
      Limitations and Cautions
   Linear Networks
      Neuron Model
      Network Architecture
      Least Mean Square Error
      Linear System Design (newlind)
      Linear Networks with Delays
      LMS Algorithm (learnwh)
      Linear Classification (train)
      Limitations and Cautions
   Hopfield Network
      Fundamentals
      Architecture
      Design (newhop)
   Summary
      Functions

10  Network Object Reference
   Network Properties
      General
      Efficiency
      Architecture
      Subobject Structures
      Functions
      Weight and Bias Values
   Subobject Properties
      Inputs
      Layers
      Outputs
      Biases
      Input Weights
      Layer Weights

11  Bibliography
   Bibliography

A  Mathematical Notation
   Mathematical Notation for Equations and Figures
      Basic Concepts
      Language
      Weight Matrices
      Bias Elements and Vectors
      Time and Iteration
      Layer Notation
      Figure and Equation Examples
   Mathematics and Code Equivalents
      Mathematics Notation to MATLAB Notation
      Figure Notation

B  Blocks for the Simulink Environment
   Block Library
      Transfer Function Blocks
      Net Input Blocks
      Weight Blocks
      Processing Blocks
   Block Generation
      Example
      Suggested Exercises

C  Code Notes
   Dimensions
   Variables
      Utility Function Variables
   Functions
   Code Efficiency
   Argument Checking

Index

Neural Network Toolbox Design Book

The developers of the Neural Network Toolbox software have written a textbook, Neural Network Design (Hagan, Demuth, and Beale, ISBN 0-9717321-0-8). The book presents the theory of neural networks, discusses their design and application, and makes considerable use of the MATLAB environment and Neural Network Toolbox software. Example programs from the book are used in various chapters of this user's guide. (You can find all the book example programs in the Neural Network Toolbox software by typing nnd.)

Obtain this book from John Stovall at (303) 492-3648, or by email at John.Stovall@colorado.edu.

The Neural Network Design textbook includes:

• An Instructor's Manual for those who adopt the book for a class
• Transparency Masters for class use

If you are teaching a class and want an Instructor's Manual (with solutions to the book exercises), contact John Stovall at (303) 492-3648, or by email at John.Stovall@colorado.edu.

To look at sample chapters of the book and to obtain Transparency Masters, go directly to the Neural Network Design page at:

http://hagan.okstate.edu/nnd.html

From this link, you can obtain sample book chapters in PDF format, and you can download the Transparency Masters by clicking Transparency Masters (3.6MB). The Transparency Masters are available in PowerPoint or PDF format.

1  Network Objects, Data, and Training Styles

• “Introduction”
• “Neuron Model”
• “Network Architectures”
• “Network Object”
• “Configuration”
• “Data Structures”
• “Training Styles (Adapt and Train)”

Introduction

The workflow for the neural network design process has seven primary steps:

1  Collect data
2  Create the network
3  Configure the network
4  Initialize the weights and biases
5  Train the network
6  Validate the network
7  Use the network

This topic discusses the basic ideas behind steps 2, 3, 5, and 7. The details of these steps come in later topics, as do discussions of steps 4 and 6, since the fine points are specific to the type of network that you are using. (Data collection in step 1 generally occurs outside the framework of Neural Network Toolbox software, but it is discussed in “Multilayer Networks and Backpropagation Training”.)

The Neural Network Toolbox software uses the network object to store all of the information that defines a neural network. This topic describes the basic components of a neural network and shows how they are created and stored in the network object.

After a neural network has been created, it needs to be configured and then trained. Configuration involves arranging the network so that it is compatible with the problem you want to solve, as defined by sample data. After the network has been configured, the adjustable network parameters (called weights and biases) need to be tuned, so that the network performance is optimized. This tuning process is referred to as training the network. Configuration and training require that the network be provided with example data. This topic shows how to format the data for presentation to the network. It also explains network configuration and the two forms of network training: incremental training and batch training.
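These steps correspond directly to toolbox commands. The following is a minimal sketch rather than an example from this guide: the synthetic data, the choice of feedforwardnet, and the layer size of 10 are illustrative assumptions, not recommendations.

   x = rand(2, 100);              % step 1: (synthetic) inputs, 2 elements x 100 samples
   t = sum(x, 1);                 % targets for a simple function-fitting problem
   net = feedforwardnet(10);      % step 2: create a network with 10 hidden neurons
   net = configure(net, x, t);    % step 3: configure input/output sizes to match the data
   net = init(net);               % step 4: initialize the weights and biases
   [net, tr] = train(net, x, t);  % step 5: train the network
   y = net(x);                    % simulate the trained network
   perf = perform(net, t, y)      % step 6: validate via the performance function
   % step 7: use the network on new data, e.g., y2 = net(xNew)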

There are four different levels at which the Neural Network Toolbox software can be used. The first level is represented by the GUIs that are described in “Getting Started with Neural Network Toolbox”. These provide a quick way to access the power of the toolbox for many problems of function fitting, pattern recognition, clustering, and time series analysis.

The second level of toolbox use is through basic command-line operations. The command-line functions use simple argument lists with intelligent default settings for function parameters. (You can override all of the default settings, for increased functionality.) This topic, and the ones that follow, concentrate on command-line operations.

The GUIs described in Getting Started can automatically generate MATLAB code files with the command-line implementation of the GUI operations. This provides a nice introduction to the use of the command-line functionality.

A third level of toolbox use is customization of the toolbox. This advanced capability allows you to create your own custom neural networks, while still having access to the full functionality of the toolbox.

The fourth level of toolbox usage is the ability to modify any of the M-files contained in the toolbox. Every computational component is written in MATLAB code and is fully accessible.

The first level of toolbox use (through the GUIs) is described in Getting Started, which also introduces command-line operations. The following topics discuss the command-line operations in more detail. The customization of the toolbox is described in “Define Network Architectures”.

Neuron Model

Simple Neuron

The fundamental building block for neural networks is the single-input neuron, such as this example.

There are three distinct functional operations that take place in this example neuron. First, the scalar input p is multiplied by the scalar weight w to form the product wp, again a scalar. Second, the weighted input wp is added to the scalar bias b to form the net input n. (In this case, you can view the bias as shifting the function f to the left by an amount b. The bias is much like a weight, except that it has a constant input of 1.) Finally, the net input is passed through the transfer function f, which produces the scalar output a. The names given to these three processes are: the weight function, the net input function, and the transfer function.

For many types of neural networks, the weight function is a product of a weight times the input, but other weight functions (e.g., the distance between the weight and the input, |w - p|) are sometimes used. (For a list of weight functions, type help nnweight.) The most common net input function is the summation of the weighted inputs with the bias, but other operations, such as multiplication, can be used. (For a list of net input functions, type help nnnetinput.) The “Introduction” in “Radial Basis Networks” discusses how distance can be used as the weight function and multiplication can be used as the net input function. There are also many types of transfer functions. Examples of various transfer functions are in “Transfer Functions” below. (For a list of transfer functions, type help nntransfer.)
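Expressed in MATLAB, the three operations are a few lines of arithmetic. This is a minimal sketch: the particular values of p, w, and b are arbitrary, and tansig stands in here for a generic transfer function f.

   p = 2;           % scalar input
   w = 3;           % scalar weight
   b = -1.5;        % scalar bias
   n = w*p + b;     % net input: weighted input plus bias
   a = tansig(n)    % scalar output of the transfer function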

Note that w and b are both adjustable scalar parameters of the neuron. The central idea of neural networks is that such parameters can be adjusted so that the network exhibits some desired or interesting behavior. Thus, you can train the network to do a particular job by adjusting the weight or bias parameters.

All the neurons in the Neural Network Toolbox software have provision for a bias, and a bias is used in many of the examples and is assumed in most of this toolbox. However, you can omit a bias in a neuron if you want.

Transfer Functions

Many transfer functions are included in the Neural Network Toolbox software. Two of the most commonly used are the linear transfer function (purelin) and the log-sigmoid transfer function (logsig).

Neurons with the linear transfer function are used in the final layer of multilayer networks that are used as function approximators. This is shown in “Multilayer Networks and Backpropagation Training”.

The log-sigmoid transfer function takes the input, which can have any value between plus and minus infinity, and squashes the output into the range 0 to 1.
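You can evaluate and plot any transfer function directly, because each one accepts a vector of net input values. A short sketch using purelin and logsig:

   n = -5:0.1:5;           % a range of net input values
   plot(n, purelin(n))     % linear transfer function: a = n
   figure
   plot(n, logsig(n))      % log-sigmoid: output squashed into (0,1)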


