Neural Network Toolbox User’s Guide


Neural Network Toolbox User’s Guide
R2011b

Mark Hudson Beale
Martin T. Hagan
Howard B. Demuth

How to Contact MathWorks

Web:    www.mathworks.com/contact_TS.html (Technical Support)
Email:  service@mathworks.com (order status, license renewals, passcodes)
        info@mathworks.com (sales, pricing, and general information)
        (also: product enhancement suggestions, bug reports, documentation error reports)
Phone:  508-647-7000
Fax:    508-647-7001

Mail:   The MathWorks, Inc.
        3 Apple Hill Drive
        Natick, MA 01760-2098

For contact information about worldwide offices, see the MathWorks Web site.

Neural Network Toolbox User’s Guide
COPYRIGHT 1992–2011 by The MathWorks, Inc.

The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.

FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement, and only those rights specified in this Agreement, shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government’s needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.

Trademarks

MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be trademarks or registered trademarks of their respective holders.

Patents

MathWorks products are protected by one or more U.S. patents. Please see www.mathworks.com/patents for more information.

Revision History

June 1992       First printing
April 1993      Second printing
January 1997    Third printing
July 1997       Fourth printing
January 1998    Fifth printing      Revised for Version 3 (Release 11)
September 2000  Sixth printing      Revised for Version 4 (Release 12)
June 2001       Seventh printing    Minor revisions (Release 12.1)
July 2002       Online only         Minor revisions (Release 13)
January 2003    Online only         Minor revisions (Release 13SP1)
June 2004       Online only         Revised for Version 4.0.3 (Release 14)
October 2004    Online only         Revised for Version 4.0.4 (Release 14SP1)
October 2004    Eighth printing     Revised for Version 4.0.4
March 2005      Online only         Revised for Version 4.0.5 (Release 14SP2)
March 2006      Online only         Revised for Version 5.0 (Release 2006a)
September 2006  Ninth printing      Minor revisions (Release 2006b)
March 2007      Online only         Minor revisions (Release 2007a)
September 2007  Online only         Revised for Version 5.1 (Release 2007b)
March 2008      Online only         Revised for Version 6.0 (Release 2008a)
October 2008    Online only         Revised for Version 6.0.1 (Release 2008b)
March 2009      Online only         Revised for Version 6.0.2 (Release 2009a)
September 2009  Online only         Revised for Version 6.0.3 (Release 2009b)
March 2010      Online only         Revised for Version 6.0.4 (Release 2010a)
September 2010  Online only         Revised for Version 7.0 (Release 2010b)
April 2011      Online only         Revised for Version 7.0.1 (Release 2011a)
September 2011  Online only         Revised for Version 7.0.2 (Release 2011b)

Contents

Neural Network Toolbox Design Book

1  Network Objects, Data, and Training Styles
   Introduction
   Neuron Model
      Simple Neuron
      Transfer Functions
      Neuron with Vector Input
   Network Architectures
      One Layer of Neurons
      Multiple Layers of Neurons
      Input and Output Processing Functions
   Network Object
   Configuration
   Data Structures
      Simulation with Concurrent Inputs in a Static Network
      Simulation with Sequential Inputs in a Dynamic Network
      Simulation with Concurrent Inputs in a Dynamic Network
   Training Styles (Adapt and Train)
      Incremental Training with adapt
      Batch Training
      Training Feedback

2  Multilayer Networks and Backpropagation Training
   Introduction
   Multilayer Neural Network Architecture
      Neuron Model (logsig, tansig, purelin)
      Feedforward Network
   Collect and Prepare the Data
      Preprocessing and Postprocessing
      Dividing the Data
   Create, Configure, and Initialize the Network
      Other Related Architectures
      Initializing Weights (init)
   Train the Network
      Training Algorithms
      Efficiency and Memory Reduction
      Generalization
      Training Example
   Post-Training Analysis (Network Validation)
      Improving Results
   Use the Network
   Automatic Code Generation
   Limitations and Cautions

3  Dynamic Networks
   Introduction
      Examples of Dynamic Networks
      Applications of Dynamic Networks
      Dynamic Network Structures
      Dynamic Network Training
   Focused Time-Delay Neural Network (timedelaynet)
   Preparing Data (preparets)
   Distributed Time-Delay Neural Network (distdelaynet)
   NARX Network (narxnet, closeloop)
   Layer-Recurrent Network (layrecnet)
   Training Custom Networks
   Multiple Sequences, Time-Series Utilities, and Error Weighting
      Multiple Sequences
      Time-Series Utilities
      Error Weighting

4  Control Systems
   Introduction
   NN Predictive Control
      System Identification
      Predictive Control
      Using the NN Predictive Controller Block
   NARMA-L2 (Feedback Linearization) Control
      Identification of the NARMA-L2 Model
      NARMA-L2 Controller
      Using the NARMA-L2 Controller Block
   Model Reference Control
      Using the Model Reference Controller Block
   Importing and Exporting
      Importing and Exporting Networks
      Importing and Exporting Training Data

5  Radial Basis Networks
   Introduction
      Important Radial Basis Functions
   Radial Basis Functions
      Neuron Model
      Network Architecture
      Exact Design (newrbe)
      More Efficient Design (newrb)
      Demonstrations
   Probabilistic Neural Networks
      Network Architecture
      Design (newpnn)
   Generalized Regression Networks
      Network Architecture
      Design (newgrnn)

6  Self-Organizing and Learning Vector Quantization Nets
   Introduction
      Important Self-Organizing and LVQ Functions
   Competitive Learning
      Architecture
      Creating a Competitive Neural Network
      Kohonen Learning Rule (learnk)
      Bias Learning Rule (learncon)
      Training
      Graphical Example
   Self-Organizing Feature Maps
      Topologies (gridtop, hextop, randtop)
      Distance Functions (dist, linkdist, mandist, boxdist)
      Architecture
      Creating a Self-Organizing Map Neural Network (newsom)
      Training (learnsomb)
      Examples
   Learning Vector Quantization Networks
      Architecture
      Creating an LVQ Network
      LVQ1 Learning Rule (learnlv1)
      Training
      Supplemental LVQ2.1 Learning Rule (learnlv2)

7  Adaptive Filters and Adaptive Training
   Introduction
      Important Adaptive Functions
   Linear Neuron Model
   Adaptive Linear Network Architecture
      Single ADALINE (newlin)
   Least Mean Square Error
   LMS Algorithm (learnwh)
   Adaptive Filtering (adapt)
      Tapped Delay Line
      Adaptive Filter
      Adaptive Filter Example
      Prediction Example
      Noise Cancelation Example
      Multiple Neuron Adaptive Filters

8  Advanced Topics
   Custom Networks
      Custom Network
      Network Definition
      Network Behavior
   Additional Toolbox Functions
   Speed and Memory Comparison for Training Multilayer Networks
      SIN Data Set
      PARITY Data Set
      ENGINE Data Set
      CANCER Data Set
      CHOLESTEROL Data Set
      DIABETES Data Set
      Summary
   Improving Generalization
      Early Stopping
      Index Data Division (divideind)
      Random Data Division (dividerand)
      Block Data Division (divideblock)
      Interleaved Data Division (divideint)
      Regularization
      Summary and Discussion of Early Stopping and Regularization
      Posttraining Analysis (postreg)
   Custom Functions

9  Historical Networks
   Introduction
   Perceptron Networks
      Neuron Model
      Perceptron Architecture
      Creating a Perceptron (newp)
      Perceptron Learning Rule (learnp)
      Training (train)
      Limitations and Cautions
   Linear Networks
      Neuron Model
      Network Architecture
      Least Mean Square Error
      Linear System Design (newlind)
      Linear Networks with Delays
      LMS Algorithm (learnwh)
      Linear Classification (train)
      Limitations and Cautions
   Hopfield Network
      Fundamentals
      Architecture
      Design (newhop)
   Summary
      Functions

10 Network Object Reference
   Network Properties
      General
      Efficiency
      Architecture
      Subobject Structures
      Functions
      Weight and Bias Values
   Subobject Properties
      Inputs
      Layers
      Outputs
      Biases
      Input Weights
      Layer Weights

11 Bibliography

A  Mathematical Notation
   Mathematical Notation for Equations and Figures
      Basic Concepts
      Language
      Weight Matrices
      Bias Elements and Vectors
      Time and Iteration
      Layer Notation
      Figure and Equation Examples
   Mathematics and Code Equivalents

environment and Neural Network Toolbox software. Demonstration programs from the book are used in various chapters of this user’s guide. (You can find all the book demonstration programs in the Neural Network Toolbox software by typing nnd.) Obtain this book from John Stovall at (303) 492-3648, or by email at John.Stovall@colorado.edu.
