
Automatic Differentiation
Hamid Reza Ghaffari, Jonathan Li, Yang Li, Zhenghua Nie
Instructor: Prof. Tamas Terlaky
School of Computational Engineering and Science
McMaster University
March 23, 2007

Outline
1. Introductions
2. Forward and Reverse Mode
   - Forward methods
   - Reverse methods
   - Comparison
   - Extended knowledge
   - Case Study
3. Complexity Analysis
   - Forward Mode Complexity
   - Reverse Mode Complexity
4. AD Software
   - AD tools in MATLAB
   - AD in C/C++ (ADIC): developers, introduction, ADIC anatomy, ADIC process, example, handling side effects
References

Introductions
Why Do We Need Derivatives?
- Optimization via gradient methods.
- Unconstrained optimization: minimize y = f(x) requires the gradient or Hessian.
- Constrained optimization: minimize y = f(x) such that c(x) = 0 also requires the Jacobian Jc(x) = [∂c_j/∂x_i].
- Solution of nonlinear equations f(x) = 0 by Newton's method, x^{n+1} = x^n − f′(x^n)^{−1} f(x^n), requires the Jacobian JF = [∂f/∂x].
- Parameter estimation, data assimilation, sensitivity analysis, inverse problems, ...


How Do We Obtain Derivatives?
- Reliability: the correctness and numerical accuracy of the derivative results.
- Computational cost: the amount of runtime and memory required for the derivative code.
- Development time: the time it takes to design, implement, and verify the derivative code, beyond the time to implement the code that computes the underlying function.


Main Approaches
- Hand coding
- Divided differences
- Symbolic differentiation
- Automatic differentiation

Hand Coding
An analytic expression for the derivative is identified first and then implemented by hand in any high-level programming language.
Advantages:
- Accuracy up to machine precision, if care is taken.
- Highly optimized implementation, depending on the skill of the implementer.
Disadvantages:
- Only applicable to "simple" functions, and error-prone.
- Requires considerable human effort.


Divided Differences
Approximate the derivative of a function f w.r.t. the i-th component of x at a particular point x0 by a numerical difference, e.g.

∂f(x)/∂x_i |_{x=x0} ≈ (f(x0 + h·e_i) − f(x0)) / h

where e_i is the i-th Cartesian unit vector.

Divided Differences (ctd.)

∂f(x)/∂x_i |_{x=x0} ≈ (f(x0 + h·e_i) − f(x0)) / h

Advantages:
- Only f is needed; easy to implement; f can be used as a "black box".
- Easy to parallelize.
Disadvantages:
- Accuracy is hard to assess and depends on the choice of h.
- Computational complexity is bounded below by (n + 1)·cost(f).

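The divided-difference formula is easy to sketch in code. The following is an illustrative Python sketch (not from the slides; the function and variable names are invented) of a one-sided divided-difference gradient, which needs the n + 1 evaluations of f quoted in the complexity bound:

```python
def fd_gradient(f, x0, h=1e-6):
    """Approximate the gradient of f at x0 by forward differences.

    One evaluation at x0 plus one per coordinate: (n + 1) * cost(f)."""
    f0 = f(x0)
    grad = []
    for i in range(len(x0)):
        xh = list(x0)
        xh[i] += h                      # x0 + h * e_i
        grad.append((f(xh) - f0) / h)
    return grad

# f(x) = x1^2 + 3*x2 at x0 = (2, 1); the exact gradient is (4, 3).
g = fd_gradient(lambda x: x[0] * x[0] + 3.0 * x[1], [2.0, 1.0])
```

The result is only approximate, and its accuracy depends on the choice of h, illustrating the drawback noted above.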

Symbolic Differentiation
Find an explicit derivative expression using a computer algebra system.
Disadvantages:
- The length of the representation of the resulting derivative expressions increases rapidly with the number n of independent variables.
- Inefficient in terms of computing time, due to the rapid growth of the underlying expressions.
- Unable to deal with constructs such as branches, loops, or subroutines that are inherent in computer codes.

Automatic Differentiation
What is automatic differentiation?
Algorithmic, or automatic, differentiation (AD) is concerned with the accurate and efficient evaluation of derivatives for functions defined by computer programs. No truncation errors are incurred, and the resulting numerical derivative values can be used for all scientific computations that are based on linear, quadratic, or even higher-order approximations to nonlinear scalar or vector functions.

Automatic Differentiation (cont.)
What is the idea behind automatic differentiation?
Automatic differentiation techniques rely on the fact that every function, no matter how complicated, is executed on a computer as a (potentially very long) sequence of elementary operations such as additions and multiplications, and elementary functions such as sin and cos. By repeated application of the chain rule of derivative calculus to the composition of those elementary operations, one can compute derivatives in a completely mechanical fashion.

How Good Is AD?
- Reliability: accurate to machine precision; no truncation error.
- Computational cost: forward mode: (2 + 3n)·cost(f); reverse mode: 5·cost(f).
- Human effort: less time is spent preparing a code for differentiation, in particular in situations where computer models are bound to change frequently.

How Widely Is AD Used?
- Sensitivity analysis of a mesoscale weather model (application area: climate modeling)
- Data assimilation for ocean circulation (application area: oceanography)
- Intensity-modulated radiation therapy (application area: biomedicine)
- Multidisciplinary design of aircraft (application area: computational fluid dynamics)
- The NEOS server (application area: optimization)
Source: http://www.autodiff.org/?module=Applications&submenu=&category=all

Forward and Reverse Mode
AD Methods: Simple Example
[Figure not captured in the transcription]

Simple Example
Unify all the variables.
[Figure not captured in the transcription]

Forward method
Differentiate the code:
u_i = x_i, i = 1, ..., n
u_i = Φ_i({u_j}_{j≺i}), i = n+1, ..., N
(j ≺ i means that u_j is an argument of the elemental operation Φ_i.)
Differentiate:
∇u_i = e_i, i = 1, ..., n
∇u_i = Σ_{j≺i} c_{i,j} ∇u_j, i = n+1, ..., N
(c_{i,j} = ∂Φ_i/∂u_j are the local partial derivatives.)

Reverse method
Compute the adjoint of the code:
ū_j = ∂y/∂u_j, y = (y_1, y_2, ..., y_m)
Compute for the dependent variables:
ū_{n+p−m+j} = e_j, j = 1, ..., m
Compute for the intermediates and independents, ū_j, j = n+p−m, ..., 1:
ū_j = Σ_{i: j≺i} ū_i c_{i,j}

Forward methods
- Method: compute the gradient of each variable, and use the chain rule to pass the gradients along.
- Size of the computed object: each computation propagates vectors of the input size n.
- The computation of the gradient of each variable proceeds together with the computation of the variable itself.
- Easy to implement.

Forward methods: computing variable values / computing gradient values
[Figure not captured in the transcription]

Reverse methods
- Method: compute the adjoint of each variable, and pass the adjoints along.
- Size of the computed object: each computation propagates vectors of the output size m. (Note: the output size is usually 1 in optimization applications.)
- The computation of the adjoint of each variable proceeds only after the computation of all variables is complete.

Reverse methods (cont.)
- Traverse the computational graph in reverse and obtain the parents of each variable so as to compute the adjoint.
- Obtain the gradient by computing each partial derivative one by one.
- Harder to implement.

Reverse methods: computing variable values / computing adjoint values
[Figure not captured in the transcription]

Implementation of reverse mode
As mentioned above, the implementation of forward mode is relatively straightforward. We only compare the important features of the two implementation techniques for reverse mode:
- Source transformation: re-order the code upside down.
- Operator overloading: record the computation on a "tape".

Re-ordering the code upside down:
[Code example not captured in the transcription]

Record the computation on a "tape"
- Record: operations and operands.
- Related technique: checkpointing. If the number of operations grows large, checkpointing prevents the program from exhausting all available memory.
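The tape idea can be sketched as follows (an illustrative Python sketch, not one of the tools discussed later; the class and function names are invented). Each overloaded operation appends a record, its operands and local derivatives c_ij, to a tape, and the adjoints are then propagated by a single backward sweep over the tape:

```python
class Var:
    """A value recorded on a shared tape for reverse-mode AD."""
    def __init__(self, value, tape=None):
        self.value = value
        self.adj = 0.0
        self.parents = []          # list of (parent Var, local derivative c_ij)
        self.tape = tape if tape is not None else []
        self.tape.append(self)     # record this node in execution order

    def _make(self, value, parents):
        out = Var(value, self.tape)
        out.parents = parents
        return out

    def __add__(self, other):
        return self._make(self.value + other.value,
                          [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return self._make(self.value * other.value,
                          [(self, other.value), (other, self.value)])

def gradient(output):
    output.adj = 1.0                       # seed the dependent variable
    for node in reversed(output.tape):     # sweep the tape backwards
        for parent, c in node.parents:
            parent.adj += c * node.adj     # adjoint recurrence

tape = []
x1, x2 = Var(3.0, tape), Var(4.0, tape)
y = x1 * x2 + x1                           # y = x1*x2 + x1
gradient(y)
# x1.adj = x2 + 1 = 5, x2.adj = x1 = 3
```

Checkpointing, mentioned above, would trade recomputation for a shorter tape when the operation count grows large.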

Comparison
The comparison between forward mode and reverse mode covers the following topics:
- Computational complexity
- Memory required
- Time to develop

Cost of Forward Propagation of Derivatives
Define:
N_{c=1}: number of unit local derivatives c_{i,j} = 1
N_{c≠1}: number of non-unit local derivatives c_{i,j} ≠ 0, 1
Solve for the derivatives in forward order ∇u_{n+1}, ∇u_{n+2}, ..., ∇u_N:
∇u_i = Σ_{j≺i} c_{i,j} ∇u_j, i = n+1, ..., N,
with each ∇u_i = (∂u_i/∂x_1, ..., ∂u_i/∂x_n) a length-n vector.
The flop count flops(fwd) is given by
flops(fwd) = n·N_{c≠1} (mults. c_{i,j}·∇u_j with c_{i,j} ≠ 1, 0)
  + n·(N_{c≠1} + N_{c=1}) (adds./subs. of the c_{i,j}·∇u_j terms)
  − n·(p − m) (first adds./subs.)
flops(fwd) = n·(2N_{c≠1} + N_{c=1} − p + m)

Cost of Reverse Propagation of Adjoints
Solve for the adjoints in reverse order ū_{n+p}, ū_{n+p−1}, ..., ū_1:
ū_j = Σ_{i: j≺i} ū_i c_{i,j},
with ū_j = ∂(y_1, y_2, ..., y_m)/∂u_j a length-m vector.
The flop count flops(rev) is given by
flops(rev) = m·N_{c≠1} (mults. ū_i·c_{i,j} with c_{i,j} ≠ 1, 0)
  + m·(N_{c=1} + N_{c≠1}) (adds./subs. of the ū_i·c_{i,j} terms)
flops(rev) = m·(2N_{c≠1} + N_{c=1}).

Memory Required
Used storage: it is not certain which mode takes more memory; usually, reverse mode takes more.
The memory cost of forward mode comes from:
- storing size 1 for each variable;
- storing the input size n for each gradient variable.
The memory cost of reverse mode comes from:
- storing size 1 for each variable;
- storing the output size m for each adjoint variable;
- storing the DAG (directed acyclic graph) that represents the function.

Memory Required (cont.)
Forward mode is more likely to use less memory:
1. if there are reused variables in the original function;
2. if the number of operations is so large that reverse mode requires a lot of memory to store the DAG.
Reverse mode is more likely to use less memory:
1. if n is relatively large, so that storing the gradients costs more than storing the adjoints.

Time to Develop
Usually, it is harder to develop reverse-mode code than forward-mode code, especially when using the source transformation technique.

Time to Develop (cont.)
Conclusion:
- Use forward mode when n ≤ m, e.g. sensitivity analysis with few input parameters.
- Use reverse mode when m ≪ n, e.g. optimization, where m = 1.

Extended knowledge
Directional Derivatives
Forward mode:
- seed d = (d_1, ..., d_n)^T
- seeding: ẋ_i = d_i
- calculates J_f·d
- Multi-directional derivatives: replace d by D, where D = [d_ij], i = 1, ..., n, j = 1, ..., q.
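Seeding can be sketched in a few lines (an illustrative Python sketch; the helper names are invented). Setting the derivative part of each input x_i to d_i makes one forward pass return the directional derivative J_f·d, here for f(x) = x1·x2 + x1 with d = (2, 5):

```python
# (value, derivative) pairs; the derivative slots are seeded with d.
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def mul(a, b): return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def f(x1, x2):                       # f(x) = x1*x2 + x1
    return add(mul(x1, x2), x1)

d = (2.0, 5.0)                       # direction vector
x1 = (3.0, d[0])                     # seed x1' = d1
x2 = (4.0, d[1])                     # seed x2' = d2
y = f(x1, x2)
# y[1] = grad f . d = (x2 + 1)*d1 + x1*d2 = 5*2 + 3*5 = 25
```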

Directional Adjoints
Reverse mode:
- seed v = (v_1, ..., v_m)
- seeding: ȳ_j = v_j
- calculates v^T·J_f
- Multi-directional adjoints: replace v by V, where V = [v_ij], i = 1, ..., q, j = 1, ..., m.

Case Study
Using FADBAD:
- FADBAD was developed by Ole Stauning and Claus Bendtsen.
- Flexible automatic differentiation using templates and operator overloading in ANSI C++.
- Comes as source code only; no additional library is required.
- Free to use.

Using FADBAD (cont.):
- Test function: f(x) = ∏ x_i.
- Objective: test different codings of the function in forward mode; try to reuse variables.
- Result: basically, no matter how the function is coded, the memory cost is about n × n × 8 bytes; there is no difference between reusing variables or not.

Using FADBAD (cont.):
- Test function: f(x) = ∏ x_i.
- Objective: testing reverse mode.
- Result: tested up to n = 6500; forward mode ran out of memory, while reverse mode is 127 times faster and only takes a few MB.
- Remark: we could not see how the DAG consumes memory in reverse mode; this is more likely to be observable with fewer independent variables but a more complicated function.

Complexity Analysis
Code List
A code list is given by re-writing the code into elemental binary and unary operations/functions, e.g. for
y1 = log²(x1 + x2) − x2·x3² + a
y2 = b·log(x1 + x2) + x2/x3 − x2·x3² + a
the code list is
v1 = x1
v2 = x2
v3 = x3
v4 = v1 + v2
v5 = log(v4)
v6 = v3²
v7 = v6 · v2
v8 = v7 − a
v9 = 1/v3
v10 = v2 · v9
v11 = b · v5
v12 = v11 + v10
v13 = v5²
v14 = v12 − v8
v15 = v13 − v8
with outputs y1 = v15, y2 = v14.

Code-list (ctd.)
Assume the code list contains:
- N± additions/subtractions, e.g. v12 = v11 + v10
- N* multiplications, e.g. v1 · v2
- Nf nonlinear functions/operations, e.g. log(v4), 1/v3
for a total of p = m + N± + N* + Nf statements.
Then:
- Each addition/subtraction generates two c_{i,j} = 1.
- Each multiplication generates two c_{i,j} ≠ 1, 0.
- Each nonlinear function generates one c_{i,j} ≠ 1, 0, requiring one nonlinear function evaluation; e.g. v5 = log(v4) gives c_{5,4} = 1/v4.
So we have
N_{c=1} = 2N±
N_{c≠1} = 2N* + Nf

Complexity of Forward Mode
flops(J_f) = flops(f) + flops(c_{i,j}) + flops(fwd)
Assume flops(nonlinear function) = w, w ≥ 1.
The cost of evaluating the function is
flops(f) = N± + N* + w·Nf
The cost of evaluating the local derivatives c_{i,j} is
flops(c_{i,j}) = w·Nf.
The cost of forward propagation of derivatives is
flops(fwd) = n·(2N_{c≠1} + N_{c=1} − p + m) = n·(3N* + N± + Nf)

Complexity of Forward Mode (ctd.)
Then for forward mode
flops(J_f)/flops(f) = 1 + [w·Nf + n·(3N* + N± + Nf)] / (N± + N* + w·Nf)
  = 1 + 3n·N̂* + n·N̂± + n·(1/w + 1/n)·(w·N̂f)
where (N̂±, N̂*, w·N̂f) = (N±, N*, w·Nf) / (N± + N* + w·Nf).
Since N̂± + N̂* + w·N̂f = 1 and all coefficients are positive,
flops(J_f)/flops(f) ≤ 1 + n·max(3, 1, 1/w + 1/n) = 1 + 3n.
For n ≤ m, forward mode is preferred.

Complexity of Reverse Mode
flops(rev) = m·(4N* + 2N± + 2Nf),
giving
flops(J_f)/flops(f) = 1 + 4m·N̂* + 2m·N̂± + m·(2/w + 1/m)·(w·N̂f)
and
flops(J_f)/flops(f) ≤ 1 + m·max(4, 2, 2/w + 1/m) = 1 + 4m.
For m = 1,
flops(∇f) ≤ 5·flops(f)

AD Software
AD Tools in MATLAB
Differentiation Arithmetic
u⃗ = (u, u′),
where u denotes the value of the function u: R → R evaluated at the point x0, and u′ denotes the value u′(x0).
u⃗ + v⃗ = (u + v, u′ + v′)
u⃗ − v⃗ = (u − v, u′ − v′)
u⃗ · v⃗ = (u·v, u·v′ + u′·v)
u⃗ / v⃗ = (u/v, (u′ − (u/v)·v′)/v)
x⃗ = (x, 1)
c⃗ = (c, 0)
Ref: http://www.math.uu.se/~warwick/vt07/FMB/avnm1.pdf

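This differentiation arithmetic maps directly onto operator overloading. A minimal Python sketch (the class and helper names are invented; the MATLAB tools below implement the same idea):

```python
class Dual:
    """A pair (u, u') carrying a value and its derivative at x0."""
    def __init__(self, u, du):
        self.u, self.du = u, du

    def __add__(self, v):
        return Dual(self.u + v.u, self.du + v.du)

    def __sub__(self, v):
        return Dual(self.u - v.u, self.du - v.du)

    def __mul__(self, v):
        return Dual(self.u * v.u, self.u * v.du + self.du * v.u)

    def __truediv__(self, v):
        q = self.u / v.u
        return Dual(q, (self.du - q * v.du) / v.u)

def variable(x0):                    # x -> (x, 1)
    return Dual(x0, 1.0)

def constant(c):                     # c -> (c, 0)
    return Dual(c, 0.0)

x = variable(2.0)
y = (x * x + constant(1.0)) / x      # (x^2 + 1)/x at x = 2
# y.u = 2.5 and y.du = 1 - 1/x^2 = 0.75
```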

Example of a Rational Function
f(x) = (x + 1)(x − 2)/(x + 3)
f(3) = 2/3, f′(3) = ?
f(x⃗) = ((x,1) + (1,0))·((x,1) − (2,0)) / ((x,1) + (3,0))
Inserting the value x⃗ = (3, 1) into f produces
f(x⃗) = f(3,1) = ((3,1) + (1,0))·((3,1) − (2,0)) / ((3,1) + (3,0))
  = (4,1)·(1,1)/(6,1) = (4,5)/(6,1) = (2/3, 13/18).

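The rational-function example above can be replayed on (value, derivative) pairs. A small Python sketch (the helper names are invented) reproducing f(3) = 2/3 and f′(3) = 13/18:

```python
# (value, derivative) pair arithmetic, as in the rules of the previous slide.
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def mul(a, b): return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])
def div(a, b):
    q = a[0] / b[0]
    return (q, (a[1] - q * b[1]) / b[0])

x = (3.0, 1.0)                       # seed the independent variable at x = 3
# f(x) = (x + 1)(x - 2)/(x + 3)
f = div(mul(add(x, (1.0, 0.0)), sub(x, (2.0, 0.0))), add(x, (3.0, 0.0)))
# f = (2/3, 13/18): the value f(3) and the derivative f'(3).
```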

Derivatives of Elementary Functions
Chain rule: (g ∘ u)′(x) = u′(x)·(g′ ∘ u)(x)
g(u⃗) = g((u, u′)) = (g(u), u′·g′(u))
sin u⃗ = sin(u, u′) = (sin u, u′·cos u)
cos u⃗ = cos(u, u′) = (cos u, −u′·sin u)
e^{u⃗} = e^{(u,u′)} = (e^u, u′·e^u).


Example of Sin
From ./Intlab/gradient/@gradient/sin.m
[Code listing not captured in the transcription]

Example for Elementary Functions
Evaluate the derivative at x = 0 of f(x) = (1 + x + e^x)·sin x.
f(x⃗) = (1⃗ + x⃗ + e^{x⃗})·sin x⃗
f(0,1) = ((1,0) + (0,1) + e^{(0,1)})·sin(0,1)
  = ((1,1) + (e^0, e^0))·(sin 0, cos 0)
  = (2,2)·(0,1) = (0,2).

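The worked example above can be replayed with the elementary-function rule g(u, u′) = (g(u), u′·g′(u)). A Python sketch (the helper names are invented):

```python
import math

# (value, derivative) pairs plus elementary-function rules.
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def mul(a, b): return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])
def d_sin(a): return (math.sin(a[0]), a[1] * math.cos(a[0]))
def d_exp(a): return (math.exp(a[0]), a[1] * math.exp(a[0]))

x = (0.0, 1.0)                       # seed x = 0 with derivative 1
# f(x) = (1 + x + e^x) * sin(x)
f = mul(add(add((1.0, 0.0), x), d_exp(x)), d_sin(x))
# f = (0.0, 2.0): f(0) = 0 and f'(0) = 2, matching the slide.
```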

High-order Derivatives
u⃗ = (u, u′, u″)
u⃗ + v⃗ = (u + v, u′ + v′, u″ + v″)
u⃗ − v⃗ = (u − v, u′ − v′, u″ − v″)
u⃗ · v⃗ = (u·v, u·v′ + u′·v, u·v″ + 2u′·v′ + u″·v)
u⃗ / v⃗ = (u/v, (u′ − (u/v)·v′)/v, (u″ − 2(u/v)′·v′ − (u/v)·v″)/v)
...
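A sketch of these second-order rules on triples (u, u′, u″) (illustrative Python; the names are invented). Seeding x⃗ = (x0, 1, 0) makes one pass return the value, first, and second derivative:

```python
def add2(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def mul2(a, b):                      # (uv)'' = u*v'' + 2*u'*v' + u''*v
    return (a[0] * b[0],
            a[0] * b[1] + a[1] * b[0],
            a[0] * b[2] + 2.0 * a[1] * b[1] + a[2] * b[0])

x = (3.0, 1.0, 0.0)                  # x0 = 3, x' = 1, x'' = 0
y = add2(mul2(x, mul2(x, x)), (1.0, 0.0, 0.0))
# y = x^3 + 1 evaluated as (28, 27, 18) = (x0^3 + 1, 3*x0^2, 6*x0)
```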

INTLAB
- Developers: Institute for Reliable Computing, Hamburg University of Technology
- Mode: forward
- Method: operator overloading
- Language: MATLAB
- URL:
- Licensing: open source

Rosenbrock Function
The gradient of the Rosenbrock function f(x) = 100(x2 − x1²)² + (1 − x1)²:
y1 = 400·x1·(x1² − x2) + 2·(x1 − 1)
y2 = −200·(x1² − x2)
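A quick numerical spot-check (an illustrative Python sketch, not from the slides) that the two components above are indeed the gradient of f(x) = 100(x2 − x1²)² + (1 − x1)², by comparison against central differences:

```python
def f(x1, x2):
    return 100.0 * (x2 - x1 * x1) ** 2 + (1.0 - x1) ** 2

def grad(x1, x2):
    y1 = 400.0 * x1 * (x1 * x1 - x2) + 2.0 * (x1 - 1.0)
    y2 = -200.0 * (x1 * x1 - x2)
    return (y1, y2)

x1, x2, h = 1.2, 0.8, 1e-6
g = grad(x1, x2)
fd = ((f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h),     # central difference in x1
      (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h))     # central difference in x2
```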

One Step of Newton's Method with INTLAB
[Code listing not captured in the transcription]

TOMLAB/MAD
- Developers: Marcus M. Edvall and Kenneth Holmstrom, Tomlab Optimization Inc. (TOMLAB/MAD integration); Shaun A. Forth and Robert Ketzscher, Cranfield University (MAD)
- Mode: forward
- Method: operator overloading
- Language: MATLAB
- URL: http://tomlab.biz/products/mad/
- Licensing: commercial license

One Step of Newton's Method with MAD
[Code listing not captured in the transcription]

ADiMat
- Developers: Andre Vehreschild, Institute for Scientific Computing, RWTH Aachen University
- Mode: forward
- Method: source transformation + operator overloading
- Language: MATLAB
- URL:
- Licensing: under discussion

ADiMat's Example

function [result1, result2] = f(x)
% Compute the sin and square-root of x*2.
% Very simple example for ADiMat website.
% Andre Vehreschild, Institute for
% Scientific Computing,
% RWTH Aachen University, D-52056 Aachen,
% Germany.
% vehreschild@sc.rwth-aachen.de
result1 = sin(x);
result2 = sqrt(x*2);

Source: …/vehreschild/adimat/example1.html

ADiMat's Example (cont.)

addiff(@f, 'x', 'result1,result2');
p = magic(5);
g_p = createFullGradients(p);
[g_r1, r1, g_r2, r2] = g_f(g_p, p);
J1 = [g_r1{:}]; % and
J2 = [g_r2{:}];

Source: …/vehreschild/adimat/example1.html

ADiMat's Example (cont.)

function [g_result1, result1, g_result2, result2] = g_f(g_x, x)
% Compute the sin and square-root of x*2.
% Very simple example for ADiMat website.
% Andre Vehreschild, Institute for Scientific Computing,
% RWTH Aachen University, D-52056 Aachen, Germany.
% vehreschild@sc.rwth-aachen.de
g_result1 = ((g_x) .* cos(x));
result1 = sin(x);
g_tmp_f_00000 = g_x * 2;
tmp_f_00000 = x * 2;
g_result2 = ((g_tmp_f_00000) ./ (2 .* sqrt(tmp_f_00000)));
result2 = sqrt(tmp_f_00000);

Source: …/vehreschild/adimat/example1.html

Matrix Calculus
Definition: if X is p×q and Y is m×n, then dY: = (dY/dX)·dX:, where the derivative dY/dX is a large mn×pq matrix and Y: denotes Y vectorized into a column.
d(X²): = (X·dX + dX·X):
d(det(X)) = d(det(Xᵀ)) = det(X)·(X⁻ᵀ):ᵀ·dX:
d(ln(det(X))) = (X⁻ᵀ):ᵀ·dX:
Ref: …s.html
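The last identity can be spot-checked numerically. An illustrative pure-Python sketch for the 2×2 case (the helper names are invented): the derivative of ln det(X) with respect to the entry X_{ij} should equal (X⁻¹)_{ji}:

```python
import math

def det2(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv2(X):
    d = det2(X)
    return [[ X[1][1] / d, -X[0][1] / d],
            [-X[1][0] / d,  X[0][0] / d]]

X = [[4.0, 1.0], [2.0, 3.0]]
Xinv = inv2(X)

# Finite-difference derivative of ln det(X) in the (0, 1) entry;
# the identity predicts it equals (X^{-1})_{10}.
h = 1e-6
Xp = [row[:] for row in X]
Xp[0][1] += h
fd01 = (math.log(det2(Xp)) - math.log(det2(X))) / h
```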

Vandermonde Function
[Code listing not captured in the transcription]
Source: Shaun A. Forth, "An Efficient Overloaded Implementation of Forward Mode Automatic Differentiation in MATLAB", ACM Transactions on Mathematical Software, Vol. 32, No. 2, 2006, pp. 195-222.

Vandermonde Function (cont.)
Experiment on a PIV 3.0GHz PC (Windows XP), MATLAB version 6.5.
[Results figure not captured in the transcription]
Source: Forth (2006), op. cit.

Vandermonde Function (timing results)
[Timing table not recoverable from the transcription]
Unit of CPU time is seconds. Experiment on a PIII 1000MHz PC (Windows 2000), MATLAB version 7.0.1.24704 (R14) Service Pack 1, TOMLAB v5.6, INTLAB version 5.3, ADiMat (beta) 0.4-r9.

Arrowhead Function
[Code listing not captured in the transcription]
Source: Forth (2006), op. cit.

Arrowhead Function (cont.)
Experiment on a PIV 3.0GHz PC (Windows XP), MATLAB version 6.5.
[Results figure not captured in the transcription]
Source: Forth (2006), op. cit.

Arrowhead Function (timing results)
[Timing table not recoverable from the transcription]
Unit of CPU time is seconds. Experiment on a PIII 1000MHz PC (Windows 2000), MATLAB version 7.0.1.24704 (R14) Service Pack 1, TOMLAB v5.6, INTLAB version 5.3, ADiMat (beta) 0.4-r9.

BDQRTIC mod
[Function definition and timing table not recoverable from the transcription]

