Newton's Method I


Roots and Stationary Points of Univariate Scalar Functions

Uwe Naumann
Informatik 12: Software and Tools for Computational Engineering
RWTH Aachen University

Outline
- Objective and Learning Outcomes
- Recall: Linearization, Intermediate Value Theorem
- Newton's Method: Roots of Nonlinear Equations, Stationary Points and Local Optima
- Summary and Next Steps


Newton's Method I
Objective and Learning Outcomes

Objective
- Introduction to Newton's method for scalar functions.

Learning Outcomes
- You will understand
  - roots of nonlinear functions
  - linearization
  - convergence
  - stationary points and local optima
- You will be able to
  - implement Newton's method
  - investigate the convergence of Newton's method.


Recall
Linearization

The solution of linear equations amounts to simple scalar division. The solution of nonlinear equations can be challenging.

Many numerical methods for nonlinear problems are built on local (at x̃) replacement of the target function with a linear (affine in x) approximation derived from the truncated Taylor series expansion and "hoping" that

  f(x̃ + Δx) ≈ f(x̃) + f′(x̃) · Δx,

i.e., hoping for a reasonably small remainder.

[Figure: "Egg-Laying, Wool-Bearing, Milk-Giving Sow", © Georg Mittenecker @ Wikipedia]

The solution of a sequence of linear problems is then expected to yield an iterative approximation of the solution to the nonlinear problem. Newton's method is THE example.
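A small worked example (not from the slides): for f(x) = x² at x̃ = 1, the truncated expansion reads

  f(1 + Δx) ≈ f(1) + f′(1) · Δx = 1 + 2 · Δx,

so Δx = 0.1 yields the approximation 1.2 versus the exact value 1.21; the remainder (Δx)² = 0.01 is small, as hoped.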

Recall
Intermediate Value Theorem and Bisection

Let y = f(x) be continuous within a neighborhood [x, x + Δx] of x, taking values f(x) and f(x + Δx) at the endpoints of the interval. Then f takes all values between f(x) and f(x + Δx) over the same interval.

If f(x) and f(x + Δx) have different signs, then f has a root within the interval bounded by x and x + Δx, i.e., ∃ x̃ : f(x̃) = 0.

[Plot: 4*(x-1)**3 on [0, 3], changing sign at its root x = 1]

The simplest possible root finding algorithm follows from iterative / recursive bisection of the interval [x, x + Δx].

Unfortunately, the bisection algorithm converges slowly and does not generalize to higher dimensions. Hence, we are going to investigate superior alternatives.

Note: x̃ is not necessarily unique; f can take values outside of [f(x), f(x + Δx)].
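As a concrete illustration of the bisection idea, here is a minimal sketch (not from the slides; the function name bisect and the demo target, taken from the plotted example, are assumptions):

#include <cassert>
#include <cmath>
#include <iostream>

// Demo target from the plot above: f(x) = 4*(x-1)^3 changes sign at x = 1.
double f(double x) { return 4. * std::pow(x - 1., 3); }

// Bisection on a bracket [a, b] with f(a)*f(b) < 0: halve the bracket,
// keeping the half that still contains the sign change.
double bisect(double a, double b, double eps) {
  assert(f(a) * f(b) < 0.);
  while (b - a > eps) {
    double m = 0.5 * (a + b);
    if (f(a) * f(m) <= 0.) b = m; else a = m;
  }
  return 0.5 * (a + b);
}

int main() {
  std::cout << bisect(0., 3., 1e-12) << std::endl; // prints ~1
  return 0;
}

Each iteration halves the bracket, so roughly log2((b − a)/ε) iterations are needed; this slow, linear convergence is what the superior alternatives below improve on.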


Newton's Method
Roots of Nonlinear Equations

Consider a nonlinear equation y = f(x) = 0 at some (starting) point x. Building on the assumption that f(x + Δx) ≈ f(x) + f′(x) · Δx, the root finding problem for f can be replaced locally by the root finding problem for the linearization

  f̃(Δx) = f(x) + f′(x) · Δx.

The right-hand side is a straight line intersecting the y-axis in (Δx = 0, f̃(0) = f(x)).

Solution of

  f̃(Δx) = f(x) + f′(x) · Δx = 0

for Δx yields

  Δx = −f(x) / f′(x),

implying f̃(Δx) = 0.
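A one-step example (not from the slides): for f(x) = x² − 2 at x = 1 we have f(1) = −1 and f′(1) = 2, so Δx = −(−1)/2 = 0.5 and the next iterate is x + Δx = 1.5, already close to the root √2 ≈ 1.414.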

Newton's Method
Iterative Algorithm

If the new iterate is not close enough to the root of the nonlinear function, i.e., |f(x + Δx)| > ε for some measure of accuracy of the numerical approximation ε > 0, then it becomes the starting point for the next iteration, yielding the recurrence

  x = x − f(x) / f′(x).

Convergence of this Newton[–Raphson] method is not guaranteed in general. Damping of the magnitude of the next step may help:

  x = x − α · f(x) / f′(x)  for 0 < α ≤ 1.

The damping parameter α is often determined by line search (e.g., recursive bisection yielding α = 1, 0.5, 0.25, …) such that decrease in the absolute function value is ensured.
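A minimal sketch of the damped variant (not from the slides; the function name solve_damped and the demo target f(x) = atan(x), for which undamped Newton diverges from starting points with magnitude larger than about 1.39, are assumptions):

#include <cmath>
#include <iostream>

double f(double x) { return std::atan(x); }
double dfdx(double x) { return 1. / (1. + x * x); }

// Damped Newton: halve alpha by recursive bisection until the absolute
// function value decreases, then take the damped step.
void solve_damped(double &x, double eps) {
  while (std::fabs(f(x)) > eps) {
    double dx = f(x) / dfdx(x), alpha = 1.;
    while (std::fabs(f(x - alpha * dx)) >= std::fabs(f(x)) && alpha > eps)
      alpha /= 2.;
    x -= alpha * dx;
  }
}

int main() {
  double x = 10.; // undamped Newton diverges from here
  solve_damped(x, 1e-12);
  std::cout << "x=" << x << std::endl; // prints ~0
  return 0;
}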

Newton's Method
Implementation

#include <cmath>
#include <iostream>
#include <string>

template<typename T> T f(const T &x);
template<typename T> T dfdx(const T &x);

template<typename T>
void solve(T &x, const T &eps) {
  while (fabs(f(x)) > eps) x -= f(x) / dfdx(x);
}

int main(int, char *v[]) {
  double x = std::stod(v[1]); // read the starting point from the command line
  solve(x, 1e-12);
  std::cout << "x=" << x << std::endl
            << "f(x)=" << f(x) << std::endl
            << "dfdx(x)=" << dfdx(x) << std::endl;
  return 0;
}
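To make the listing runnable, one might add definitions such as the following (hypothetical choices matching the convergence example below, not part of the slides):

template<typename T> T f(const T &x) { return std::cos(x); }
template<typename T> T dfdx(const T &x) { return -std::sin(x); }

With these definitions, invoking the program with starting point 1 should converge to x ≈ π/2 ≈ 1.5708.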

Newton's Method
Convergence of Fixed-Point Iteration

Newton's method can be regarded as a fixed-point iteration

  x = g(x) = x − f(x) / f′(x).

If at the solution |g′(x)| < 1, then there exists a neighborhood containing values of x for which the fixed-point iteration converges to this solution.

The convergence rate of a fixed-point iteration grows linearly with decreasing values of |g′(x)|. For g′(x) = 0 we get at least quadratic convergence; cubic for g′(x) = g″(x) = 0, and so forth.
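For instance (a worked example, not from the slides), Newton's method for f(x) = x² − 2 gives

  g(x) = x − (x² − 2) / (2x) = (x + 2/x) / 2,  g′(x) = (1 − 2/x²) / 2,

so g′(√2) = 0 and the iteration converges at least quadratically in a neighborhood of √2.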

Newton's Method
Formulation as Fixed-Point Iteration

Newton's method becomes

  x = g(x) = x − f(x) / f′(x),

yielding (by the quotient rule)

  g′(x) = 1 − ((f′(x))² − f(x) · f″(x)) / (f′(x))² = f(x) · f″(x) / (f′(x))².

At the solution, f(x) = 0 implies g′(x) = 0. Assuming a simple root (f(x) = 0, f′(x) ≠ 0), the second derivative of g becomes equal to

  g″(x) = (f′(x) · f″(x) + f(x) · (…)) / (f′(x))² ≠ 0,

implying quadratic convergence within the corresponding neighborhood of the solution if f″(x) ≠ 0, as well as convergence after a single iteration for linear f.

Newton's Method
Convergence (Example)

Let f(x) = cos(x). From

  x = g(x) = x − f(x) / f′(x) = x + cos(x) / sin(x)

follows

  g′(x) = f(x) · f″(x) / (f′(x))² = −(cos(x))² / (sin(x))².

At x̃ = 1 we get |g′(x̃)| ≈ 0.41 and hence convergence to the nearest root at π/2 ≈ 1.57.

At x̃ = 3 we get |g′(x̃)| ≈ 49.21, suggesting divergence. However, the second iterate x̃ ≈ −4.02 yields |g′(x̃)| ≈ 0.69, resulting in convergence to the root closest to −4.02, that is, −3π/2 ≈ −4.71.
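A quick numerical check of this example (a sketch, not from the slides; the iteration count is chosen arbitrarily):

#include <cmath>
#include <iostream>

int main() {
  // Fixed-point iteration x = g(x) = x + cos(x)/sin(x) for f(x) = cos(x).
  for (double x : {1., 3.}) {
    for (int i = 0; i < 10; ++i) x += std::cos(x) / std::sin(x);
    std::cout << x << std::endl; // ~1.5708 from 1; ~-4.7124 from 3
  }
  return 0;
}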

Stationary Points and Local Optima
Conditions

Let y = f(x) be twice continuously differentiable over the domain of interest.

- x̃ is a stationary point of f (necessary condition for local optimality) if

    f′(x̃) = df/dx(x̃) = 0.

- x̃ is a local minimum (sufficient condition for local optimality) of f if, in addition,

    f″(x̃) = d²f/dx²(x̃) > 0  (strict convexity).

- x̃ is a local maximum (sufficient condition for local optimality) of f if, in addition,

    f″(x̃) = d²f/dx²(x̃) < 0  (strict concavity).

f″(x̃) = 0 indicates a non-simple stationary point.
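For example (not from the slides), f(x) = x² has f′(0) = 0 and f″(0) = 2 > 0, so x̃ = 0 is a local minimum; f(x) = x⁴ also has a minimum at 0, but f′(0) = f″(0) = 0 makes it a non-simple stationary point for which the second-order test is inconclusive.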

Stationary Points
Steepest Descent

Consider the nonlinear optimization problem

  min f(x) over x ∈ ℝ,  for f : ℝ → ℝ.

Starting from some initial estimate for the stationary point x̃, the steepest descent method iteratively takes steps in descent directions. In the scalar case the choice is between stepping toward −∞ or +∞.

The first derivative f′(x) indicates local increase (f′(x̃) > 0) or decrease (f′(x̃) < 0) of the function value. Aiming for decrease, the next step should be toward −∞ if f′ > 0 or toward +∞ if f′ < 0. No further local decrease in the function value can be achieved for f′ = 0 (necessary optimality condition).
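As an illustration (not from the slides): for f(x) = (x − 2)² at x̃ = 0 we have f′(0) = −4 < 0, so the step is taken toward +∞, i.e., toward the minimizer x = 2.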

Steepest Descent
Algorithm

The step size is typically damped in order to ensure continued progress toward the minimum of f, yielding the recurrence

  x := x − α · f′(x)  while |f′(x)| > ε.

Comments on line search apply as above.

Validation of a local minimum at x̃ requires f″(x̃) > 0. Similarly, a local maximum is found if f″(x̃) < 0.

Steepest Descent
Implementation

#include <cmath>

template<typename T> T f(const T &x);
template<typename T> T dfdx(const T &x);
template<typename T> T ddfdxx(const T &x);

template<typename T>
void solve(T &x, const T &eps) {
  T g = dfdx(x), y = f(x), y_prev;
  while (fabs(g) > eps) {
    y_prev = y;
    double alpha = 2.; // halved once before the first trial step
    while (y_prev <= y && alpha > eps) {
      T x_trial = x; alpha /= 2;
      x_trial -= alpha * g; y = f(x_trial);
    }
    x -= alpha * g; g = dfdx(x);
  }
}
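A hypothetical instantiation for testing the listing (not from the slides):

template<typename T> T f(const T &x) { return (x - 1.) * (x - 1.); }
template<typename T> T dfdx(const T &x) { return 2. * (x - 1.); }
template<typename T> T ddfdxx(const T &x) { return T(2); }

Starting from x = 0, the line search accepts α = 0.5 and the iteration reaches the minimizer x̃ = 1 immediately; ddfdxx(x̃) = 2 > 0 validates the local minimum.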

Newton's Method
Local Optima

Consider the nonlinear optimization problem

  min f(x) over x ∈ ℝ,  for f : ℝ → ℝ.

Application of Newton's method to

  f′(x) = 0

yields

  x = x − f′(x) / f″(x).

Comments on convergence and line search apply as above.

Validation of a local minimum at x̃ requires f″(x̃) > 0. Similarly, a local maximum is found if f″(x̃) < 0.

Newton's Method
Local Optima: Implementation

#include <cmath>

template<typename T> T f(const T &x);
template<typename T> T dfdx(const T &x);
template<typename T> T ddfdxx(const T &x);

template<typename T>
void solve(T &x, const T &eps) {
  while (fabs(dfdx(x)) > eps)
    x -= dfdx(x) / ddfdxx(x);
}
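With the same hypothetical quadratic as above, f(x) = (x − 1)², the update reads x -= 2(x − 1)/2 and reaches the minimizer x̃ = 1 in a single step from any starting point, consistent with one-iteration convergence for linear f′.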


Newton's Method I
Summary and Next Steps

Summary
- Newton's method for scalar functions
  - roots of nonlinear equations
  - stationary points and local optima

Next Steps
- Inspect the sample code.
- Run further experiments.
- Continue the course to find out more.

