Project 2: Largest Lyapunov Exponents


Eric LaForest
March 9, 2010

Contents

1 Model
2 Analysis Methods
  2.1 Wolf's Algorithm
  2.2 Rosenstein's Algorithm
3 Results
  3.1 Raw Output: Wolf
  3.2 Raw Output: Rosenstein
  3.3 Relationship of LLEs

List of Figures

1 Wolf's Algorithm: Raw Output
2 Wolf's Algorithm: Raw Output (Zoomed)
3 Rosenstein's Algorithm: Raw Output
4 Rosenstein's Algorithm: Raw Output (Zoomed)
5 Rosenstein's Algorithm: Raw Output (Zoomed Further)
6 Wolf's Algorithm: Relationship of LLEs
7 Rosenstein's Algorithm: Relationship of LLEs

References

[1] Rosenstein, M. T., Collins, J. J., and De Luca, C. J. A practical method for calculating largest Lyapunov exponents from small data sets. Physica D 65, 1-2 (1993), 117–134.
[2] Sprott, J. C. Chaos and Time-Series Analysis. Oxford University Press, 2003.
[3] Wolf, A., Swift, J., Swinney, H., and Vastano, J. Determining Lyapunov exponents from a time series. Physica D: Nonlinear Phenomena 16, 3 (July 1985), 285–317.

1 Model

The system for this simulation is the FitzHugh-Nagumo (FHN) model:

    V̇ = V − V³/3 − W + X
    Ẇ = 0.08(V + 0.7 − 0.8W)

driven by the output X(t) of a Forced Negative Resistance Oscillator (FNRO):

    Ẋ = Y
    Ẏ = 0.2(1 − X²)Y − X³ + F(t)

itself driven by a base oscillator stimulus F(t):

    F(t) = A cos(ωt)

The parameters of this experiment are the amplitude A and frequency ω of the base oscillator, varied over a range that was empirically determined to give some interesting variations in the final results:

    A = {A_i | 16 ≤ A_i ≤ 18}
    ω = {ω_j | 2 ≤ ω_j ≤ 5}

This simulation investigates the chaotic behaviour of this system over this range by calculating the Largest Lyapunov Exponent (LLE) for both V(t) and X(t). As the system formulas are available, Wolf's algorithm [3] can be used to determine the LLEs. As a cross-check, a time-series is generated and analyzed for LLEs using Rosenstein's algorithm [1]. These algorithms, as well as additional clarifications, are also described in a more accessible manner in Sprott [2, Ch. 5.6 and 10.4].
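To make the model concrete, the coupled system can be integrated numerically. The sketch below is an illustration only and is not the report's own code: the use of Python/SciPy, the integrator tolerances, the initial state, and the sample parameter values A = 16 and ω = 3 are all assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    def coupled_rhs(t, state, A, omega):
        # State: V, W (FitzHugh-Nagumo) and X, Y (forced negative resistance oscillator).
        V, W, X, Y = state
        F = A * np.cos(omega * t)                # base oscillator stimulus F(t)
        dV = V - V**3 / 3.0 - W + X              # FHN membrane variable, driven by X(t)
        dW = 0.08 * (V + 0.7 - 0.8 * W)          # FHN recovery variable
        dX = Y                                   # FNRO position
        dY = 0.2 * (1.0 - X**2) * Y - X**3 + F   # FNRO velocity, driven by F(t)
        return [dV, dW, dX, dY]

    # Sample the trajectory over t in [0, 100] in steps of 0.1, matching the
    # 1000-point time-series later fed to Rosenstein's algorithm.
    # Initial conditions and tolerances are illustrative assumptions.
    t_eval = np.arange(0.0, 100.0, 0.1)
    sol = solve_ivp(coupled_rhs, (0.0, 100.0), [0.0, 0.0, 0.1, 0.0],
                    t_eval=t_eval, args=(16.0, 3.0), rtol=1e-8, atol=1e-10)
    V_series, X_series = sol.y[0], sol.y[2]

The resulting V_series and X_series are the two signals whose LLEs are examined in Section 3.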

2 Analysis Methods

Since I was writing the analysis code from scratch, in order to provide a cross-check against programming errors and parameter maladjustment, I implemented two different algorithms to find the LLE: Wolf's and Rosenstein's.

2.1 Wolf's Algorithm

Wolf's algorithm is straightforward and uses the formulas defining the system. It calculates two trajectories in the system, initially separated by a very small interval R0. The first trajectory is taken as a reference, or 'fiducial', trajectory, while the second is considered 'perturbed'. Both are iterated together until their separation R1 is large enough, at which point an estimate of the LLE can be calculated as λ1 = (1/t) log2|R1/R0|. The perturbed trajectory is then moved back to a separation of sign(R1)·R0 towards the fiducial, and the process repeated. Over time, a running average of λ1 will converge towards the actual LLE.

In this analysis, the separation was deemed sufficient at 3R0, since log2(3) > 1, meaning at least one bit of information is gained about λ1. Given double-precision numbers, Sprott recommends R0 = 10⁻¹⁰ as sufficiently small yet much larger than the minimum precision. The algorithm is iterated until the convergence error is less than 0.01. Finer precision was possible, but took an impractical amount of time to compute.

2.2 Rosenstein's Algorithm

Rosenstein's algorithm works on recorded time-series, where the system formulas may not be available. It begins by reconstructing an approximation of the system dynamics by embedding the time-series in a phase space where each point is a vector of the previous m points in time (the 'embedding dimension'), each separated by a lag of j time units. Although Takens' theorem states that an embedding dimension of 2D + 1 is required to guarantee capturing all the dynamics of a system of order D, it is often sufficient in practice to use m = D. Similarly, although an effective time lag must be determined experimentally, in most cases j = 1 will suffice.

Given this embedding of the time-series, for each point I find its nearest neighbour (in the Euclidean sense) whose temporal distance is greater than the mean period of the system, corresponding to the next approximate cycle in the system's attractor. This constraint positions the neighbours as a pair of slightly separated initial conditions for different trajectories. The mean period was calculated as the reciprocal of the mean frequency of the power spectrum of the time-series, computed in the usual manner using the FFT.

I can now perform a process similar to Wolf's algorithm to approximate the LLE: for each point and its nearest neighbour I calculate the logarithm of their separation, and then average the estimates together. This process is then repeated one step forward in time for each pair of neighbours, giving another average estimate. Because these estimates are repeatedly averaged over multiple trajectories spread across the entire time-series, the results are fast and accurate, even in the presence of noise (which is absent in this generated time-series) and a paucity of data points.

These estimates over time can then be fit to a line using least squares, whose slope is the calculated LLE. Only the first few points are useful, since as the distance in time increases it becomes likelier that neighbours will begin re-converging, and thus the slope falls towards zero. In this analysis, the first 5 points were found to give a meaningful fit. If a least-squares fit could not be found, then the slope was assumed to be zero.
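As an illustration of the two-trajectory idea behind Wolf's algorithm, here is a minimal Python sketch. It is not the report's implementation: it uses a fixed-step RK4 integrator, it measures the separation as the Euclidean norm of the whole state difference (rather than tracking each variable separately, as the report describes), and its step size and iteration count are arbitrary illustrative choices.

    import numpy as np

    def rk4_step(rhs, t, s, dt, args):
        # One classical Runge-Kutta step for the system s' = rhs(t, s, *args).
        k1 = np.asarray(rhs(t, s, *args))
        k2 = np.asarray(rhs(t + dt / 2, s + dt / 2 * k1, *args))
        k3 = np.asarray(rhs(t + dt / 2, s + dt / 2 * k2, *args))
        k4 = np.asarray(rhs(t + dt, s + dt * k3, *args))
        return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def wolf_lle(rhs, s0, dt=0.01, n_steps=200_000, r0=1e-10, args=()):
        # Largest Lyapunov exponent (bits per unit time) estimated from the system equations.
        fiducial = np.asarray(s0, dtype=float)
        perturbed = fiducial.copy()
        perturbed[0] += r0                         # initial separation R0
        log_sum = 0.0
        for i in range(n_steps):
            t = i * dt
            fiducial = rk4_step(rhs, t, fiducial, dt, args)
            perturbed = rk4_step(rhs, t, perturbed, dt, args)
            r1 = np.linalg.norm(perturbed - fiducial)
            if r1 >= 3 * r0:                       # at least log2(3) > 1 bit of information gained
                log_sum += np.log2(r1 / r0)
                # Pull the perturbed trajectory back to distance R0 from the fiducial,
                # keeping the current direction of separation.
                perturbed = fiducial + (perturbed - fiducial) * (r0 / r1)
        return log_sum / (n_steps * dt)            # running average converges towards lambda_1

And a corresponding sketch of Rosenstein's method, under the stated assumptions m = 2, j = 1, a fit over the first 5 points of the divergence curve, and a power-weighted mean frequency as one plausible reading of "the mean frequency of the power spectrum":

    def rosenstein_lle(x, m=2, j=1, dt=0.1, n_fit=5):
        # Estimate the largest Lyapunov exponent of a scalar time-series x sampled at interval dt.
        x = np.asarray(x, dtype=float)
        # Mean period = reciprocal of the (power-weighted) mean frequency of the spectrum.
        freqs = np.fft.rfftfreq(len(x), d=dt)
        power = np.abs(np.fft.rfft(x - x.mean())) ** 2
        mean_freq = np.sum(freqs[1:] * power[1:]) / np.sum(power[1:])
        min_sep = max(1, int(round(1.0 / (mean_freq * dt))))   # mean period, in samples
        # Delay embedding: row i is (x[i], x[i+j], ..., x[i+(m-1)j]).
        n = len(x) - (m - 1) * j
        emb = np.column_stack([x[k * j : k * j + n] for k in range(m)])
        # Nearest neighbour of each point, excluding temporally close points.
        dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        idx = np.arange(n)
        dist[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf
        nn = dist.argmin(axis=1)
        # Average log separation as each pair of neighbours is stepped forward in time.
        divergence = []
        for k in range(n_fit):
            ok = (idx + k < n) & (nn + k < n)
            sep = np.linalg.norm(emb[idx[ok] + k] - emb[nn[ok] + k], axis=1)
            sep = sep[sep > 0.0]
            divergence.append(np.log(sep).mean())
        # The LLE is the least-squares slope of the divergence curve over the first n_fit points.
        slope, _intercept = np.polyfit(dt * np.arange(n_fit), divergence, 1)
        return slope

Applied to the generated series from Section 1, rosenstein_lle(V_series) and rosenstein_lle(X_series) would give the two LLE estimates compared in Section 3 (in natural-log units; dividing by ln 2 converts to the bits-per-time convention of the Wolf sketch above).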

3 Results

In order to check the output of each algorithm, I first display them alongside each other, along with the variations in the system parameters.

3.1 Raw Output: Wolf

Figures 1 and 2 show the LLEs, calculated by Wolf's algorithm, of the FitzHugh-Nagumo (FHN) model (V(t)) and the Forced Negative Resistance Oscillator (FNRO) (X(t)) over the combined ranges of amplitude (A) and frequency (ω) of the base oscillator, both varied in intervals of 0.1.

Finer resolution was not practical, as the calculation of each LLE took approximately 28 seconds, for a total of almost 5 hours for 600 data points. This was due to the poor convergence behaviour of the algorithm, and to the fact that my implementation attempted to calculate the LLE of all four values in the system (V(t), W(t), X(t), Y(t)), which ended up being unnecessary effort.

Overall, the change in amplitude has no effect on the LLEs of the system, as it is always high enough to cause the FNRO to dominate the FHN system. A more interesting choice of amplitude might have been closer to 1, where the effect of the stimulus on the FHN system begins to cause action potentials, and thus possibly some additional chaotic behaviour.

Conversely, the effect of varying the frequency was significant and repeatable. The effect is made more visible in Figure 2.

3.2 Raw Output: Rosenstein

Figures 3, 4, and 5 show the same juxtaposed parameter variations and LLEs as before, but calculated using Rosenstein's algorithm applied to a pre-generated sequence of 1000 data points over a time span of 0 to 100 in steps of 0.1.

Given the much greater efficiency of this algorithm, each LLE took only about 2 seconds to compute, and thus a finer resolution of 0.01 was possible in the variation of the parameters. This precision also allowed for a more detailed look in Figure 5.

As hoped, the output of Rosenstein's algorithm agrees with that of Wolf's algorithm, although with much more precision and speed. Both show the same general rise and fall in the LLE as the frequency varies, and neither is affected by the change in amplitude.
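For reference, the parameter sweep itself is just a double loop over A and ω. The sketch below assumes the hypothetical coupled_rhs and rosenstein_lle helpers from the earlier sketches, and its step sizes (0.1 in A, 0.01 in ω) are illustrative choices rather than a record of exactly what was run.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Ranges from Section 1: A in [16, 18], omega in [2, 5]; step sizes are illustrative.
    A_values = np.arange(16.0, 18.0 + 1e-9, 0.1)
    omega_values = np.arange(2.0, 5.0 + 1e-9, 0.01)

    t_eval = np.arange(0.0, 100.0, 0.1)
    results = []
    for A in A_values:
        for omega in omega_values:
            sol = solve_ivp(coupled_rhs, (0.0, 100.0), [0.0, 0.0, 0.1, 0.0],
                            t_eval=t_eval, args=(A, omega), rtol=1e-8, atol=1e-10)
            results.append((A, omega,
                            rosenstein_lle(sol.y[0]),    # LLE of V(t)
                            rosenstein_lle(sol.y[2])))   # LLE of X(t)
    results = np.asarray(results)  # columns: A, omega, LLE of V(t), LLE of X(t)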

[Figure 1: Wolf's Algorithm: Raw Output. Plot: Evolution of Largest Lyapunov Exponents Over Varying A and ω; panels: Amplitude (A), Frequency (ω), LLE of V(t), LLE of X(t).]

[Figure 2: Wolf's Algorithm: Raw Output (Zoomed). Plot: Evolution of Largest Lyapunov Exponents Over Varying A and ω; panels: Amplitude (A), Frequency (ω), LLE of V(t), LLE of X(t).]

[Figure 3: Rosenstein's Algorithm: Raw Output. Plot: Evolution of Largest Lyapunov Exponents Over Varying A and ω; panels: Amplitude (A), Frequency (ω), LLE of V(t), LLE of X(t).]

[Figure 4: Rosenstein's Algorithm: Raw Output (Zoomed). Plot: Evolution of Largest Lyapunov Exponents Over Varying A and ω; panels: Amplitude (A), Frequency (ω), LLE of V(t), LLE of X(t).]

[Figure 5: Rosenstein's Algorithm: Raw Output (Zoomed Further). Plot: Evolution of Largest Lyapunov Exponents Over Varying A and ω; panels: Amplitude (A), Frequency (ω), LLE of V(t), LLE of X(t).]

3.3 Relationship of LLEs

To further verify the relationship suggested by the previous figures between the LLEs of the FNRO and FHN models as the parameters change, Figures 6 and 7 show a scatter plot of the LLEs of the FHN model (V(t)) as a function of the LLEs of the driving FNRO (X(t)), with the base oscillator frequency (ω) as the colour of each point. The change in amplitude had no effect and is ignored.

Even with the coarse resolution afforded by Wolf's algorithm, it is clear that the relationship between the LLEs is proportional, if not exact. One can see that the LLEs are lower with a lower frequency (blue and cyan), jump higher somewhere between a frequency of 3 and 4 (yellow and orange), and then decrease along a shallower slope as the frequency increases from 4 to 5 (red).

These relationships are much clearer when using Rosenstein's algorithm, in Figure 7. The much greater precision and quantity of results clearly shows the progression of the LLEs as the frequency increases: first in the middle (blue) and decreasing together until some point above 3 (cyan), then increasing again with a very sudden jump between 3 and 4 (green). Afterwards, the LLEs of the FNRO remain relatively steady while those of the FHN jump up (yellow). They then both decrease together along a shallower line (orange and red). These relationships match those first hinted at in Figures 3, 4, and 5.
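Plots like Figures 6 and 7 are ordinary scatter plots with ω mapped to colour. A minimal matplotlib sketch, assuming the hypothetical results array from the sweep sketch in Section 3 (columns A, ω, LLE of V(t), LLE of X(t)):

    import matplotlib.pyplot as plt

    # results columns: A, omega, LLE of V(t), LLE of X(t)
    sc = plt.scatter(results[:, 3], results[:, 2], c=results[:, 1], cmap='jet', s=8)
    plt.colorbar(sc, label='Frequency (omega)')
    plt.xlabel('LLE of X(t)')
    plt.ylabel('LLE of V(t)')
    plt.title('Relationship of Largest Lyapunov Exponents (colour coded by omega)')
    plt.show()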

[Figure 6: Wolf's Algorithm: Relationship of LLEs. Plot: Relationship of Largest Lyapunov Exponents, LLE of V(t) versus LLE of X(t), colour coded by ω.]

[Figure 7: Rosenstein's Algorithm: Relationship of LLEs. Plot: Relationship of Largest Lyapunov Exponents, LLE of V(t) versus LLE of X(t), colour coded by ω.]
