The refinement procedure of Grebogi et al. ([28], hereinafter referred to as GHYS) is analogous to Newton's method for finding a zero of a function. GHYS presented their algorithm for the two-dimensional case. We will also be looking closely at a generalization by Quinlan and Tremaine ([62, 63], with [63] hereinafter referred to as QT) to arbitrary-dimensional Hamiltonian systems.
The basic idea is as follows.
Let $\{p_i\}_{i=0}^{S}$ be a trajectory with $S$ steps that has noise $\delta = K\varepsilon$, where $\varepsilon$ is the machine precision and $K$ is some constant significantly greater than 1 that allows room for improvement towards the machine precision.
Let $e_{i+1}$ be the 1-step error at step $i+1$, where $e_{i+1} = p_{i+1} - f(p_i)$ for all $i$. The set of all 1-step errors is represented by $\{e_i\}_{i=1}^{S}$, and is estimated by a numerical integration technique that has higher accuracy than that used to compute $\{p_i\}$. This describes a function, which we call $g$, that takes as input the entire orbit $\{p_i\}$ and whose output is the set of 1-step errors $\{e_i\}$, i.e., $g(\{p_i\}) = \{e_i\}$.
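To make the role of $g$ concrete, the following is a minimal Python sketch (an illustration, not the authors' code), assuming the orbit is stored as an $(S+1) \times D$ array and \texttt{f\_accurate} is a hypothetical stand-in for the higher-accuracy integration step:

\begin{verbatim}
import numpy as np

def one_step_errors(orbit, f_accurate):
    # g: map an orbit {p_i} to its 1-step errors {e_i}, where
    # e_{i+1} = p_{i+1} - f(p_i) and f is approximated by the
    # higher-accuracy step f_accurate (a hypothetical stand-in).
    return np.array([orbit[i + 1] - f_accurate(orbit[i])
                     for i in range(len(orbit) - 1)])
\end{verbatim}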
Since the 1-step errors are assumed to be small,
$g(\{p_i\})$ is small. That is,
$\{p_i\}$ is close to a zero of $g$, if one
exists. A zero of the function would represent an orbit in which the
1-step errors are identically zero, i.e., a true orbit.
This is an ideal situation in which to run Newton's method.
If the method converges, then a true orbit lies nearby.
Assume we have a noisy $D$-dimensional orbit $\{p_i\}_{i=0}^{S}$, and it has a shadow $\{s_i\}_{i=0}^{S}$. Then
\[
  p_{i+1} = \bar{f}(p_i),
\]
where $\bar{f}$ is an approximation to $f$ with a typical accuracy or noise of $\delta$. Here, $\delta$ should be the accuracy typically used by practitioners of the problem being studied. Now suppose we compute the 1-step errors
\[
  e_{i+1} = p_{i+1} - f(p_i),
\]
although numerically we use $\hat{f}$, a better approximation to $f$ than $\bar{f}$. Then
\[
  c_i = s_i - p_i
\]
represents a correction term that perturbs point $p_i$ towards the shadow.
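As an illustration of the two accuracy levels, $\bar{f}$ and $\hat{f}$ might simply be the same integrator run at two different tolerances. A sketch under that assumption (the tolerances and the use of scipy are ours, not the authors'):

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def make_step(ode_rhs, dt, rtol):
    # Return a one-step map x -> x(t + dt) for the ODE
    # dx/dt = ode_rhs(t, x), integrated at the given tolerance.
    def step(x):
        return solve_ivp(ode_rhs, (0.0, dt), x,
                         rtol=rtol, atol=rtol).y[:, -1]
    return step

# f_bar: the "practitioner accuracy" map that generates the noisy orbit;
# f_hat: the more accurate map used to estimate the 1-step errors.
# (Tolerances purely illustrative.)
# f_bar = make_step(rhs, dt, rtol=1e-6)
# f_hat = make_step(rhs, dt, rtol=1e-13)
\end{verbatim}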
So, since $s_{i+1} = f(s_i)$,
\[
  p_{i+1} + c_{i+1} = f(p_i + c_i) = f(p_i) + L_i c_i + O(\|c_i\|^2),
\]
and subtracting $p_{i+1}$ from both sides and using the definition of $e_{i+1}$ gives
\[
  c_{i+1} = L_i c_i - e_{i+1} + O(\|c_i\|^2). \tag{7}
\]
In the spirit of Newton's method, we ignore
the $O(\|c_i\|^2)$ term in (7), and so
one refinement iteration defines the corrections along the entire orbit:
\[
  c_{i+1} = L_i c_i - e_{i+1}, \qquad i = 0, \ldots, S-1, \tag{8}
\]
where $L_i$ is
the linearized perturbation map. For a discrete map,
$L_i$ is just the Jacobian of the map at step $i$.
For a system of ODEs,
$L_i$ is the Jacobian of the integral of the ODE from step $i$ to
step $i+1$.
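For a discrete map the Jacobian is typically available in closed form. A minimal sketch using the H\'enon map as a stand-in example (not necessarily the system under study here):

\begin{verbatim}
import numpy as np

A, B = 1.4, 0.3  # canonical Henon parameters

def henon(p):
    x, y = p
    return np.array([1.0 - A * x * x + y, B * x])

def henon_jacobian(p):
    # L_i for the discrete-map case: the Jacobian of f at p_i.
    x, _ = p
    return np.array([[-2.0 * A * x, 1.0],
                     [B, 0.0]])
\end{verbatim}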
The computation of the linear maps
$L_i$,
called resolvents for an ODE system,
takes most of the CPU time during a refinement. If we let $D$ be
the dimension of the problem, then
the resolvent has
$D^2$
terms in it, and it takes
$O(D^2)$
time to compute.
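For the ODE case, the resolvent can be obtained by integrating the $D^2$ variational equations $\dot{\Phi} = J(x(t))\,\Phi$ alongside the trajectory. A sketch of this (assuming user-supplied right-hand-side and Jacobian functions; tolerances are illustrative):

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def resolvent(ode_rhs, jacobian, x_i, t_i, t_next, D):
    # Integrate the ODE together with its variational equations
    # dPhi/dt = J(x(t)) Phi, Phi(t_i) = I, so that Phi(t_next) is
    # the resolvent L_i carrying perturbations from step i to i+1.
    def rhs(t, y):
        x, Phi = y[:D], y[D:].reshape(D, D)
        return np.concatenate([np.asarray(ode_rhs(t, x)),
                               (jacobian(t, x) @ Phi).ravel()])
    y0 = np.concatenate([x_i, np.eye(D).ravel()])
    sol = solve_ivp(rhs, (t_i, t_next), y0, rtol=1e-12, atol=1e-12)
    return sol.y[D:, -1].reshape(D, D)  # the D^2 entries of L_i
\end{verbatim}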
Hayes and Jackson [33, 34]
list several optimizations to the
procedure that speed the process considerably, and should be considered
if one is interested in using the algorithm in higher dimensions.
Presumably, if one is interested in
studying simpler high-dimensional systems, a chaotic map would be a
better choice than an ODE system, because no variational equation
integration is needed.
For simplicity of explanation, we assume D=2 for the remainder of this
subsection, deferring discussion of the higher D case to the next subsection.
If the problem were not chaotic, the correction
terms could be computed directly from (8). But
since
$L_i$
will amplify any errors in
$c_i$
not lying in the
stable direction, computing the
$c_i$'s by iterating
(8) forward will amplify errors and typically
produce nothing but noise. Therefore, we split
the error and correction terms into components in the stable ($\hat{s}_i$)
and unstable ($\hat{u}_i$) directions at each timestep:
\[
  e_i = e_i^u \hat{u}_i + e_i^s \hat{s}_i, \tag{9}
\]
\[
  c_i = c_i^u \hat{u}_i + c_i^s \hat{s}_i. \tag{10}
\]
Since it is not known a priori which direction is
unstable at each timestep, the unstable vector at time $0$ is
initialized to an
arbitrary unit vector $\hat{u}_0$. The linearized map is then iterated forward with
\[
  \hat{u}_{i+1} = \frac{L_i \hat{u}_i}{\|L_i \hat{u}_i\|}. \tag{11}
\]
Since $L_i$ magnifies any component that lies in the unstable direction,
and assuming we are not so unlucky as to choose a $\hat{u}_0$ that lies
too close to the stable direction, then
after a few e-folding times $\hat{u}_i$ will point roughly in
the unstable direction at $p_i$.
Similarly, the stable unit direction vectors
are computed by initializing
$\hat{s}_S$ to an arbitrary unit vector
and iterating backward,
\[
  \hat{s}_i = \frac{L_i^{-1} \hat{s}_{i+1}}{\|L_i^{-1} \hat{s}_{i+1}\|}. \tag{12}
\]
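In code, (11) and (12) are forward and backward power iterations with the linear maps $L_i$. A sketch, assuming the $L_i$ are already available as a list of $D \times D$ arrays:

\begin{verbatim}
import numpy as np

def direction_vectors(L, D=2):
    # Estimate unstable (u) and stable (s) unit vectors along the
    # orbit from arbitrary starting vectors, via eqs. (11) and (12).
    S = len(L)
    u = [None] * (S + 1)
    s = [None] * (S + 1)
    u[0] = np.ones(D) / np.sqrt(D)           # arbitrary unit vector
    for i in range(S):                       # forward iteration, eq. (11)
        v = L[i] @ u[i]
        u[i + 1] = v / np.linalg.norm(v)
    s[S] = np.ones(D) / np.sqrt(D)           # arbitrary unit vector
    for i in reversed(range(S)):             # backward iteration, eq. (12)
        w = np.linalg.solve(L[i], s[i + 1])  # L_i^{-1} s_{i+1}
        s[i] = w / np.linalg.norm(w)
    return u, s
\end{verbatim}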
Substituting (9) and (10) into (8) yields
\[
  c_{i+1}^u \hat{u}_{i+1} + c_{i+1}^s \hat{s}_{i+1}
  = L_i \left( c_i^u \hat{u}_i + c_i^s \hat{s}_i \right)
    - e_{i+1}^u \hat{u}_{i+1} - e_{i+1}^s \hat{s}_{i+1}. \tag{13}
\]
While $L_i$ magnifies errors in the unstable direction,
it damps them in the stable direction. Likewise,
$L_i^{-1}$
damps errors in the unstable direction and magnifies errors in the
stable direction. Thus the
$c_i^u$ terms should be computed
backward, and the
$c_i^s$ terms forward.
Taking components of (13)
in the unstable direction at step $i+1$, iterate backward on
\[
  c_i^u = \frac{c_{i+1}^u + e_{i+1}^u}{\|L_i \hat{u}_i\|}, \tag{14}
\]
and taking components in the stable direction, iterate forward on
\[
  c_{i+1}^s = \frac{c_i^s}{\|L_i^{-1} \hat{s}_{i+1}\|} - e_{i+1}^s. \tag{15}
\]
The initial choices for $c_S^u$ and $c_0^s$
are arbitrary as long
as they are small -- smaller than the maximum shadowing distance --
because (15) damps initial conditions and
(14) damps final conditions. GHYS and QT
choose them both as 0.
This choice is probably as good as any,
but it can be seen here that if one shadow exists, there are infinitely
many of them.
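Putting (14) and (15) together, one refinement sweep in 2D might look like the following sketch. It assumes the $\hat{u}_i, \hat{s}_i$ from the previous sketch, uses GHYS and QT's choice $c_S^u = c_0^s = 0$, and obtains the components $e^u, e^s$ by expressing each $e_{i+1}$ in the (generally non-orthogonal) basis $(\hat{u}_{i+1}, \hat{s}_{i+1})$:

\begin{verbatim}
import numpy as np

def refine_once(orbit, errors, L, u, s):
    # One GHYS/QT refinement sweep for D=2: solve (14) backward for
    # the unstable components and (15) forward for the stable ones,
    # then apply the corrections c_i = c_i^u u_i + c_i^s s_i.
    S = len(L)
    eu = [0.0] * (S + 1)
    es = [0.0] * (S + 1)
    for i in range(S):   # components of e_{i+1} in the (u, s) basis
        basis = np.column_stack([u[i + 1], s[i + 1]])
        eu[i + 1], es[i + 1] = np.linalg.solve(basis, errors[i])
    cu = [0.0] * (S + 1)                     # c_S^u = 0
    cs = [0.0] * (S + 1)                     # c_0^s = 0
    for i in reversed(range(S)):             # backward sweep, eq. (14)
        cu[i] = (cu[i + 1] + eu[i + 1]) / np.linalg.norm(L[i] @ u[i])
    for i in range(S):                       # forward sweep, eq. (15)
        growth = np.linalg.norm(np.linalg.solve(L[i], s[i + 1]))
        cs[i + 1] = cs[i] / growth - es[i + 1]
    return np.array([orbit[i] + cu[i] * u[i] + cs[i] * s[i]
                     for i in range(S + 1)])
\end{verbatim}

Repeating this sweep (recomputing the 1-step errors, resolvents, and direction vectors each time) is what drives the noise down toward the machine precision, just as repeated Newton iterations drive a residual toward zero.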
Another way of looking at these initial choices
for
$c_S^u$
and
$c_0^s$
is that they ``pinch'' the growing
components at the end point, and the backward-growing components
at the initial point, to be small.
That is, boundary conditions are being forced
on the problem so that the exponential
divergence is forcibly masked, if possible,
making the solution of (8) numerically stable.
Note that, counter-intuitively, these boundary conditions demand that the initial condition for the shadow and noisy orbits differ along the unstable direction. In fact, this must occur if the change in initial conditions is to have any effect. That is, when looking for a shadow, if perturbations were only allowed in the stable direction, those perturbations would die out, leading the ``shadow'' to follow the true orbit that passes through our initial conditions -- the one that is already known to diverge exponentially from the computed orbit.