March 12 2002
The often-repeated statement that, given the initial conditions, we know what a deterministic system will do far into the future is false. The 19th-century French mathematician H. Poincaré knew it was false, and we know it is false, in the following sense: given imperceptibly different starting points, we can often end up with wildly different outcomes. Such fast separation of nearby trajectories is a signature of chaos. Governed by even the simplest laws of motion, almost any system will exhibit chaotic behaviour. A familiar example is turbulence.
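This fast separation is easy to see numerically. A minimal sketch, using the logistic map as a stand-in for "the simplest laws of motion" (the map, its parameter, and the starting points are illustrative choices, not taken from the text):

```python
import numpy as np

def logistic(x, r=4.0):
    """One step of the logistic map, a textbook chaotic system."""
    return r * x * (1.0 - x)

# Two trajectories whose starting points differ by one part in a billion.
x, y = 0.2, 0.2 + 1e-9
separations = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    separations.append(abs(x - y))

# The gap grows roughly exponentially until it saturates at the size
# of the attractor itself: imperceptibly different starting points,
# wildly different outcomes.
print(separations[0], separations[30])
```

After a few dozen iterations the two trajectories are as far apart as two randomly chosen points; the initial billionth has been amplified beyond repair.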
Turbulence is the unsolved problem of classical physics. However, recent developments have greatly increased our insight into turbulence, and given us new concepts and modes of thought with far-reaching repercussions in many different fields - hydrodynamics, semiconductors, plasmas, chemistry, biology, meteorology, economics, engineering, to name but a few. With the key discoveries made by scientists not trained to work on problems of turbulence, and the new insights affecting so many different fields, these developments are a delightful demonstration of the unity of science.
One of the discoveries was that along the route to chaos many very different physical systems go through similar, quantitatively predictable stages. But the breakthrough consisted not so much in the new predictions, as in developing new ways to think about complex systems. Since antiquity we have used regular motions (clocks, waves, circular orbits, etc.) as the starting approximations to physical phenomena, and accounted for deviations from regular motions by small computable corrections. Traditionally, we think of dynamics as smooth, with small incremental changes governed by natural laws encoded in differential equations. The theory of deterministic chaos seems to tell us that the starting approximation to strongly non-linear systems should be quite different. Portraits of chaotic systems exhibit amazingly rich self-similar structure which is not at all apparent in their formulation in terms of differential equations. To put it more succinctly: Junk your old equations and look for guidance in clouds' repeating patterns.
The theory proposed here is inspired by the way we perceive turbulence. Turbulent systems never settle down, and still you and I can identify a snapshot as a “cloud”, and a student can tell what the dials in her turbulence experiment were set to after a glance at the digitized image of its output. How do we do it?
An answer might have been offered by the 20th century German mathematician E. Hopf (this might be an urban legend, as we have never found a paper in which he says what follows). In this vision turbulence explores a repertoire of distinguishable patterns; as we watch a turbulent system evolve, every so often we catch a glimpse of a familiar whorl:
For any finite spatial resolution (an image represented by a finite number of pixels), at any given instant the system approximately tracks a pattern belonging to a finite alphabet of admissible patterns, and the dynamics can be thought of as a walk through the space of such patterns.
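The "walk through the space of patterns" can be made concrete even on a toy system: coarse-grain each state into a finite alphabet and record which letter the trajectory visits. A sketch, again on the logistic map (the map and the partition at the critical point 1/2 are assumptions of this illustration, not part of the text):

```python
import numpy as np

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def itinerary(x0, n=20):
    """Coarse-grain a trajectory into a two-letter alphabet:
    'L' if the point lies left of the critical point 1/2, 'R' otherwise.
    The continuous orbit becomes a walk through symbol space."""
    symbols = []
    x = x0
    for _ in range(n):
        symbols.append('L' if x < 0.5 else 'R')
        x = logistic(x)
    return ''.join(symbols)

print(itinerary(0.2))  # a coarse-grained "pattern sequence"
```

For a spatially extended system the letters would be whole patterns at finite resolution rather than halves of an interval, but the bookkeeping is the same.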
Carrying out this program in a systematic manner is the grand challenge of dynamical systems theory - how do we deal with dynamics of very many degrees of freedom? There are two issues here: what the admissible patterns are, and how anything is to be computed from them.
On the face of it, the situation seems utterly hopeless; the “butterfly effect” implies that any error, no matter how small, will in finite time overtake the whole calculation, and no amount of computation can beat the finite predictability horizon (5-10 days for weather prediction) that chaos implies.
The issue of “what” here comes to our rescue, with a wonderfully counterintuitive surprise: the theory says that the more unstable the patterns are, the more accurate will be the predictions of the theory based on a small number of the shortest recurrent patterns! The reason is that the short spatio-temporally periodic patterns have not had sufficient time to be rendered meaningless by the “butterfly effect”, and they are the only ones that can be accurately (if laboriously) computed.
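Why are short unstable orbits computable at all? Because one does not follow them forward in time - one pins them down as solutions of a fixed-point equation, which Newton's method solves to machine precision no matter how unstable the orbit is. A sketch on the logistic map (a stand-in: for a flame front the same equation is solved in a vastly larger space):

```python
import numpy as np

def f(x, r=4.0):
    return r * x * (1.0 - x)

def df(x, r=4.0):
    return r * (1.0 - 2.0 * x)

def find_cycle(x0, period, iters=50):
    """Newton search for a point on a periodic orbit:
    solve g(x) = f^period(x) - x = 0.  Unstable orbits are hard to
    *follow*, but easy to *pin down* this way."""
    x = x0
    for _ in range(iters):
        y, dy = x, 1.0
        for _ in range(period):
            dy *= df(y)      # chain rule for the derivative of f^period
            y = f(y)
        g, dg = y - x, dy - 1.0
        x -= g / dg          # Newton step
    return x

x = find_cycle(0.3, 2)       # a point on the period-2 cycle
print(x, f(f(x)) - x)        # residual at machine precision
```

The residual f(f(x)) - x vanishes to floating-point accuracy even though a trajectory started at x would peel away from the cycle within a handful of iterations.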
What does all this have to do with subatomic structures?
Formulated in 1946-49 and tested through the 1970s, quantum electrodynamics (QED) takes free electrons and photons as its point of departure. QED is a wildly successful theory, its prediction for the electron magnetic moment agreeing with experiment to 12 significant digits. Quantum chromodynamics (QCD) seemed the natural next step, the only new feature being the nonlinear gluon-gluon interactions. However, this theory has failed us utterly: thinking in terms of isolated quarks and gluons does not make sense. Strongly nonlinear field theories require radically different approaches.
I propose to re-examine the role that classical solutions play in the quantization of strongly nonlinear fields, and that brings us back to Hopf's vision.
The search for the classical solutions of QCD and gravity has so far been neither very successful nor very systematic. If the classical behavior of these theories is anything like that of the turbulent motions which we see in fluids, we expect very many solutions, with very few of the important ones available in analytical form; the strongly nonlinear classical field theories are turbulent, after all. Furthermore, there is not the dimmest hope that such solutions are either beautiful or analytic, and there has not been much enthusiasm for grinding them out as long as one lacked ideas as to what to do with numerical solutions.
So far, Hopf's vision has been checked only on a very simple physical system, using equations that describe the flutter of a flame you might see in the gas burning on your kitchen stove. As one varies the burning rate, the flame can become very unstable and turbulent. The published investigations of the flame flutter are but a proof of principle, a first step in the direction of implementing Hopf's program. Some 1,000 recurrent patterns have been determined numerically for various variants of the flame front system. Qualitatively, these solutions demonstrate that the recurrent patterns program can be implemented, but how is this information to be used quantitatively?
An example of a recurrent pattern: u(x, t), the height of the flame front at position x at time t.
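A standard model of such flame-front flutter is the Kuramoto-Sivashinsky equation, u_t = -u u_x - u_xx - u_xxxx on a periodic domain. A minimal pseudo-spectral integration sketch - domain size, resolution, time step, and initial condition are chosen here purely for illustration:

```python
import numpy as np

# Kuramoto-Sivashinsky equation u_t = -u*u_x - u_xx - u_xxxx,
# integrated pseudo-spectrally with a semi-implicit Euler step:
# linear terms implicit, nonlinear term explicit.
L = 22.0            # domain size, large enough for weakly turbulent motion
N = 64              # Fourier modes
dt = 0.05
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers
lin = k**2 - k**4                            # -u_xx - u_xxxx in Fourier space
mask = np.abs(k) < (2.0 / 3.0) * np.abs(k).max()  # 2/3-rule dealiasing

x = np.arange(N) * L / N
u = 0.1 * np.cos(2 * np.pi * x / L)          # small initial ripple
uh = np.fft.fft(u)

for _ in range(2000):
    nonlin = -0.5j * k * np.fft.fft(np.real(np.fft.ifft(uh))**2)  # -u u_x
    uh = (uh + dt * nonlin * mask) / (1.0 - dt * lin)

u = np.real(np.fft.ifft(uh))
print(u.max(), u.min())  # the initially tiny ripple has grown into O(1) structure
```

Recurrent patterns of this system are then spatio-temporally periodic solutions u(x, t) = u(x, t + T), pinned down by a Newton search in the space of such discretized fields.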
The periodic orbit theory answers such questions by assembling individual patterns into accurate predictions for (let us say) the dispersion of light by turbulent air. The key idea of the periodic orbit theory is to compute these measurable averages by means of a formula which re-expresses the average as a sum over all the possible patterns, grouped hierarchically by the likelihood of each pattern's occurrence.
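Schematically, and in the standard notation of the periodic orbit literature rather than anything defined above, the average of an observable a is extracted from a dynamical zeta function, an infinite product over the prime (non-repeating) cycles p:

```latex
1/\zeta(\beta, s) \;=\; \prod_{p} \left( 1 - \frac{e^{\beta A_p - s T_p}}{|\Lambda_p|} \right)
```

Here A_p is the observable integrated along the cycle p, T_p is its period, and Lambda_p its expanding stability eigenvalue; the leading zero s(beta) of 1/zeta yields the average as the derivative ds/dbeta evaluated at beta = 0. Note how each cycle enters weighted by 1/|Lambda_p|: the more unstable the pattern, the less it contributes, which is precisely the hierarchy "by likelihood of occurrence" just described.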
But there is a big conceptual gap to bridge between what has been achieved and what needs to be done: even the flame flutter has been probed only in its weakest turbulence regime, and it is an open question to what extent Hopf's vision remains viable as the system grows larger and more turbulent.
Even though I have illustrated it by turbulence, the theory of recurrent patterns that I propose to develop would by no means be restricted to motions of fluids. The key concepts should be applicable to many systems extended in space, from quantum fields governing subatomic phenomena to assemblies of neurons. If a success of this theory of spatially extended systems is forthcoming, the impact would be very broad. Not only would we be in a position to predict and control turbulence in fluids, but we would also have a framework within which to develop the quantum field theory of classically turbulent systems, attempt to prove that quark confinement is an effect of Yang-Mills turbulence, ponder the application of these methods to problems such as the analysis of dynamical 3-dimensional brain measurements, and finally, understand why the clouds are the way they are.