Video: “Gauge invariance is the bane of my life,” by Predrag Cvitanović, on Vimeo.
[If you want to read the blackboard, click on [HD], lower right corner.]

The story:

One day terror struck: in the early spring of 1975 I was invited to Caltech to give a talk. I could go to any other place and say that Kinoshita and I had computed thousands of diagrams and that the answer is, well, the answer is:

\begin{displaymath}
+ (0.92 \pm 0.02) \left({\alpha \over \pi}\right)^3.
\end{displaymath}

But in front of Feynman? He is going to ask me: why “+” and not “-”? Why do 100 diagrams yield a number of order unity, and not 10 or 100 or any other number? It might be the most precise agreement between fundamental theory and experiment in all of physics, but what does it mean?

Now, you probably do not know how stupid quantum field theory is in practice. What is done (or at least was done, before the field theorists left this planet for pastures beyond the Planck length) is:

1)   start with something eminently sensible (electron magnetic moment; positronium)
2)   expand this into combinatorially many Feynman diagrams, each a many-dimensional integral whose integrand has thousands of terms; each integral is UV divergent, IR divergent, and meaningless on its own, as its value depends on the choice of gauge
3)   integrate each such integral by Monte Carlo methods in 10-20 dimensions, with a dreadfully oscillatory integrand and no hint of what the answer should be; in our case $\pm 10$ to $\pm 100$ was a typical range (a toy sketch of this step follows below)
4)   add up hundreds of such apparently random contributions and get

\begin{displaymath}
+ (0.92 \pm 0.02) \left({\alpha \over \pi}\right)^3.
\end{displaymath}
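For orientation, here is a toy sketch of step 3 above (a baby example only, nothing like the actual renormalized Feynman-parametric integrands): a plain Monte Carlo estimate of a deliberately simple oscillatory integral over the 10-dimensional unit cube, whose exact value is zero.

\begin{verbatim}
import numpy as np

# Toy illustration: plain Monte Carlo estimate of the oscillatory integral
#   I = \int_{[0,1]^10} cos( 2 pi (x_1 + ... + x_10) ) dx_1 ... dx_10 = 0  (exactly),
# to show how noisy naive sampling of an oscillatory integrand is.
rng = np.random.default_rng(1)
d, n = 10, 100_000
x = rng.random((n, d))                    # n sample points in the 10-dim unit cube
f = np.cos(2 * np.pi * x.sum(axis=1))     # integrand evaluated at each sample
estimate = f.mean()                       # Monte Carlo estimate of I
std_err = f.std(ddof=1) / np.sqrt(n)      # one-sigma statistical error
print(f"I = {estimate:.4f} +/- {std_err:.4f}   (exact value 0)")
\end{verbatim}

Even in this baby example the estimate emerges only through near-cancellation of positive and negative samples; the real integrands, with their thousands of terms, are incomparably worse.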

So, in fear of God I went into a deep trance and after a month came up with this: if the gauge invariance of QED guarantees that all UV and IR divergences cancel, why not also the finite parts?

And indeed: when the diagrams that we had computed were grouped into gauge-invariant subsets, a rather surprising thing happened; while the finite part of each Feynman diagram is of order 10 to 100, every subset adds up to approximately

\begin{displaymath}
\pm {1 \over 2} \left({\alpha \over \pi}\right)^n.
\end{displaymath}

The n=1 term is the Schwinger correction. If you take this numerical observation seriously, the “zeroth” order approximation to the electron magnetic moment is given by
\begin{displaymath}
{1 \over 2} (g-2) = {1 \over 2} {\alpha \over \pi}\,
{1 \over \left(\,\cdots\,\left({\alpha \over \pi}\right)^2\right)^2}
+ \mbox{``corrections''}.
\end{displaymath}

Now, this is a great heresy: my colleagues will tell you that Dyson has shown that the perturbation expansion is an asymptotic series, in the sense that the $n$-th order contribution should explode combinatorially
\begin{displaymath}
{1 \over 2} (g-2) \approx
\cdots + n^n \left({\alpha \over \pi}\right)^n + \cdots
\,,
\end{displaymath}

and not grow slowly, as in my conjecture
\begin{displaymath}
{1 \over 2} (g-2) \approx
\cdots + n \left({\alpha \over \pi}\right)^n + \cdots
\,.
\end{displaymath}
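To spell out why the growth rate matters (a standard comparison, added here only for orientation): coefficients growing like $n$ sum to a closed form for any $\alpha/\pi < 1$, whereas coefficients growing like $n^n$ give a series that diverges for any nonzero coupling,
\begin{displaymath}
\sum_{n=1}^{\infty} n \, x^n = {x \over (1-x)^2} \quad \mbox{for } |x|<1 \,,
\qquad \mbox{whereas} \quad
\sum_{n=1}^{\infty} n^n x^n \ \mbox{diverges for every } x \neq 0 \,,
\end{displaymath}
so at $x = \alpha/\pi \approx 0.0023$ a series with slowly growing coefficients is utterly tame.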

But do not take them too seriously: very few of them have carried through gauge theory calculations. It would be an incredible stroke of luck if my guess were anywhere close to the true asymptotics, but any growth rate slower than combinatorial would suffice for a convergent theory. For me, the above is the most intriguing hint that something deeper than what we know underlies quantum field theory, and the most suggestive lesson of our calculation.

I prepared the talk for Feynman, but was fated to arrive from SLAC to Caltech precisely five days after the discovery of the $J/\psi$ particle. I had to give an impromptu, irrelevant talk about what the total $e^{+}e^{-}$ cross-section would have looked like if the $J/\psi$ were a heavy vector boson, and had only 5 minutes for my conjecture about the finiteness of gauge theories. Feynman liked it and gave me some sage advice.

Toichiro Kinoshita writes (Nov 1995):

The eighth order contribution to the electron g - 2 from diagrams without any fermion loop is -1.9344(1370) as of 1990 (in my book on QED). So, unfortunately, it appears that it does not conform to your pet theory.

I remain hopeful (Nov 2013):

The heroic calculation of Aoyama, Hayakawa, Kinoshita, and Nio (Jul 2012) has progressed to the tenth order. They use the self-energy method for evaluating $(g-2)$, described in the Cvitanović and Kinoshita triptych of sixth-order papers, so they are not able to separate out the minimal gauge sets that I use to argue that mass-shell QED is convergent. They report for the “quenched” set (no lepton loops, only virtual photons):

$A^{(8)}_{1,V} \approx -2$, while my guess based on 6 gauge sets would be $A^{(8)}_{1,V} = 0$, which is pretty darn close, considering this is a sum of 518 diagrams and a zillion counterterms.

$A^{(10)}_{1,V} \approx 10$, while I predict, based on 9 gauge sets, that $A^{(10)}_{1,V} = 3/2$, which is extremely good for a sum of 6,354 diagrams! Even the sign comes out right :)

Suppose each diagram contributed $\approx \pm 1$ (the actual numbers are more like $\approx \pm 100$) and the contributions were statistically uncorrelated. Then one would estimate $A^{(10)}_{1,V} \approx \pm 80$. The cancellations that lead to $A^{(10)}_{1,V} \approx 10$ are amazing.
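Where the $\pm 80$ comes from (just the standard random-walk estimate, spelled out): a sum of $N$ uncorrelated terms of typical magnitude 1 has typical magnitude $\sqrt{N}$, and here
\begin{displaymath}
\sqrt{N} = \sqrt{6354} \approx 80 \,.
\end{displaymath}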

Of course, asymptotic series can be very good, and all of this is wild guesswork that means nothing until there is a method to estimate these sums. One thing that would make the finiteness conjecture more convincing would be to check how close individual gauge sets are to $\pm 1/2$, but I do not think that data is available: does anyone evaluate individual vertex diagrams, rather than the self-energy diagrams?

I am even more hopeful (June 2017):

I've now reread all the relevant literature, and see several ways forward. It is a longer story; for an overview, please see my blog.