This paper is wrong.

Let's start with some opportunism: Frasca does not need to wait for others to cite his papers. When he writes a new paper, he posts new versions of his older papers with *forward references* to it! It's good to know that, among his many discoveries, Frasca has invented a time machine; for this alone, +1 originality from me.

The paper starts with a "classical equivalence" (which, if it worked, would be a classical embedding). This purportedly takes a classical solution of scalar $\phi^4$ theory and converts it into a solution of gauge theory. Let's explain the method: start with the gauge theory action

$ S = \int d^4x \, {1\over 4g^2}\, \mathrm{Tr}\, F_{\mu\nu} F^{\mu\nu} $

with $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu]$

He takes SU(2), because if SU(2) works, so does any other (compact, non-abelian) Lie group, because they all have an SU(2) embedded inside. Then he makes the ansatz:

$A_x = \sigma_x \phi$

$A_y = \sigma_y \phi$

$A_z = \sigma_z \phi$

and substitutes this ansatz into the action. He notices that the potential (commutator) part of the action reduces to a quartic in $\phi$. He then integrates by parts the terms linear in derivatives, and notices that he gets the Lagrangian of scalar $\phi^4$ theory.

Yay! Equivalence proved!
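To be fair, the algebra of that last step does check out: under the ansatz, the commutator part of $\mathrm{Tr}\,F_{ij}F^{ij}$ really does collapse to a quartic in $\phi$. Here is a quick sympy check of that one step (my own sketch, not Frasca's code; I take $\phi$ constant so only the commutator terms survive):

```python
import sympy as sp
from sympy.physics.matrices import msigma  # Pauli matrices sigma_1..sigma_3

phi = sp.symbols('phi')
sigma = [msigma(i) for i in (1, 2, 3)]
A = [s * phi for s in sigma]    # the ansatz: A_i = sigma_i * phi

# With phi constant, only the commutator part of F_ij survives: F_ij = [A_i, A_j]
def comm(X, Y):
    return X*Y - Y*X

# Trace of F_ij F_ij summed over spatial indices
quartic = sp.simplify(sum((comm(A[i], A[j]) * comm(A[i], A[j])).trace()
                          for i in range(3) for j in range(3)))
print(sp.expand(quartic))   # a pure quartic in phi, no lower-order terms
```

So the potential term is genuinely quartic; the problem, as we'll see, is everything else.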

Wait, is it? Hmm.... consider the point particle Lagrangian

$S = \int \left( \dot{x}^2 + \dot{y}^2 + (x^2 - xy) \right) dt$

substitute the ansatz

$x = f(t)$

$y= f(t)$

and it reduces to a free particle. Does this mean that free-particle solutions $x = y = at + b$ are also solutions of this Lagrangian? I'll let you check for yourself.
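If you don't feel like doing the variation by hand, here's a throwaway sympy check (my own code, obviously not from the paper) that the would-be free-particle solution fails the actual equations of motion:

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
x = sp.Function('x')(t)
y = sp.Function('y')(t)

# Lagrangian of the toy model: L = xdot^2 + ydot^2 + (x^2 - x*y)
L = sp.diff(x, t)**2 + sp.diff(y, t)**2 + (x**2 - x*y)

# Euler-Lagrange equation: d/dt(dL/dqdot) - dL/dq = 0
def euler_lagrange(L, q):
    return sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)

eom_x = euler_lagrange(L, x)
eom_y = euler_lagrange(L, y)

# Substitute the would-be "free particle" solution x = y = a*t + b
sol = a*t + b
residual_x = sp.simplify(eom_x.subs([(x, sol), (y, sol)]).doit())
residual_y = sp.simplify(eom_y.subs([(x, sol), (y, sol)]).doit())
print(residual_x, residual_y)   # -a*t - b, a*t + b : nonzero, so not a solution
```

The residuals are the constraint forces that the substitution-into-the-Lagrangian trick silently throws away.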

Surprise, surprise: you can't just impose a constraint in the Lagrangian; you need to check that the variations in the directions perpendicular to your constraint surface vanish! Normally, when you impose a constraint, there are constraint forces keeping you on the constraint surface, and these have to vanish for the constrained solutions to be compatible with the full equations of motion. This is covered in undergraduate Lagrangian mechanics textbooks, so it would normally not be at the proper level for PhysicsOverflow.

Ok, so the proof is wrong. But perhaps he found the mapping by trial and error, and the mapping still works even though the proof is faulty. So all is ok in the end. Right?

Heck no! There is absolutely no embedding of any $\phi^4$ solutions into gauge theory with this ansatz. The equations of motion are just not satisfied.

To see this, it's enough to check the free case, you know, where $g=0$. In this case, when you properly rescale the vector potentials, the theory reduces to three free, independent U(1) gauge fields in the 1, 2, 3 directions, and his ansatz is for each of these vector potentials to have one component equal to a solution of free scalar field theory. Does this satisfy Maxwell's equations? Hmm... let's check the electric fields, and whaddayaknow! All the spatial derivatives of the scalar have to be zero. This is something you would expect a person who claims to have embedded solutions of one equation into another to check for himself; it's not rocket science.
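Here's that check done explicitly in sympy (again my sketch, not the paper's): take the abelian $g=0$ case, put a free plane wave into the single component $A_x$, and see whether $\partial_\mu F^{\mu\nu}=0$ holds:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
eta = sp.diag(1, -1, -1, -1)          # mostly-minus Minkowski metric

# Ansatz: only A_x is turned on, set to a free scalar field solution
phi = sp.sin(t - x)                   # plane wave: solves the free wave equation
A = [0, phi, 0, 0]                    # covariant components A_mu

# Sanity check: phi really solves the wave equation
box_phi = (sp.diff(phi, t, 2) - sp.diff(phi, x, 2)
           - sp.diff(phi, y, 2) - sp.diff(phi, z, 2))
assert sp.simplify(box_phi) == 0

# Field strength F_{mu nu} and Maxwell's equations d_mu F^{mu nu} = 0
F = sp.Matrix(4, 4, lambda m, n: sp.diff(A[n], coords[m]) - sp.diff(A[m], coords[n]))
F_up = eta * F * eta                  # raise both indices
maxwell = [sum(sp.diff(F_up[m, n], coords[m]) for m in range(4)) for n in range(4)]
residuals = [sp.simplify(e) for e in maxwell]
print(residuals)                      # not all zero -> the ansatz violates Maxwell
```

Any $\phi$ with a nonvanishing spatial gradient fails the same way; only spatially constant solutions sneak through.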

Ok, so there is no classical embedding. Surely, he must have explained something about the quantum theory correctly.

Nope. His paper talks nonsense about the strong-coupling limit of scalar field theory for a while, and then makes the claim that the beta function of QCD makes the coupling grow for a while as you go toward the infrared, in the regime where the theory is weakly coupled, and then the beta function somehow *turns around* and makes the coupling run back to zero in the deep infrared!

This claim is what sparked the discussions here, as this behavior is completely at odds with the known strong-coupling limit of gauge theory on the lattice in the infrared.

But it is also just manifestly ridiculous for a one-dimensional running to turn around! If there is only one parameter flowing, there is a topological obstruction to turning around: going backwards in 1d means retracing the same region you already covered.
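To spell the obstruction out: a one-coupling running is an autonomous first-order flow,

$$ \mu \frac{dg}{d\mu} = \beta(g). $$

For $g(\mu)$ to turn around at some scale $\mu_*$, you would need $\beta(g(\mu_*)) = 0$. But a zero of $\beta$ is a fixed point, and by uniqueness of solutions of this ODE a trajectory can only approach a fixed point asymptotically; it can never pass through it and come back. So a one-parameter flow is monotone between fixed points: it cannot rise and then fall again.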

What is true in the observation is that gauge theory is simple in both limits, far infrared and far ultraviolet. But the weakly interacting quanta in the infrared are glueballs of various integer spins, while the interacting quanta in the ultraviolet are gluons. They are described by completely different theories, and one does not run into the other in a simple way.

But this claim got the paper some traction, because lattice groups in 2007 noticed that the Gribov-Zwanziger ansatz for gauge theory doesn't work to predict the Landau-gauge correlators in the infrared, at least in their large simulations. They found that the gluon propagator didn't vanish at $q=0$, but went to a constant, and the behavior, as Frasca has emphasized, can be fit by assuming that a massive particle contributes to the gluon propagator.
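For concreteness, the kind of fit being described looks like this (a sketch with synthetic data; the numbers are made up for illustration and are not the actual lattice results):

```python
import numpy as np
from scipy.optimize import curve_fit

# Massive-propagator form: D(q^2) = Z / (q^2 + m^2),
# which goes to the finite constant Z/m^2 at q = 0 instead of vanishing.
def massive_prop(q2, Z, m):
    return Z / (q2 + m**2)

# Synthetic "lattice" data: the same form plus a little multiplicative noise
rng = np.random.default_rng(0)
q2 = np.linspace(0.0, 4.0, 40)
data = massive_prop(q2, 3.0, 0.6) * (1 + 0.02 * rng.standard_normal(q2.size))

(Z_fit, m_fit), _ = curve_fit(massive_prop, q2, data, p0=(1.0, 1.0))
print(Z_fit, m_fit, massive_prop(0.0, Z_fit, m_fit))  # finite value at q^2 = 0
```

The fitted $m$ plays the role of the effective gluon mass that Frasca reads off the lattice curves.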

A reasonable candidate for what was propagating in the lattice simulations is a vector glueball, something like the $\rho$ in QCD (except not the $\rho$ itself, because that is a quark-condensate excitation, which is absent in pure gauge theory). This would mean that the Gribov-Zwanziger ansatz is wrong, that the gluon propagator somehow mixes with physical states at long distances, or, in 1960s terminology, that it "reggeizes". I don't know if this can happen; presumably the reason it can't is simply that the gluon carries color, while the propagating excitations at long distances are all colorless.

But because these are lattice simulations, another possibility I see is that the operators they use to create and annihilate gluons are not pure gluon operators: they are lattice link operators, so they include higher-order terms in the expansion of $\exp(iA)$. In Landau gauge, the higher-order terms might be linking the vacuum to a colorless vector state which is propagating. This possibility is easily distinguished numerically from the previous one: the long-distance constant limit of the gluon propagator would shrink slowly as the lattice is made finer, but not as the volume is made bigger. These extra corrections depend on the lattice scale; they disappear when the fluctuations in the integrated vector potential shrink enough to make the higher-order terms vanish. But any such lattice artifact will not shrink too quickly, because the scale of fluctuations in a lattice simulation shrinks only logarithmically, so exponentiation errors in identifying the vector potential operator don't go away until the lattice is teeny-tiny.
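The contamination I'm describing is easy to see in a one-line abelian toy model (my illustration; real simulations are non-abelian and more careful about this): extract the gauge field from the link $U = \exp(iaA)$ the naive way and expand in the lattice spacing $a$:

```python
import sympy as sp

a, A = sp.symbols('a A')   # a: lattice spacing, A: gauge field component (abelian toy)

# Lattice link variable U = exp(i*a*A); a common naive extraction of the
# gauge field from it is A_lat = Im(U)/a = sin(a*A)/a.
A_lat = sp.sin(a * A) / a

# Expand in the lattice spacing: the leading term is A itself, but
# higher-order terms (cubic, quintic in A) survive at finite spacing.
series = sp.series(A_lat, a, 0, 6).removeO()
print(series)   # A plus corrections of order a**2 * A**3 and a**4 * A**5
```

Those $A^3$, $A^5$ pieces are exactly the kind of operators that could connect the vacuum to a colorless vector state, and they only die off as the lattice spacing does.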

I don't know which of these options is the right one. What I do know is that Marco Frasca's explanation is nonsense.