Do solutions to nonlinear classical field equations have any bearing on their quantization?

+ 6 like - 0 dislike
6357 views

The question is motivated by this paper and review. As we know, for free field equations, or more generally linear field equations (e.g. the Dirac equation minimally coupled to an external EM field), one way of making a QFT out of it is to take the solution space as the 1-particle Hilbert space, define creation/annihilation operators, and then go on to define the Fock space with the correct statistics. Looking backwards after the QFT has been constructed, the classical field equation can be recovered by considering the 1-particle "wavefunction" $f(x):=\langle 0|\hat{\psi}(x)|f\rangle$, which obviously satisfies the field equation. For a nonlinear field equation this correspondence ceases to exist, simply because $\langle 0|\hat{\psi}(x)|f\rangle^2\neq \langle 0|\hat{\psi}(x)^2|f\rangle$. In this case the role that classical solutions play becomes obscure. My question is: Do solutions to classical field equations have any bearing on their quantization? What is it, or at least what should it be?
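To spell out the linear case, take for illustration a free real scalar field of mass $m$ and write it as $\hat{\psi}$. The operator field equation $(\Box+m^2)\hat{\psi}(x)=0$ immediately gives
$$(\Box+m^2)\,f(x)=(\Box+m^2)\,\langle 0|\hat{\psi}(x)|f\rangle=\langle 0|(\Box+m^2)\hat{\psi}(x)|f\rangle=0,$$
whereas with an interaction term such as $\lambda\hat{\psi}^3$ the analogous step would require $\langle 0|\hat{\psi}(x)^3|f\rangle=\langle 0|\hat{\psi}(x)|f\rangle^3$, which is exactly what fails.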

PS: This is probably related to a question I asked years ago, which I shamefully did not try to dig deeper into: From quantization under external classical gauge field to a fully quantized theory

asked Aug 9, 2014 in Theoretical Physics by Jia Yiyang (2,640 points) [ revision history ]
edited Aug 9, 2014 by Jia Yiyang

2 Answers

+ 4 like - 1 dislike

Yes, classical solutions of any theory have a lot to say about their quantization, as they are the $\hbar\to 0$ limit of the quantum theory. Essentially, they determine by themselves the tree level approximation of any quantum theory. The reason is that $k$-loop corrections scale with $\hbar^k$, although this is not seen in many treatises where $\hbar$ is set to $1$ everywhere. For tree diagrams from classical fields see, e.g., 

R.C. Helling,
Solving classical field equations,
Unpublished manuscript.
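The $\hbar$ counting behind this statement is standard and worth recalling (stated here for a generic action $S[\phi]$, not for any particular model): with $\hbar$ restored, the path integral weight is $e^{iS[\phi]/\hbar}$, so each propagator carries a factor of $\hbar$ and each vertex a factor of $1/\hbar$. A connected diagram with $I$ internal lines, $V$ vertices and $L$ loops therefore scales as
$$\hbar^{\,I-V}=\hbar^{\,L-1},\qquad L=I-V+1,$$
so relative to the tree-level ($L=0$) contribution the $L$-loop terms are suppressed by $\hbar^{L}$, and the stationary-phase limit $\hbar\to 0$ singles out the classical solutions of $\delta S/\delta\phi=0$.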

The classical action is the $0$th order approximation of the effective action of quantum field theory. The tree level approximation of the S-matrix therefore produces classical approximations to all the stuff people can compute from a QFT. (A non-comprehensive list of what this is can be gleaned from the last paragraph of Chapter 11.5 [in my 1995 edition] of the QFT book by Peskin & Schroeder, which starts with ''This conclusion implies that $\Gamma$'' [the effective action] "contains the complete set of physical predictions of the quantum field theory".)  
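In formulas (schematically, up to convention-dependent factors), the loop expansion of the effective action reads
$$\Gamma[\phi]=S[\phi]+\frac{\hbar}{2}\,\mathrm{Tr}\ln S''[\phi]+O(\hbar^2),$$
and the exact expectation value of the field is the stationary point $\delta\Gamma/\delta\phi=0$; to lowest order in $\hbar$ it therefore coincides with a solution of the classical field equations, with the loop corrections deforming it order by order.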

The approximate spectrum in the classical limit, and its relations to scattering, decay rates, and bound state formulas, is (briefly) discussed on p.416 of

L.G. Yaffe,
Large $N$ limits as classical mechanics,
Rev. Mod. Phys. 54 (1982), 407--435.

This interpretation of the quantum world in terms of the classical is important in quantum field theory when it comes to the explanation of perturbatively inaccessible phenomena such as particle states corresponding to solitons, or tunneling effects related to instantons. Indeed, standard perturbative methods do not capture certain phenomena that appear only at infinite order in perturbation theory (i.e., nonperturbatively) and are related to a semiclassical analysis of soliton and instanton solutions of classical field equations.  See, e.g., the paper 

R. Jackiw,
Quantum meaning of classical field theory,
Rev. Mod. Phys. 49, 681-706 (1977).

However, Jackiw's explanations are mathematically vague.

Geometric quantization is the program of quantizing a classical theory given on a symplectic manifold. This works quite well in quantum mechanics, but its extension to QFT is at present more an art than a science.

A classical Lagrangian dynamics in a finite-dimensional symplectic manifold can be quantized by deformation quantization and results in $\hbar$ expansions of the corresponding quantum theory in the Hamiltonian formalism. In infinite dimensions it works when the manifold is a Hilbert space (i.e., a linear manifold), which has a canonical symplectic structure, and then gives Berezin quantization.
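The prototypical finite-dimensional example (stated here only to fix ideas, for a flat phase space $\mathbb{R}^{2n}$) is the Moyal star product,
$$f\star g=f\,\exp\!\Big(\tfrac{i\hbar}{2}\big(\overleftarrow{\partial}_q\overrightarrow{\partial}_p-\overleftarrow{\partial}_p\overrightarrow{\partial}_q\big)\Big)\,g=fg+\tfrac{i\hbar}{2}\{f,g\}+O(\hbar^2),$$
which deforms the commutative product of classical observables into an $\hbar$ expansion whose first-order term is the Poisson bracket.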

The paper mentioned in your question tries to extend this approach to an infinite-dimensional nonlinear symplectic manifold, namely the manifold of solutions of the classical Yang-Mills equations, which is a symplectic space with the so-called Peierls bracket. (Since the space of solutions is parameterized by the space of initial conditions, this is equivalent to working with the space of consistent initial conditions, which is the setting actually used by Dynin.)

answered Aug 9, 2014 by Arnold Neumaier (15,787 points) [ revision history ]
edited Aug 9, 2014 by Arnold Neumaier
Most voted comments

The classical solution represents the VEV of the ground state only to $0$th order in $\hbar$. The exact VEV is the classical solution of the equations corresponding to the effective action, which includes all quantum corrections.

The answer is nontrivial because QCD contains fermions; @40227: please ask this as a separate question.

@ArnoldNeumaier, @RonMaimon, I think the key disagreement, if I understand you guys correctly, is that Arnold thinks classical solutions represent the $\hbar\to 0$ limit, while Ron thinks on top of that one also has to work in the weak coupling regime. Do I understand you correctly? What argument do you think can settle this?

We agree that $\hbar\to 0$ is both the weak coupling limit and the classical limit.

We do not agree about the interpretation for large $\hbar$.

Ron's argument is that the solution for large $\hbar$ is qualitatively very different. This is probably true but it doesn't prove his claim that a quantization of the classical Kaehler manifold is nonsense (if done correctly, i.e., with an improved renormalization treatment).

Indeed, already for a simple anharmonic oscillator such as the Morse oscillator (known to be correctly quantized in precisely the way Dynin tried to do for YM), the large $\hbar$ properties are qualitatively very different from the small $\hbar$ properties. (In this case, the Kaehler manifold is simply the space of complex numbers, and the coherent states are those for the harmonic oscillator, the eigenvectors of the harmonic annihilation operator.)

I know that the interacting Hilbert space cannot be the free one (Haag's theorem). But this is a matter of correct renormalization and not a defect of symplectic quantization itself. The inequivalent interacting Hilbert space is related to the Fock space by a Bogoliubov transformation, which is well-defined in a regularized theory (such as a lattice theory), before taking the IR and UV limits, where the resulting transformation becomes nonimplementable. Therefore the problem in  Dynin's paper is not his starting point but that he doesn't renormalize correctly.

Most recent comments

I am not making a claim that requires proof, I am making a heuristic that says that a particular method of quantization cannot possibly work for gauge theory (and a separate much easier to substantiate claim that a particular author in addition did it incorrectly).

This is different from claiming that a particular method does work; the arguments are not rigorous, and I don't know how they can be! When you say "such and so method cannot work because X and Y and Z", there is no rigorous statement of this, let alone a proof. The only thing you can prove is a statement; how do you state "this method cannot work"?

But despite being completely devoid of rigor, this claim is still precise. The precise meaning is that the weak coupling/small $\hbar$ expansion of gauge theory is qualitatively completely different from the strong coupling long-distance behavior, and this can be justified (not yet proved) by examining the numerical descriptions we have for the long-distance behavior. In this case, the long-distance behavior is totally uncorrelated random gauge fields, while the short-distance behavior is rigidly constricted fields making small fluctuations close to a perturbative vacuum. The two descriptions interpolate into each other: the gauge field oscillates more and more wildly as you look to longer distances on a lattice, and at the longest distances the Euclidean link variables (the transport along long lines) are as random as if you threw independent dice to pick their values; they are totally uncorrelated at distances greater than the confinement length. This behavior is about as similar to the classical deterministic global solutions of the Yang-Mills equations as it is to a cucumber.

I am sure that the Morse oscillator (and any other 0+1d quantum system) may be quantized by methods of symplectic quantization; there is no obstacle to doing this in any 0+1d system, or in any integrable 1+1d field theory model. The reason is that the kinematics of 0+1d systems is trivial: any one such system has the exact same Hilbert space as any other, namely square-normalizable wavefunctions, or their non-normalizable plane-wave or distributional limit in the case of infinite volume or when you want to talk about x-eigenstates. You can describe them using harmonic oscillator states, or using free-particle states; you have a lot of freedom, because the kinematics is separate from the dynamics.

The path integral counterpart to this property is that the short-distance fluctuations are always exactly the same for any potential, and always controlled by the kinetic term in 0+1d; in Euclidean space it is always Brownian motion locally. In renormalization group language, all potentials are short-distance irrelevant: they disappear if you look at microscopic distances, and they disappear quickly.

By contrast, when you are doing interacting field theories, the kinematic terms renormalize along with the interaction terms--- the fluctuations at all distances are altered. In the case of gauge theory, the running asymptotically vanishes at short distances, so that the theory is close to free at teeny-tiny distances and the running stops; but this happens very slowly, and if you use the classical equations appropriate to the continuum coupling, these classical equations have zero coupling, and since the classical coupling doesn't run, they have zero coupling at all scales.

The classical Yang-Mills equations are scale invariant, they don't have a length at which anything is different. When you pretend to "quantize" them in a symplectic way, you are simply choosing one particular coupling, relevant at one particular scale, and looking at small fluctuations around this coupling, which (as long as the coupling is small) defines the fluctuation dynamics at one particular (small) length scale only. The randomization of this stuff at long distances is not in the description, and the correct log-running kinematics, the short-distance properties of the field fluctuations, are not correctly described.

So the program is simply hopeless for attacking Yang-Mills theory, and not in a simple way--- it is also hopeless for attacking any other quantum field theory whose kinematics are not free; the Hilbert space of the interacting theory is simply different from the Hilbert space of any free theory.

The reason I am so adamant about this stuff is because in physics, the problem of Yang-Mills existence and mass-gap is entirely solved. It is solved by lattice gauge-theory, which gives an algorithm for computing the correlation functions, and also for taking a continuum limit. All the properties are understood qualitatively and (with enough computer time) quantitatively.

Any claim that you "quantize Yang-Mills" has to make sense relative to the computational formulation, and the claims from symplectic quantization don't make any sense when translated to this language, and therefore don't make any sense on their own.

Yes, I agree that if you discretize, everything is ok, and if Dynin had done the quantization correctly, the only place where he would run into problems would be in the renormalization. I agree with you regarding the problems in correctly implementing symplectic quantization of field theories.

But Dynin is doing an incorrect cartoonish quantization which has nothing to do with a correct quantum field theory, so the problems with his paper are much deeper! You can see what he is doing in section 3: he is defining a particular set of free raising and lowering operators which do not have any resemblance to any real Fock space of short-distance Yang-Mills. As an exercise, I urge you to repeat Dynin's construction for gauge coupling $=0$. You will not reproduce free field theory.

If he had done a correct quantization, he wouldn't be able to conclude that the states in the theory are a Fock-like tower on top of an empty vacuum; that is simply not true even in everywhere weakly coupled interacting theories. There is no Fock-like description of the gauge vacuum as an "N=0" state of a free field theory of gluons; it just doesn't look like that. It looks like a sea of infinitely many coherent gluons whose typical momentum is $\Lambda$, the inverse confinement length. These gluons are the manifestation of the gauge condensate, the nonzero expected value of fluctuations in the gauge field in the gauge-field vacuum, which needs to be there because the Euclidean gauge field is totally random at large distances.

His description is defining raising and lowering operators which give you noninteracting excitations with no renormalization (because they are always noninteracting). These excitations have an energy which he defines to be the excitation number times a classical energy, and this is what he used to get the mass bound. He is doing nonsense.

The statement that you can define quantum Yang-Mills from classical Yang-Mills can only be true locally. Even then, there is the issue that the coupling in the classical theory at the tiniest continuum distances is zero. So you need to start with a classical theory at infinitesimal coupling, and then produce configurations of the quantum theory at gradually larger coupling and larger distances, and, sorry, this is just hopeless from the global classical theory, because the global classical equations in this case are simply free, and their quantization gives non-interacting gluons. You might be able to paste these together at long distances to reproduce the random fluctuations, but the problem is then the same as taking the continuum limit on a lattice, or any other formulation, looking at classical Yang-Mills as the starting point buys you nothing, and just makes a lot of obfuscation in the mathematics of defining the kinematic space.

+ 3 like - 0 dislike

The answer to this question, as posed, is "yes", but not in the way implied by other answers. In particular, the long-distance properties of the Yang-Mills vacuum are not described by considering small fluctuations of global classical solutions of the classical Yang-Mills equations. The local properties of Yang-Mills fields (near any one point, in a radius much smaller than the confinement radius) are described well by fluctuations around classical field theory. But the path integral pastes together these local descriptions into something monstrously quantum at long distances.

To understand quantum Yang-Mills, you should first examine the lattice version in detail, because this can be used to define the theory algorithmically and to compute all correlation functions. The perturbative formalism for Yang-Mills is secondary, because it is a weak-coupling expansion, justified only at weak coupling.

In the lattice formulation, you have a $G$-valued matrix field on each link, and an action which is $1/g^2$ times the sum over each plaquette of the trace of the holonomy around the plaquette. You can do things on the lattice which are interesting and different from what you do in perturbative field theory: you can ask what the perturbation theory looks like at strong coupling, when $g$ is effectively close to infinite.
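In the usual conventions (quoted here just for orientation; normalizations differ between papers) this is the Wilson plaquette action
$$S[U]=\beta\sum_{P}\Big(1-\tfrac{1}{N}\,\mathrm{Re}\,\mathrm{tr}\,U_P\Big),\qquad \beta=\frac{2N}{g^2},\qquad U_P=U_\mu(x)\,U_\nu(x+\hat\mu)\,U_\mu^\dagger(x+\hat\nu)\,U_\nu^\dagger(x),$$
so large $g$ means small $\beta$, i.e. a nearly flat (uniform Haar) weight on every link.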

In this case, you start with zero action as your leading approximation, and every matrix on every link is chosen uniformly at random according to the Haar measure. With a small $1/g^2$, you have some correlations between the matrices on neighboring links, but the main result of the strong coupling expansion, described in Wilson's classic lattice papers, is that the correlations die off at long distances, so that there is an exponential decay of all correlation functions in the strong coupling expansion to any order.

The qualitative reason for this behavior is simple to understand--- the field is nearly random on every link, except for a little bit of correlation between neighboring links. When you look at the product of the G's on two links going in the same direction, the product is closer to random than each factor, simply because multiplying together large random elements of a gauge group fills the gauge group ergodically.

This observation is intuitively obvious (and also correct): the result of looking at the gauge field at long distances is no different from shuffling a deck of cards; you are doing a random walk with large steps in the gauge group as you take each step on the lattice. This intuitive explanation is not Wilson's argument, which was different--- he showed that if you place colored sources far away, the correlations in the strong coupling expansion must follow a string between the sources, and the fluctuations of the string are small at large coupling. He also showed how to compute the correlation functions using these correlation strings, which dominate the long-distance correlations at large distances.
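The strong coupling expansion makes this quantitative: to leading order in $1/g^2$ the only surviving contribution to a large Wilson loop comes from tiling its minimal area with plaquettes, one factor of order $1/g^2$ per plaquette, giving the area law
$$\langle W(C)\rangle\sim\Big(\frac{c}{g^2}\Big)^{A/a^2}=e^{-\sigma A},\qquad \sigma=\frac{1}{a^2}\ln\frac{g^2}{c},$$
with $a$ the lattice spacing, $A$ the minimal area, and $c$ a group-dependent constant whose exact value depends on how the action is normalized.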

What this means is that the correlation functions in gauge theory decay exponentially at strong coupling, and the long-distance theory is entirely random, corresponding to a Euclidean action which is zero. Zero action means completely uncorrelated ultra-local fields; in other words, at distances well beyond the confinement length you don't see any gauge field, the dynamics is confined, and the exponential decay rate is (by Euclidean definition) the mass gap.

This is completely the opposite of the behavior at short distances, where the coupling is weak, you have a good perturbation theory, and the fluctuations are described by a nearly classical loop expansion. The place where classical solutions describe the dynamics is in this short-distance limit.

In this limit, you have 4d instantons, and 3d instantons which are similar to monopoles. These objects, when their classical radius is small, are appropriate for describing effects in the gauge theory. Instantons in 4d are configurations of least Euclidean action which (in Lorentzian space) describe tunneling between different classical vacua. The 2+1 instantons, which become weird monopole-like things when you line-extend them, can be superposed statistically in an instanton gas, and then they have the effect of randomizing the 2+1 dimensional gauge field, as shown by Polyakov. This allows you to understand the interpolation between the short-distance physics, which is described by the classical theory and perturbative small fluctuations, and the long-distance physics, with a mass gap, no correlations, and zero action. The classical configurations, when placed randomly at a given density, can randomize the gauge field in 3d.

When instantons were discovered in 1976, David Gross and others hoped that a sea of instantons would randomize the gauge field at long distances, so that the long-distance limit would emerge from an instanton gas calculation and prove that there is a mass gap. This is not what happens; the instantons by themselves don't randomize the gauge field. However, in recent years, Argyres has considered the case of compactified 4d gauge theory, and in the limit that the dimension of the box is shorter than the confinement length, you have to reproduce Polyakov's mechanism for the mass gap in 3d gauge theory. You do reproduce it, and the configurations involved are line-extended versions of Polyakov's solutions.

These line-extended things can close into loops when you open the dimension up fully, and in this case you expect that these monopole loops will produce the randomization somehow, so that these classical solutions can inform the long-distance limit. But the main ingredient here is the random placement of these solutions at long distances; that random scattering of instantons, or monopole loops, is what randomizes the gauge field at enormously long distances in the continuum picture, when you look at the description at weak coupling.

The randomization at long distances is the central mechanism for the mass gap, and this means that the classical descriptions are literally confined to a bag, beyond which you need to consider configurations which randomly fluctuate between one classical region and another. The gauge vacuum can be described by classical configurations only locally; the gluing between local descriptions has to reproduce the totally random fluctuations at long distances.

The mathematical work which starts with classical field theory starts with global classical solutions to Yang-Mills theory, which is simply mentally defective. If the procedure looked at local solutions, not global solutions, maybe there would be some way to paste together the close-to-classical behavior on each patch into some sort of description, but that's not what they do. They just consider global solutions to the equations and then try to produce a quantum theory by deforming these classical global solutions. It is a hopeless procedure, because the solutions to classical Yang-Mills are rigidly deterministic, while the Euclidean gauge theory at longer scales is entirely randomized, so that the qualitative properties are entirely different at distances longer than the confinement radius.

It is possible that you can produce a rigorous construction of quantum field theory starting from the solutions to classical field theory at short distances, and patching them together at long distances like the definition of a manifold. Hairer in mathematics has done exactly this for solutions of stochastic PDE's whose stationary distributions are Euclidean versions of quantum theories, i.e. stochastically quantized field theories. His approach uses a "regularity structure" which is used to define the patching of the different regions together, and it reproduces the correct operator product relations in these theories.

His approach reproduces the renormalization of low-dimensional theories, it works, and it can be used to define 4d Yang-Mills (apart from technical details about how to control the operator dimensions at short distances, which he doesn't know how to do in the case of logarithmic running). Approaches which simply try to take global solutions of Yang-Mills and globally reconstruct the quantum theory from them are totally wrongheaded; they cannot produce the quantum theory, because they don't consider the randomized configurations at long distances.

answered Aug 9, 2014 by Ron Maimon (7,730 points) [ revision history ]

Sorry, but I cannot agree. What happens when the continuum limit is taken? Besides, why do you keep neglecting other work on the lattice?

You cannot agree with what? This is what happens on a lattice regardless of sources; you can simulate it for yourself on your computer and see how the gauge field randomizes at long distances. It hasn't been controversial for 30 years. All the other lattice studies know this already; I haven't said anything that isn't very old common knowledge.

Ron, I cannot agree that your conclusions apply in the continuum limit. You keep on evading my questions. A QFT on random fields cannot exist. Call this Frasca's conjecture if you like, but it is so. Also, I am not able to see how your scenario applies to the Higgs mechanism. Finally, tell me something about the work people have done on Yang-Mills fields on the lattice in the last ten years (e.g. Regensburg 2007). Why neglect this?

A QFT on random fields is trivial--- it is usually an ultralocal field theory, with an action that has no derivative terms. For Gaussian fluctuations, for example a scalar field with action $S=\int \phi^2$, the classical equation is "field = 0"; for gauge theory there is no classical equation at all, because the action is zero and the path integral is a pure average over all configurations of the gauge field. This is not a "scenario", it's 30-year-old, 100% established QCD physics.
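To make "trivial" concrete: in the Gaussian example the action has no derivative terms, so on a lattice the measure factorizes site by site and
$$\langle\phi(x)\,\phi(y)\rangle\propto\delta_{xy},$$
i.e. there are no correlations at all between distinct points--- the extreme, ultralocal version of exponentially decaying correlations.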

You can download an equilibrated QCD configuration and look at the correlation between the gauge field product over a long line here and a long line there: it vanishes. This is the statement of the mass gap in lattice QCD, and it is not controversial.

The mathematical description of this is in Wilson's original lattice QCD paper, where he defines the strong coupling expansion. This expansion is around the point $g=\infty$, which is not only sensible, it is trivial--- it is the ultralocal limit where every link is statistically independent of every other link.

Please take a few days to simulate pure SU(2) Yang-Mills theory, and you will see this for yourself. It is very easy to do with modern hardware; you can write the program yourself, run it, and analyze it over a free weekend. If you are lazy, there are canned routines too.
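For what it's worth, here is a minimal sketch of such a simulation: a Metropolis update of pure SU(2) lattice gauge theory with the Wilson plaquette action on a tiny $4^4$ lattice, written in Python/NumPy. All parameter values (lattice size, $\beta$, step size, number of sweeps) are illustrative choices rather than tuned numbers, and a serious study would use a heatbath/overrelaxation update on a much larger lattice.

    # Rough sketch, not production code: Metropolis for pure SU(2) lattice gauge
    # theory with Wilson action S = beta * sum_P (1 - (1/2) Re tr U_P), beta = 4/g^2.
    import numpy as np

    L, beta, n_sweeps, eps = 4, 2.3, 30, 0.3   # illustrative values only
    rng = np.random.default_rng(0)
    sigma = [np.array([[0, 1], [1, 0]], complex),
             np.array([[0, -1j], [1j, 0]], complex),
             np.array([[1, 0], [0, -1]], complex)]

    def random_su2(eps):
        """SU(2) element near the identity: a0*1 + i a.sigma with a0^2 + |a|^2 = 1."""
        a = rng.normal(size=3)
        a *= eps / np.linalg.norm(a)
        a0 = np.sqrt(1.0 - eps**2) * (1 if rng.uniform() < 0.5 else -1)
        return a0 * np.eye(2, dtype=complex) + 1j * sum(a[i] * sigma[i] for i in range(3))

    # U[x,y,z,t,mu] = link matrix leaving site (x,y,z,t) in direction mu (cold start)
    U = np.tile(np.eye(2, dtype=complex), (L, L, L, L, 4, 1, 1))

    def shift(s, mu, n=1):
        """Neighbouring site, with periodic boundary conditions."""
        s = list(s); s[mu] = (s[mu] + n) % L
        return tuple(s)

    def staple(s, mu):
        """Sum of the six staples attached to link (s, mu)."""
        A = np.zeros((2, 2), complex)
        for nu in range(4):
            if nu == mu:
                continue
            A += (U[shift(s, mu) + (nu,)] @ U[shift(s, nu) + (mu,)].conj().T
                  @ U[s + (nu,)].conj().T)                      # forward staple
            sm = shift(s, nu, -1)
            A += (U[shift(sm, mu) + (nu,)].conj().T @ U[sm + (mu,)].conj().T
                  @ U[sm + (nu,)])                              # backward staple
        return A

    def sweep():
        for s in np.ndindex(L, L, L, L):
            for mu in range(4):
                A, old = staple(s, mu), U[s + (mu,)]
                new = random_su2(eps) @ old
                # change of the Wilson action from updating this single link
                dS = -0.5 * beta * np.real(np.trace((new - old) @ A))
                if dS < 0 or rng.uniform() < np.exp(-dS):
                    U[s + (mu,)] = new

    for i in range(n_sweeps):
        sweep()
        # average plaquette (1/2) Re tr U_P, a crude check that the chain thermalises
        plaq = np.mean([0.5 * np.real(np.trace(U[s + (mu,)] @ staple(s, mu))) / 6
                        for s in np.ndindex(L, L, L, L) for mu in range(4)])
        print(f"sweep {i:3d}   <plaq> ~ {plaq:.3f}")

With something like this, the qualitative point above can be checked directly: at small $\beta$ (strong coupling) the products of links along parallel lines decorrelate after a couple of lattice spacings, while at large $\beta$ the links stay close to the identity, which is the nearly classical weak-coupling regime.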

Please make ''e.g. Regensburg 2007'' unambiguous by giving a link.

Thanks Ron, your physicist-style intuitions and narratives are much appreciated; every time they make me feel I understand a bit more, although I don't think I truly understand them until I can substantiate them with some concrete technical details. But let me just try to reconstruct the elephant here a bit. You said:

The qualitative reason for this behavior is simple to understand--- the field is nearly random on every link, except for a little bit of correlation between neighboring links. When you look at the product of the G's on two links going in the same direction, the product is closer to random than each factor, simply because multiplying together large random elements of a gauge group fills the gauge group ergodically.

But what is the reason that this argument does not apply to abelian theories such as (lattice) QED? (I take from your wording that this is something special about nonabelian theories.)

The argument does apply to abelian theories to a certain extent; the strong coupling expansion of QED also has behavior like this. But lattice QED with a compact U(1) is different from continuum QED, in that it necessarily has monopoles, which are configurations twisting around the lattice with a Dirac string running between the lattice sites. The strong coupling limit has monopoles everywhere, because they appear by random chance (the field is totally randomly twisted)!

The randomization in this case (if I remember correctly, it was one of the issues in accepting Wilson's lattice work in the late 70s and early 80s) is produced by a Kosterlitz-Thouless transition of the monopoles or something like this--- it can be described by the density of topological defects in the QED. It's something that doesn't happen when you remove the lattice regulator, or decompactify the U(1) into a line.

In QED, unlike in the nonabelian gauge theory, the coupling doesn't determine the size of the group circle, and if you make the circle enormous, or infinite, the monopoles go away, and the lattice theory is free, and the fluctuations never grow to cover the circle, or if it is a line, they can't cover the line.

I am not sure about the details of this stuff, as I never simulated it or worked it out (not even, to my shame, Kosterlitz-Thouless, which was inspired by this), but I did know there was controversy over this exact question--- why does the strong coupling expansion confine QED?--- and I know it was fully sorted out. The 1980s literature resolves the issues with the strong coupling expansion entirely; it got accepted, and the stuff I said is correct for SU(2) or SU(3) or any group other than U(1), in which case there is this subtlety (because the theory is ultimately free).
 

Again, this is lattice computations from the thirty years before the last ten. What can you say in the continuum? Why are you forgetting the latest results, going around with things that are useless for this matter?

Regensburg 2007 was a conference, Lattice 2007, where a milestone in lattice computations was reached. Yang-Mills theory was evaluated on a lattice of $127^4$ points for SU(2). So we know for certain that Ron's argument is simply wrong. If this is not enough, we also know that the theory has a well-defined spectrum that appears not to be bounded from above. See the works of Teper et al. and Morningstar et al.

Please be a bit more generous with your citations. I couldn't find Morningstar in the list of abstracts of the conference, and Teper's talk was about ''The running of the bare coupling in SU(N) gauge theories'', not about lattice simulations. Maybe point to more substantial papers or even reviews?

Sorry Arnold. OK, here is a list of relevant references on pure Yang-Mills theory:

Cucchieri & Mendes, Lattice 2007, $(127)^4$ pdf

Teper, Lucini & Wenger, Spectrum of pure Yang-Mills, arXiv, appeared in JHEP.

Morningstar et al., Spectrum of pure Yang-Mills, arXiv, appeared in PRD.

About the running coupling, this can be found in a PoS by the group that computed it here. This should give a modern view of what is going on in Yang-Mills theory without quarks. Triviality has nothing to do with randomness.

Why are you putting these irrelevant papers up? It's better if you put them in the review section anyway. I never said anything about triviality at all; I was talking about the long-wavelength limit of pure Yang-Mills theory.

I think you are referring to where I said that QCD is "trivial" in the infrared. I meant by this that lattice gauge theory at long distances flows to the lattice version of the theory with infinite coupling, i.e. zero action, so that each link is independent of every other link. It's "trivial" in the sense of being easy to describe, not in the sense of Landau triviality.

These papers are not irrelevant at all. They state the status of Yang-Mills theory from lattice studies as of today, and they are far, far away from what you are claiming.

These papers are NOT contradicting anything I said! I have said nothing that is controversial in any way; the theory hasn't changed, I have simulated it (briefly, once), lots of people have simulated it, and there is nothing to update. These things I am telling you are simply facts.

The papers you are giving me are doing this stuff in this gauge, stuff in that gauge, whatever! It's interesting, but none of it changes at all the simple observation that pure gauge theory in the infrared is described by the strong coupling expansion fixed point, i.e. statistically completely uncorrelated link variables.

They can't change it, because it is true; it is the basic observation you see in simulation and in the strong coupling expansion, and it is the actual computational statement of "mass gap" in numerical lattice gauge theory--- the complete independence (or more precisely, exponentially decaying correlation) of gauge links at distances longer than the confinement radius. It is observed in all lattice studies, nothing can ever change regarding this, and it is an obvious observation that you can see for yourself if you take a few hours to run a simulation.

You are asserting nonsense with such confidence, so as to do what is called FUD regarding this answer. This borders on psychopathology as far as I am concerned. You don't need to read anything to understand these things (I didn't read anything); just simulate lattice gauge theory once, briefly, and think a little about the strong coupling expansion.
