  How to understand the success of perturbation theory in QFT, despite Haag's theorem?

+ 8 like - 0 dislike
11036 views

My chief reference for Haag's theorem is the article by John Earman and Doreen Fraser, which I read quite some time ago, so let me disclaim first that I might be prone to make inaccurate claims.

As we know, the perturbative techniques in QFT are derived using the interaction picture and the Dyson series, which, according to Haag's theorem, should be wrong, probably even at a qualitative level. I can kind of heuristically see that the renormalization procedure should fix the problem to some extent, but logically the perturbation series comes first and then we do the renormalization on top of it, so something circular is happening if we use renormalization to explain the success. I have two questions:

(1) Is there a way to completely avoid the interaction picture and still reach the renormalized perturbation series?

(2) Even if there is such a way, what is the most reasonable way to explain the success of the derivation based on the interaction picture?

I vaguely remember seeing two opinions back on PSE. One is from Lubos, saying that since QFT is well known to be an incomplete theory, one cannot take it too literally, and Haag's original assumptions should not be taken too seriously. But this still leaves the question of why perturbation theory works at all, other than the fact that it agrees well with experiment; Lubos' reasoning sounds more like "Theoretically, you can't really prove perturbation theory is faulty, and I can't prove it works, but experiment is on my side." Also, would Lubos' reasoning simply be invalid in the context of Yang-Mills theory, which is widely believed to be a complete theory? The other opinion is from user1504, suggesting one should simply take perturbation theory as a hypothesis. But this is really a big and not at all self-evident hypothesis, and certainly not an aesthetically favorable one. What are your opinions?

Update: I found the post; both Lubos' and user1504's opinions are contained in this PSE post. I hope I'm not misrepresenting their points.

asked Aug 18, 2014 in Theoretical Physics by Jia Yiyang (2,640 points) [ revision history ]
edited Aug 18, 2014 by Jia Yiyang

Discussion between Vladimir Kalitvianski and Ron Maimon regarding the relation to renormalization (and lack thereof): http://physicsoverflow.org/22462/discussion-between-vladimir-kalitvianski-regarding-theorem?show=22463#a22463

2 Answers

+ 7 like - 0 dislike

Invariant perturbation theory in quantum field theory was not first derived using the interaction picture and the Dyson series: Feynman derived it from a path integral, Schwinger from an action principle, while Stueckelberg used his own methods, derived from the (so-called) Schwinger-Dyson equations and canonical commutation relations, which are the content of the path integral rewritten as differential equations. All these methods are manifestly invariant, and do not talk about a unitary relation between the free and interacting vacuum.

Perturbation theory can be derived using the interaction picture, but this derivation is hokey, because the unitary relation between free and interacting fields is not well defined in the infinite-volume limit. You can perform the calculations in a finite volume, and take the limit explicitly at the end to see what is going wrong.

The path integral defines the perturbation series without reference to any free-particle vacuum. The path integral is well defined in Euclidean space with only an ultraviolet regulator, and it does not have a Haag problem, in that it doesn't need to worry about relating the interacting vacuum to a free vacuum by a unitary map. The relation between path-integral correlation functions and scattering states is given by the LSZ construction, and there are no issues with vacuum bubbles; they factor out.
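For orientation, the LSZ relation referred to here reads, schematically, for a single real scalar field of physical mass $m$ and field-strength renormalization $Z$ (signs and normalization factors depend on conventions):

$$\langle p_1\cdots p_n\ \mathrm{out}\,|\,q_1\cdots q_m\ \mathrm{in}\rangle \;=\; \prod_{i=1}^{n}\int d^4x_i\,\frac{e^{ip_i\cdot x_i}}{i\sqrt{Z}}(\Box_{x_i}+m^2)\;\prod_{j=1}^{m}\int d^4y_j\,\frac{e^{-iq_j\cdot y_j}}{i\sqrt{Z}}(\Box_{y_j}+m^2)\;\langle\Omega|\,T\,\phi(x_1)\cdots\phi(x_n)\,\phi(y_1)\cdots\phi(y_m)\,|\Omega\rangle ,$$

so the S-matrix is extracted directly from time-ordered correlation functions of the interacting theory, with no reference to a unitary map between free and interacting vacua; the disconnected vacuum bubbles cancel against the normalization of the correlation functions.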

Haag's theorem is really nothing deep--- it's the statement that in any realistic quantum field theory, the true vacuum always contains an infinite number of particles when compared to the bare vacuum. In a box, the true vacuum becomes orthogonal in the limit to the bare vacuum, because the amplitudes to leave the bare vacuum grow as the square root of the volume, and the new vacuum has a finite expected density of particles, so that the probability of there being exactly zero particles (the inner product with the Fock vacuum) vanishes exponentially fast. The reason there are always matrix elements out of the Fock vacuum is that when you can scatter, then you can pair-produce (by crossing), and this means that the inner product of the interacting vacuum with the bare vacuum is strictly less than one, because it also includes two-particle states. But the production rate of particles out of the bare vacuum is per unit volume (by translation invariance), and if it is nonzero in any volume, it is infinite in infinite volume, so the inner product must vanish. You can't fix this; it doesn't depend on ultraviolet details, it's just a property of large volume. The true vacuum always contains an infinite number of particles defined relative to the bare vacuum. Surprisingly, this part of the argument is reviewed OK in the linked Earman article, if verbosely.
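A heuristic way to see the volume scaling: if the interaction mixes the bare vacuum with particle states at some fixed rate per unit volume, then in a box the interacting vacuum looks locally like a product over roughly independent cells, each with overlap strictly less than one with the corresponding empty cell, so

$$\big|\langle 0_{\rm free}\,|\,\Omega\rangle\big| \;\sim\; \prod_{\rm cells}\big|\langle\,\mathrm{empty}\,|\,\mathrm{dressed}\,\rangle_{\rm cell}\big| \;\sim\; e^{-c\,V},\qquad c>0,$$

while individual transition amplitudes out of the bare vacuum grow like $\sqrt{V}$, the transition probability per unit time being extensive.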

The reason the interaction picture doesn't care about the Haag issue is just that it is not describing the transition probability between the bare and interacting vacuum; it is describing the transition probability for scattering. In the Dyson-style interaction picture, there's a turning-on-and-off function f(t) that describes the interaction strength. This turning on and off is supposed to be adiabatic, so that the bare vacuum slowly relaxes into the interacting vacuum, the scattering happens, and then you turn off the interaction again.
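Concretely, with a switching function the Dyson series is built from

$$U_f(t,t_0)\;=\;T\exp\!\Big(-i\int_{t_0}^{t} dt'\,f(t')\,H_I(t')\Big),\qquad H_I(t)=e^{iH_0 t}\,H_{\rm int}\,e^{-iH_0 t},$$

with, say, $f(t)=e^{-\epsilon|t|}$ and the adiabatic limit $\epsilon\to 0$ taken at the end; only scattering quantities are supposed to survive this limit.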

If you were to ask, in the interaction picture, what the probability is for the vacuum to remain the vacuum between any two time slices with different values of f(t), this amplitude includes a volume integral; the second-order term produces the same infinite transition rate, and shows you that the physical vacua at any two such times are completely orthogonal.

Who cares, really. You never ask this question. You are interested in the scattering of physical particles. Here, you are looking at excitations, and you jiggle the interaction Hamiltonian to fix their interactions (and the vacuum energy) by a renormalization prescription. You could imagine doing all this in a box, where the adiabatic prescription for the Dyson turning on and off makes sense.

To see that Haag's theorem is not an artifact or nonsense, that it's physical, consider the situation in QCD. Here, the noninteracting vacuum consists of free quark and gluon excitations with a small scattering. But at any infinitesimal coupling, you will eventually churn the vacuum completely, so that a free quark state gets an infinite mass! In this case, the infrared behavior clearly takes the free quark states and free gluon states right out of the Hilbert space of the theory.

This doesn't happen in a finite volume where you adiabatically turn on the interaction. The ultimate explanation for why Dyson's derivation works well is that it is justified with an appropriate regulator: it is correct in a finite-volume box. Dyson is considering quantities, such as scattering amplitudes, which have a sensible limit as the box is made big. He isn't considering the H-atom, which only sticks around as long as the coupling is turned on.

answered Aug 18, 2014 by Ron Maimon (7,730 points) [ revision history ]
edited Aug 25, 2014 by Ron Maimon

Thanks Ron. I cannot fully understand some of your statements; would you clarify the following?

(1) You mentioned the bare vacuum, the true vacuum and the interacting vacuum. Obviously the bare and true vacua should be states that are annihilated by the free and full Hamiltonian respectively; I guess "interacting vacuum" means $V(t)|0\rangle$, where $|0\rangle$ is the bare vacuum and $V(t)$ is the unitary operator that relates the Heisenberg and interaction pictures? But then you said

In a box, the true vacuum becomes orthogonal in the limit to the bare vacuum, 

and 

 and this means that the inner product of the interacting vacuum with the bare vacuum is strictly less than one, because it also includes two particle states. But the production rate of particles in the bare-vacuum is per-unit-volume (by translation invariance), and if it is nonzero in any volume, it is infinite, so the inner product must vanish. 

So you are saying that both the interacting and the true vacuum are orthogonal to the bare one in infinite volume? And this pair-production argument applies to both vacua?

(2) I don't understand your crossing argument. It seems you want to obtain a pair-production process after crossing, i.e. vacuum to two-particle state; but then before crossing isn't it just a single-particle state propagating? So what do you mean by "scatter" in the statement

 If you can scatter, then you can pair-produce (by crossing)

(3) You said

In a box, the true vacuum becomes orthogonal in the limit to the bare vacuum, because the inner product vanishes as the inverse square root of the volume. 

How does one see this inverse-square-root dependence? Do you have a concrete model, or is it just from the pair-production-rate argument? I tried to do a concrete calculation in a finite box: I tried to find the "wavefunction" of the bare and true vacua, but by translation invariance they should share the same wavefunction, i.e. a constant function $\psi(x)=\frac{1}{\sqrt{V}}$, and their $L^2$ inner product is always 1.

To make the language clear: there are only two vacua--- the bare vacuum, an eigenstate of the free Hamiltonian, and the interacting (or true) vacuum, which is an eigenstate of the sum of the free Hamiltonian and the interaction Hamiltonian.

In a finite volume, with an ultraviolet regulator, both concepts are well defined, the two states are unitarily related, and you can figure out the relation between them. In an infinite volume, the interacting vacuum runs away from the noninteracting Hilbert space, because of the obvious volume dependence of the amplitudes.

The crossing argument is that if a relativistic theory has any sort of elastic scattering, for example electron-photon elastic scattering, then it must have pair production out of the vacuum. The reasoning, in the context of old-fashioned perturbation theory, is very simple--- every local field is a sum of creation and annihilation operators, so the interaction term that annihilates one electron and one photon and creates an electron also contains a term that just creates an electron, a positron and a photon, and this term contributes to mixing the vacuum with a photon/electron/positron state (in a noncovariant formalism). When applying these Hamiltonian terms to the box vacuum, you end up with a production rate for bare Fock particles, so second-order perturbation theory makes an interacting vacuum which is a mixture of the Fock vacuum and free Fock particles.
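Schematically, in QED-like notation: expanding the fields in creation and annihilation operators,

$$H_{\rm int}\;=\;e\int d^3x\;\bar\psi\,\gamma^\mu\psi\,A_\mu\;\supset\;\big(\text{terms}\;\propto\;b^\dagger d^\dagger a^\dagger\big)\,+\,\cdots,$$

i.e. a piece that creates an electron, a positron and a photon out of nothing, so $\langle e^-e^+\gamma|H_{\rm int}|0\rangle\neq 0$ and perturbation theory necessarily mixes the Fock vacuum with multi-particle states.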

There is always elastic scattering; that's Peierls' shadow theorem, that elastic scattering cross sections are greater than inelastic ones (there is always an elastic shadow of inelastic scattering for unitarity reasons--- see "Surprises in Theoretical Physics" for Peierls' review), and general crossing in a relativistic theory tells you that any scattering amplitude can be turned into a vacuum pair-creation process.

This vacuum nonsense factors out of scattering calculations; that's one of the virtues of the modern formalisms. But it doesn't factor out of calculations where you try to figure out the free Fock-space content of the interacting vacuum.

To get the volume dependence of the mixing coefficient, just use second-order perturbation theory on the Fock states. You have some sort of local interaction term, and you calculate the lowest-order perturbation theory for the mixing of (say) three-particle states with the vacuum, and it involves a momentum integral over all the momenta of the internal particles. Each vertex conserves momentum (but not energy; this is old-fashioned perturbation theory), and when you impose momentum conservation with a regulator and in a finite box (so that everything is well defined), there are more constraints than there are momenta, and the integral in k-space leaves an overall $\delta(0)$ in momentum. This is actually easier to see in x-space, where you integrate over all starting positions of the vertex, and the divergent part is the remaining integral over all space due to translation invariance.
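In finite-volume notation, a sketch of the estimate: first-order perturbation theory gives

$$|\Omega\rangle \;\approx\; |0\rangle \;-\;\sum_{n\neq 0}\frac{\langle n|H_{\rm int}|0\rangle}{E_n}\,|n\rangle,\qquad H_{\rm int}=\int_V d^3x\,\mathcal{H}_{\rm int}(x),$$

and the $x$-integral over the translation-invariant vertex produces the finite-volume delta $V\,\delta_{\vec k_1+\vec k_2+\vec k_3,\,0}$ (the regulated $\delta(0)$); summing $|\langle n|H_{\rm int}|0\rangle/E_n|^2$ over the box momenta then gives a total particle admixture proportional to $V$, so the normalized overlap with $|0\rangle$ is driven to zero as the box grows.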

The diverging kinematic volume dependence is equally clear in the vacuum bubbles of modern Feynman-style relativistic perturbation theory--- if you consider a vacuum bubble as a vacuum-to-vacuum scattering process and calculate its amplitude, there is an additional factor of $\delta(0)$ from overall energy-momentum conservation (the vacuum bubbles conserve energy-momentum at each vertex, so imposing overall energy-momentum conservation in the integral over the phase space of the nonexistent scattering adds an infrared-divergent $\delta(0)$ factor). This $\delta(0)$ in k-space is the volume of space-time in real space.
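Explicitly, in a finite space-time box the identification is

$$(2\pi)^4\,\delta^4(p)\Big|_{p=0} \;=\;\int d^4x\;e^{i\,0\cdot x}\;=\;V\,T .$$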

The same $\delta(0)$ appears in the kinematics when you calculate vacuum decay in flat space by instantons, Coleman style, or any other process which is entirely translation invariant.

This might be a dumb question, but I can't get my head around it: I now understand that the true vacuum has to be orthogonal to the bare vacuum, but how does this imply Haag's theorem, i.e. that the free and interacting representations are not unitarily related?

The unitary operator is assumed to act in the Fock Hilbert space, where states of infinite particle number do not exist. This doesn't mean you can't expand the Fock Hilbert space to include states of infinite particle number, but the result is in general not a separable Hilbert space, and it is impossible to work with mathematically.

This is one of the reasons the path integral is so important: in the path integral, the limit of infinite volume is completely obvious. An infinite-volume Monte-Carlo simulation is the simple limit of the finite-volume one, and there is nothing running away or failing to converge, because the finite-distance correlation functions and sample distributions converge in the path integral. In canonical methods, you have Hilbert spaces that become orthogonal in the infinite-volume limit.

For a simple example, when you have a field theory with more than one vacuum, as with spontaneous symmetry breaking in a scalar theory, there is still only one vacuum once you construct the Hilbert space around any one vacuum: changing the vacuum VEV introduces infinitely many particles. But in the path integral, you can change the VEV and it's clearly the same integral. The path integral pastes together lots of Hilbert spaces in the case of infinite-dimensional dynamics in a natural way, so that they are clearly descriptions of the same theory, and this is one reason it can be used in field theory so much more easily and clearly than the superficially more sophisticated Hilbert-space operator methods.
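A quick way to see the orthogonality in this example, for a free scalar of mass $m$ in a box of volume $V$ (natural units): shifting the field by a constant $v$ shifts only the zero-momentum mode, so the shifted vacuum is a coherent state of zero-momentum quanta,

$$|0_v\rangle \;=\; e^{\alpha\,a_0^\dagger-\alpha^*a_0}\,|0\rangle,\qquad \alpha \;=\; v\sqrt{\tfrac{mV}{2}},\qquad \langle 0\,|\,0_v\rangle \;=\; e^{-|\alpha|^2/2}\;=\;e^{-m v^2 V/4},$$

so different VEVs label unitarily inequivalent sectors in the infinite-volume limit.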

Thanks, I get this now. I still have to think about your paragraph on why adiabatic switching makes things OK as long as we are only interested in scattering.

@JiaYiyang: It seems very evident to me that the bigger $L$ is, the "farther" the bare vacuum state $\sin(\pi z/L)$ is from the real vacuum state $Ai(z-z_0)$ in Fig. 1, because the Airy function stays "localized" at $z\approx 0$ due to the attracting potential $V(z)=g\cdot z$, whereas $\sin(\pi z/L)$ is stretched between $0$ and $L$, so these vacuum states become "orthogonal" in the limit $L\to\infty$.

For small $L$, when the true vacuum state is a combination of $Ai$ and $Bi$, the sine function approximates the true vacuum state $\psi_0$ well; they resemble each other, both being sine-like, because the potential has a small impact (it can be neglected, roughly speaking). For small $L$ the spectral sum $\psi_0=\sum p_n\varphi_n$ converges quickly to $\psi_0$, so you do not need to sum up all the spectral terms to obtain good precision in practical calculations.

For big $L$, on the contrary, one has to sum up all the terms $\varphi_n$ to build up $\psi_0$. But a Dyson-like calculation does permit one to build it up, perturbatively though, together with a perturbative treatment of the scattering potential. So in the end you scatter real states, not bare ones. The real states live in the same (their own) Hilbert space, so the perturbation theory leads to a reasonable result despite Haag's theorem.
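A rough estimate of the overlap in this one-dimensional toy model (assuming the true ground state stays localized on a $g$-dependent length $\ell\sim(\hbar^2/2mg)^{1/3}$ near the wall, where the bare state behaves as $\sqrt{2/L}\,\sin(\pi z/L)\approx \sqrt{2/L}\,\pi z/L$):

$$\langle\,\mathrm{bare}\,|\,\psi_0\,\rangle \;\approx\; \sqrt{\frac{2}{L}}\;\frac{\pi}{L}\int_0^{\infty} z\,\psi_0(z)\,dz \;\sim\; \frac{C\,\ell^{3/2}}{L^{3/2}},$$

i.e. a power-law rather than exponential decay in $L$, which is exactly the distinction drawn in the next comment.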

The analogous thing isn't taking L to infinity in the 1d system, because then the difference is power-law; it isn't going exponentially like the large-volume limit in field theory. The correct analogy is taking an independent tensor product d times to make a d-dimensional version of your one-d example. Then the inner product vanishes as $I^d$, where $I$ is the inner product in one dimension. This is the exponentially vanishing inner product.
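In formulas: if $0<I<1$ is the single-factor overlap, then for $d$ independent copies

$$\langle\Psi_{\rm bare}|\Psi_{\rm true}\rangle_{d} \;=\; I^{\,d}\;=\;e^{-d\,|\ln I|},$$

which is exponential suppression in the number of independent degrees of freedom (in field theory, in the volume).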

I made a stupid statement in the answer that the inner product vanishes as a power of the volume; this is false (I meant that the amplitudes linking the vacuum to other states diverge with the volume). The actual inner product of the true and bare vacuum vanishes exponentially fast.

The reason adiabatic switching works for scattering is that you adjust the Hamiltonian so that the incoming state gets dressed and the vacuum gets filled at the same time, but at all times the one-particle states are exact eigenstates of H(t). Dyson makes a perturbation with the appropriate counterterms inside to keep the physical-mass one-particle states fixed. Then the scattering happens, and you switch off the interaction. The only point of switching on and off is to have a Fock-space description at infinity, and this is present in the full theory also: it is the Fock space of in/out scattering states. The in/out states don't require you to talk about switching on and off, and they are defined in infinite volume. The switching on and off doesn't really do harm either in finite volume.
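The standard formal expression of this vacuum dressing is the Gell-Mann-Low formula: with the switching factor $e^{-\epsilon|t|}$,

$$|\Omega\rangle \;\propto\; \lim_{\epsilon\to 0^+}\;\frac{U_\epsilon(0,-\infty)\,|0\rangle}{\langle 0|\,U_\epsilon(0,-\infty)\,|0\rangle},$$

where dividing by the vacuum-to-vacuum amplitude removes the divergent phase and normalization, while the counterterms keep the one-particle states at their physical mass.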

@JiaYiyang: Adiabatic switching mutilates the QFT Hilbert space. It eliminates all bound state information. It allows (and forces) one to construct perturbatively an S-matrix for the Heisenberg particles.

For QCD, this violates the confinement principle, according to which there shouldn't be asymptotic quark states but only asymptotic meson and baryon states. However, the textbook S-matrix is for asymptotic quarks rather than asymptotic hadrons.

The informal remedy is to postulate two asymptotic levels, one in which the quark S-matrix is still meaningful (at short times), and a second one (at longer times) where the hadronic S-matrix appears through jet fragmentation or hadronization. See, e.g., http://en.wikipedia.org/wiki/Jet_(particle_physics). Only the second level is truly asymptotic, and only the first level can be computed by textbook (functional integral, perturbation theory) methods.

+ 3 like - 0 dislike

What distinguishes QFT from ordinary QM is that the latter has only finitely many degrees of freedom - 3 per particle (and there is a fixed number of particles) - while in QFT a field generates infinitely many degrees of freedom. The canonical (anti)commutation rules that characterize an independent set of creation and annihilation operators have a unique representation in QM, while in QFT there are many of them, and only finite-dof approximations are unitarily equivalent. The latter is the reason why approximate theories on a lattice are harmless: they don't suffer from UV or IR divergences, and all renormalizations are finite, i.e., numerically well-behaved.

The problems only appear when one tries to perform the limit to an infinite number of degrees of freedom. Then the limit must be taken in such a way that the correct representation is selected, and Haag's theorem simply says that the correct representation is not the Fock representation (which would lead to finite renormalizations only). Thus the need for renormalization is equivalent to Haag's theorem.

Someone who, like Lubos Motl or Ron Maimon, never thinks in terms of the limit sees no problems, as these are specifically related to getting the limit (mathematically) correct. Whoever is interested in logically stringent foundations of physics must be able to give an answer to what the computed approximations are approximations of, and then Haag's theorem is one of the important things that must be accommodated. Any logically sound definition of a quantum field theory must produce the correct interacting representation.

This representation is not the Fock representation that one finds in the textbooks but a representation hidden in a logically fuzzy way behind the current renormalization techniques. No matter which approach is used (functional integrals, lattices, or causal perturbation theory, to name just the three most prominent), each fails to define a Hilbert space with the desired interacting representation of the Poincare group. A definition would mean the ability to write down true physical states and a way to specify their inner product exactly. The evaluation of the defining formula may be numerically hard but must provably give a value consistent with the Hilbert-space properties.

To your specific questions:

(1) Causal perturbation theory doesn't work in the interaction picture but produces the correct perturbation series. The interaction picture would be embedded in causal perturbation theory if the smearing function $g(x)$ were taken to be a step function with the value 1 for $x_0$ in the time interval considered (no matter what the spatial part is) and 0 otherwise. But this conflicts with the demand of causal perturbation theory that $g(x)$ be smooth and have compact support. The formal adiabatic limit $g(x)\to 1$ corresponds to the limit where the time interval covers the whole time axis. In this case both approaches give formally the same result for the S-matrix. But only the causal approach avoids the problems associated with Haag's theorem. (This is not surprising, since causal perturbation theory emanated from algebraic QFT, where Haag's work was taken seriously.)
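For reference, in causal (Epstein-Glaser) perturbation theory the S-matrix is a formal power series in the smooth, compactly supported switching function $g$,

$$S(g)\;=\;\mathbf{1}\;+\;\sum_{n\ge 1}\frac{1}{n!}\int d^4x_1\cdots d^4x_n\;T_n(x_1,\dots,x_n)\,g(x_1)\cdots g(x_n),$$

with the time-ordered products $T_n$ constructed recursively from causality and translation invariance; the physical S-matrix is recovered in the adiabatic limit $g\to\mathrm{const}$, when that limit exists.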

(2) Working in the interaction picture is not really successful; one gets the naive perturbative expansion with infinite coefficients. To make it successful one has to introduce cutoffs, counterterms and cutoff-dependent couplings, which are all incompatible with an interpretation of the (limiting) commutation rules in a single Hilbert space, but which can be combined in such a way that interacting matrix elements are computable in perturbation theory in terms of rules obtained by starting from the interaction picture and modifying them by ad hoc renormalization rules. That this works at all is due to the uniqueness of the commutation rules once IR and UV cutoffs are in place (limiting the number of degrees of freedom to finitely many). That it has to be done very carefully to get correct results is due to the need to recover a particular interacting representation in which everything remains finite (and hence possibly well defined even non-perturbatively) in the limit of removed cutoffs.

(3) For Yang-Mills theory, all of this still applies. To avoid Haag's theorem one still needs UV and IR cutoffs, and to recover the correct limiting representation one still needs to select the right renormalizations carefully (finite renormalizations in the case of lattice approximations, but this is still nontrivial). And no one has shown that the limit exists (which is part of the Yang-Mills millennium problem). The only more optimistic aspect of YM theory (as opposed to $\Phi^4$ theory or QED) is asymptotic freedom, which makes it likely that the limit has a mathematically simpler structure than in the general case (where pessimists even conjecture that no limit exists).
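Asymptotic freedom here is the statement that the one-loop running drives the coupling to zero at short distances; for pure SU($N$) gauge theory,

$$\mu\frac{dg}{d\mu}\;=\;-\,\frac{11N}{3}\,\frac{g^3}{16\pi^2}\;+\;O(g^5),$$

which is the perturbative reason one expects the cutoff-removal limit to be governed by the Gaussian fixed point, and hence to be structurally simpler than in theories such as $\Phi^4$ or QED.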

By the way, people who are interested in more than scattering amplitudes and who need dynamics at finite times really need the Hilbert space and the right representation. The traditional functional integral doesn't give finite-time information, and even the biggest lattices tractable today are far too coarse. For QCD this is largely uncharted territory; present techniques are haunted by violations of causality and by difficulties in preserving gauge invariance.

answered Aug 26, 2014 by Arnold Neumaier (15,787 points) [ revision history ]
edited Aug 28, 2014 by Arnold Neumaier

"Dynamics cannot be done in imaginary time! Not even classically. You cannot solve a hyperbolic equation between two given Cauchy surfaces by solving some related elliptic equation followed by analytic continuation."

Why not? If the hyperbolic and the elliptic equations are related by analytic variation of some parameter, it is not unreasonable to expect analytic behavior of the solution in this parameter, and so the solution of one equation can give the solution of the other by analytic continuation.

In the path integral formulation of QFT, it is something formally obvious (a change of variable). 

The real-time correlation functions are the analytic continuations of the Euclidean ones (by the work recalled by Ron Maimon; there are some hypotheses and some technical details, but that certainly does not destroy the main message). As the full dynamics of the real-time QFT is in its correlation functions, the full real-time dynamics can be obtained from the Euclidean formulation (again, this is formally obvious in terms of path integrals). The known rigorous constructions of QFT (Glimm-Jaffe...) are done in Euclidean space and then by application of the general results on analytic continuation.

The Schwinger-Dyson equations have an obvious Euclidean version (it is the main subject of Schwinger's original paper on the subject).

@40277: While I think everyone agrees with you in principle, the analytic continuation required to extract dynamics from Euclidean simulations is completely intractable in practice, because you have to convert decaying exponentials to oscillatory ones, and any tiny error gets amplified. So the work required to convert an approximate Euclidean correlation function to a real-time dynamical solution is prohibitive: you just can't do the analytic continuation with reasonable resources. And it must be so, because otherwise you could simulate a quantum computer with Euclidean methods and get the answer classically, and you can't.

But you can sometimes solve dynamical processes, like the formation of bound states, using Schwinger-Dyson equations, and then the idea is that you get some coefficients for the local behavior from Euclidean space and use truncated equations to find the real-time behavior. This is a Minkowski-space method. But I am not familiar with these Schwinger-Dyson equation methods; the only application I saw was here, for approximating the Gribov behavior in the propagator.
 

''Why not? If the hyperbolic and the elliptic equations are related by analytical variation of some parameter'':

Analytic continuation is possible for globally defined solutions and gives a bijection between the sets of all solutions. But it is impossible to impose the right boundary conditions if a particular solution is wanted. Thus you cannot convert a particular hyperbolic boundary value problem corresponding to some physical event into a similar problem for the elliptic counterpart, since in the elliptic problem the corresponding boundary condition is completely unnatural.

If you don't believe this, show me how to solve a 1+1D wave equation IVP on a triangle by means of the Laplace equation. You need complete control over all solutions of the latter to be able to impose the conditions that select the wanted solution.

Ron Maimon : "and it must be so, otherwise you could simulate a quantum computer with Euclidean methods and get the answer classically and you can't."

Could you be a bit more explicit about this remark? I agree with the practical problem of converting decaying exponentials to oscillatory ones, but I don't understand the relation to quantum computing: do you mean there is an explicit difference in the complexity of the approximate computation of a Euclidean versus a Minkowski path integral which "explains" the difference between classical and quantum computing?

This is now more a research program than a filling-in of straightforward pieces. You also need to construct local gauge-invariant fields and prove them local and reflection-positive before you can apply the Osterwalder-Schrader reconstruction theorem.

If you succeed in all of this in a way that meets the demands of Jaffe and Witten, you really deserve the prize. But I predict that carrying out the intuitive blueprint outlined above will be much harder than you presently think, as you'll meet and have to overcome all the traditional obstacles. With your distaste for rigorous math, this isn't easy. I wish you good luck.


''You say "no one", but you are being a little pessimistic'':

I said: ''no one has the slightest clue how to prove the necessary estimates'' and you are optimistic because ''it is completely clear what this means algorithmically when you simulate''.

But I don't see any substantial arguments for your optimism. There is a world of difference between seeing a simulation converge numerically and having rigorous estimates that prove that this has to be so for every conceivable simulation.

Only to someone who doesn't care about rigor is this difference a trivial matter, so that he can say "I see a solution". For those who care, it is, with Witten and Jaffe, one of the hardest problems in functional analysis.

Regarding your top comment, we're in agreement. By "computationally intractable" I meant only that the energy levels of nuclei would be states of real-time QCD, and the excited states of U237 are doing an exponentially hard quantum computation that can't be simulated classically with reasonable resources. Yes, I agree you can find near-vacuum correlation functions with some appropriate truncation, but generally anything you can solve in real time with this method you can also do in Euclidean space (there might be some exceptions, but then they can be solved from the Schwinger-Dyson equations truncated to some fixed order, and those equations can be derived on the lattice).

For example, simulating heavy-ion collisions on a lattice is not impossible. One can impose a constraint of fixed baryon number on the lattice and find the ground state at fixed baryon number by equilibrating. But you won't be able to figure out the deterministic properties of high-energy collisions of these states by either Euclidean or Minkowski methods, because they are horrendously complicated and only admit a statistical description through quark-gluon plasma formation.

Regarding a path toward a rigorous proof: this is my own approach, but I think it's standard. I'll explain it in more detail. The key property that makes it possible to prove a phase transition is the low dimensionality of the unstable directions at the end of the RG flow. What this means can be seen in a Migdal-Kadanoff transformation of the 2d Ising model. You can do the blocking exactly with nonlocal terms in the lattice action, and here you can see that the lattice action converges very quickly under block-spinning, and you can also determine the form of the lattice action numerically.

The block-spin flow is on an infinite-dimensional space, but once you prove convergence up to the relevant parameters, not just show it numerically, you are done with the reconstruction of the continuum limit: you have solved it through the phase transition on the lattice in the infinite-volume limit. This Kadanoff observation was Wilson's starting point.
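The simplest exact instance of such a block-spin map (just a toy illustration, not the two-dimensional Migdal-Kadanoff scheme discussed above) is decimation of the one-dimensional Ising chain with nearest-neighbour coupling $K$: summing out every other spin gives a closed recursion

$$\tanh K' \;=\; \tanh^2 K \qquad\Longleftrightarrow\qquad K'\;=\;\tfrac{1}{2}\ln\cosh 2K,$$

with only the trivial fixed points $K=0$ and $K=\infty$; in higher dimensions the blocked action picks up additional (nonlocal) couplings, and the claim here is that the flow still converges to a fixed point with only a low-dimensional set of relevant directions.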

To prove that the large system has only a finite-dimensional relevant parameter space, I think you can do it by index counting. It's equivalent to finding roots of a polynomial equation. I am sure that proving the dimensionality of the relevant parameter subspace is not difficult.

You can do the exactly analogous thing in gauge theory too, except here you need a nonlocal lattice action. The same method is in fact easier, because there are no unstable relevant perturbations under block-spinning! The flow is exactly one-dimensional: it is the coupling-constant flow, and that's it.

So what you need to do is describe the lattice block-spinning nonlocal action using some growing asymptotic description (for the Ising model this is easy; for lattice gauge theory it involves more and more plaquettes, and also higher and higher polynomial order on each plaquette, so the space is very annoying). Then you can establish the index properties of the blocking.

This is not in the literature, but it's what you do numerically to show real-space RG works, and it's how I go about trying to prove the Clay problem. It's completely straightforward; there is no difficulty, except that it's numerically annoying: the fixed points are long, infinite-dimensional vectors. Proving the exponential decay of the coefficients in the coarse-grained block action should not be difficult, because they form an isolated fixed point; neither should it be difficult to prove that there is only one relevant parameter.

While this is the Kadanoff-Wilson approach, and it works for making rigorous proofs, it isn't clean. So one also tries to sit down and find a clean version. The dirty part is the block-spinning, which is extremely annoying, and you want a continuous interpolation for the block-spinning. This continuous interpolation must exist, because it is what happens when you change the lattice coupling near zero! But it's difficult to describe how to do it explicitly.

There is no way these things will not provide a rigorous proof of what the Clay people want, because this is why everyone in physics believes it. The problems in making it rigorous have nothing to do with C* algebras and Hilbert spaces, but with analyzing block-spinning and its analogs.

I have a few things I use for this which are not standard, but I don't claim to have worked it out. I also don't think it is very difficult or very interesting work; it is simply tedious work establishing things that everyone already knows, so I hate to think about it. But I have been thinking about it in the last week, due to this discussion.
