
  Are these two aspects of traditional renormalization only technical conveniences in light of our modern understanding?

+ 6 like - 0 dislike
5038 views

1. Sending the cutoff to infinity: From a modern perspective, is it enough to just set the cutoff to a large but definite scale while imposing renormalization conditions? We also know that sending the cutoff to infinity cannot always be done (even at the perturbative level?) if there is a Landau pole. So is sending the cutoff to infinity perhaps just a calculational convenience?

2. Renormalizability in the sense that quantum corrections do not generate new counter-terms: Do we want this kind of renormalizability because such theories are qualitatively correct already at tree level, so that we have good control over them just by inspecting their Lagrangians?

The above has been in my head for a while and seems to make sense. However, I haven't found any material that phrases it explicitly this way, so I am posting it here to check whether I have any misunderstanding.

asked Sep 27, 2014 in Theoretical Physics by Jia Yiyang (2,640 points) [ no revision ]

Sending the cutoff to infinity should not be allowed any more, and the renormalizability requirement should be dropped too. New counter-terms with their own cutoff dependencies might be necessary to fit our theories to experiments, because we do not know the underlying physics. We must keep our options maximally open.

The explanation is that an effective theory is only valid over a certain range of scales; when you go beyond it, towards higher energies for example, new operators (terms) can become relevant as there is a new fixed point.

1 Answer

+ 3 like - 0 dislike

In a renormalizable theory (with or without a Landau pole), the perturbative results at a finite cutoff $\Lambda$ change only by $O(\Lambda^{-1})$ when you move the cutoff to $\infty$ and adjust the finitely many counterterms accordingly. Thus one could always work at some finite $\Lambda$ and stay within the experimental bounds.
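
To make this concrete, here is a minimal sketch (in my notation; the precise power of the suppression depends on the diagram). A typical log-divergent one-loop integral, regularized with a Euclidean momentum cutoff $\Lambda$, is

$$\int^{\Lambda}\frac{d^4k_E}{(2\pi)^4}\,\frac{1}{(k_E^2+m^2)^2}=\frac{1}{32\pi^2}\left[\ln\frac{\Lambda^2}{m^2}-1+O\!\left(\frac{m^2}{\Lambda^2}\right)\right].$$

The counterterm absorbs the $\ln\Lambda^2$ piece; once a renormalization condition is imposed at some finite scale, the renormalized amplitude depends on the cutoff only through the power-suppressed remainder, which is why working at a large but finite $\Lambda$ reproduces the $\Lambda\to\infty$ results within experimental accuracy.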

However, Poincaré invariance (and, in renormalization schemes that are not gauge invariant, also gauge invariance) is then violated by $O(\Lambda^{-1})$, too. This means that $\Lambda$ has to be large enough to respect the very stringent experimental bounds on violations of Poincaré invariance.

Thus the limit $\Lambda\to\infty$ is primarily needed to have exact symmetries. In a sense, this means that it is needed only for theoretical reasons of elegance, since if Poincaré invariance were broken it would call for an explanation of why it is so extremely well satisfied.

Nonrenormalizable theories show precisely the same behavior, except that infinitely many counterterms are needed to get a finite limit as $\Lambda\to\infty$. This is not a disaster for predictability as all but a few of these counterterms are suppressed by high powers of the mass scale of the theory (such as the Planck mass for gravity).
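
Schematically, the effective Lagrangian of such a theory can be organized as (a generic sketch, with $M$ the heavy mass scale and $c_n$ dimensionless coefficients)

$$\mathcal{L}_{\rm eff}=\mathcal{L}_{\rm renormalizable}+\sum_{n>4}\frac{c_n}{M^{\,n-4}}\,\mathcal{O}_n,$$

where $\mathcal{O}_n$ runs over local operators of mass dimension $n$. At energies $E\ll M$ an operator of dimension $n$ contributes to amplitudes only at relative order $(E/M)^{n-4}$, so for gravity, with $M$ of the order of the Planck mass, almost all of the infinitely many counterterms are irrelevant at accessible energies.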

All this is valid on the perturbative level.

Nonperturbatively, things could go wrong if there is a Landau pole, but the existence of a Landau pole for $\Phi^4$ theory or QED is established only in low-order perturbation theory. Hence, in fact, nothing mathematically convincing is known about the obstructions to constructing nonperturbative QFTs (whose observable fields satisfy the Wightman axioms).
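
For definiteness, the Landau-pole statement comes from the lowest-order running. In $\Phi^4$ theory with interaction term $-\frac{\lambda}{4!}\Phi^4$, the one-loop renormalization group equation and its solution are (up to scheme conventions)

$$\mu\frac{d\lambda}{d\mu}=\frac{3\lambda^2}{16\pi^2},\qquad \lambda(\mu)=\frac{\lambda(\mu_0)}{1-\dfrac{3\lambda(\mu_0)}{16\pi^2}\ln\dfrac{\mu}{\mu_0}},$$

which blows up at $\mu=\mu_0\,e^{16\pi^2/(3\lambda(\mu_0))}$. But near that scale the coupling is large, so the one-loop formula cannot be trusted there; this is the sense in which the Landau pole is established only in low-order perturbation theory.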

My personal belief is that QED (or at least a variant of QED that contains nuclei in addition to electrons, with simplified interactions and conserved nucleon number - i.e., excluding radioactivity) must exist since it is the most accurate theory that we have.

answered Oct 4, 2014 by Arnold Neumaier (15,787 points) [ no revision ]

Thanks! To clarify the thought behind my 2nd question, I guess it can be formulated this way: if non-renormalizable interactions that keep generating new counter-terms are perfectly legitimate, what is the value of 't Hooft and Veltman's work on the renormalizability of Yang-Mills? Trying to explain this to myself led to the thought I wrote down in the main post.

From the newer EFT point of view, as I understand it, not being able to send $\Lambda \rightarrow \infty$ is no longer so bad, as it just means that a new EFT (potentially with different (un)broken symmetries, too) kicks in at high energy scales.

There is a video (a little long, but interesting) about the ups and downs in the understanding and development of QFT. It is a talk given by S. Weinberg at CERN. I give a reference to a clip.

@VladimirKalitvianski, Interesting talk, I just watched it through. Thank you VK!

Renormalizability means that only very few constants need to be fitted to experiment, and hence great predictivity at any accuracy. (Of course, whether experiment agrees to that accuracy is a different question.) Nonrenormalizable theories need more and more constants (and, consequently, messier computations) as one increases the desired accuracy; a rough way to quantify this is sketched below. Thus renormalizability is always a significant advantage.

However, at the time 't Hooft and Veltman proved the renormalizability of nonabelian gauge theories, renormalizability was considered an essential requirement for consistency. Proving it was the breakthrough that made QCD a standard rather than just a possibility.

Only later did Weinberg make the nonrenormalizable case respectable again.
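
A rough way to quantify the statement above about constants and accuracy (a back-of-the-envelope estimate, not a sharp bound): in an effective theory with mass scale $M$, operators of dimension $n$ contribute at relative order $(E/M)^{n-4}$ at energy $E$, so predicting amplitudes to relative accuracy $\epsilon$ requires keeping all operators up to dimension roughly

$$d_{\max}\approx 4+\frac{\ln(1/\epsilon)}{\ln(M/E)},$$

and the number of independent operators (hence free constants) grows rapidly with the dimension. In a renormalizable theory, $d_{\max}=4$ suffices at every accuracy.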

Yes, but I think the more difficult part of 't Hooft's work was to prove that QCD is more than power-counting renormalizable. It seems a lot of earlier effort went into showing that specific theories are not only power-counting renormalizable, but also renormalizable (with no new counter-terms generated). I meant to compare the meanings of these two kinds of renormalizability from a modern perspective, and I figured the advantage of the latter kind is only that the theory will be qualitatively more correct at tree level. So it still has a bit more than just historical value.

By renormalizability I meant not power counting (which is only a necessary condition) but the absence of infinities in the higher-order contributions when one removes the cutoff. This is nontrivial, and, in the setting before causal perturbation theory, was hard even for $\Phi^4$ theory. That they proved it for QCD was indeed remarkable. (That QCD has favorable power counting is obvious.)

But at tree level, there is no difference. The difference is that in higher orders one doesn't get the additional renormalization ambiguities that would arise if one had to remove additional infinities.

I never claimed that their work has only historical value. In my answer I had mentioned the advantages of being renormalizable. But the advantage is not better accuracy but well-definedness to all orders with only a few parameters.

On second thought, you are probably right. I had this wrong way of thinking: a Yukawa interaction without $\phi^4$ is nonrenormalizable, and qualitatively wrong at tree level because it does not show $\phi\phi\to\phi\phi$ scattering at all. Then I realized one can say exactly the same thing about two-photon scattering in QED, yet QED is renormalizable. +1 to your comment.

At tree level, QED is not so "correct" since it misses soft radiation.

@jiaYiyang: A Yukawa interaction without a bare $\phi^4$ term still has a fermion box diagram, with four external scalars and fermions going around the loop, which gives a logarithmically growing $2\,\phi\to 2\,\phi$ scattering amplitude. Even if at some subtraction point this gives zero $\lambda$ for the $\phi^4$ interaction, this condition is not preserved under running, and if you change the subtraction scale to a very high one, you always find a large $\lambda$ at that subtraction scale.

@RonMaimon, yes, that's why I said I thought it was not qualitatively correct at tree level. But then the QED example shows it is probably not a useful way of looking at it.
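
To spell out the box-diagram argument in renormalization-group language (a schematic sketch; the numerical coefficients $a$, $b$, $c$ depend on the field content and on conventions, but all are positive):

$$16\pi^2\,\mu\frac{d\lambda}{d\mu}=a\,\lambda^2+b\,g^2\lambda-c\,g^4+\dots,$$

where $g$ is the Yukawa coupling. Even if $\lambda(\mu_0)=0$ at one subtraction point, the $-c\,g^4$ term coming from the fermion box immediately regenerates a nonzero quartic coupling at any other scale, so setting the $\phi^4$ term to zero is not a consistent truncation.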
