
  Why should the Standard Model be renormalizable?

+ 9 like - 0 dislike
5527 views

Effective theories like Little Higgs models or the Nambu-Jona-Lasinio model are non-renormalizable, and there is no problem with that, since an effective theory does not need to be renormalizable. These theories are valid up to a scale $\Lambda$ (the ultraviolet cutoff); beyond this scale an effective theory needs a UV completion, a more "general" theory.

The Standard Model, as an effective theory of particle physics, needs a more general theory that addresses the phenomena beyond the Standard Model (a UV completion?). So, why should the Standard Model be renormalizable?

This post imported from StackExchange Physics at 2014-08-19 17:29 (UCT), posted by SE-user Leandro Seixas
asked Jan 29, 2011 in Theoretical Physics by Leandro Seixas (85 points) [ no revision ]
retagged Aug 19, 2014
Some answers are saying that the SM is not renormalizable. This can be meant in different senses, so it would be useful if the answers went deeper into this particular point.

This post imported from StackExchange Physics at 2014-08-19 17:29 (UCT), posted by SE-user arivero
@arivero: I think it needs to be stated clearly that non-renormalisable terms will flow to ever (exponentially) smaller terms upon infra-red rescaling?

This post imported from StackExchange Physics at 2014-08-19 17:29 (UCT), posted by SE-user genneth

5 Answers

+ 9 like - 0 dislike

The Standard Model just happens to be perturbatively renormalizable, which is an advantage, as I will discuss later; non-perturbatively, one would find that the Higgs self-interaction and/or the hypercharge $U(1)$ interaction get stronger at higher energies and run into inconsistencies such as Landau poles at extremely high, trans-Planckian energy scales.

But the models where the Higgs scalar is replaced by a more convoluted mechanism are not renormalizable. That's not a lethal problem because the theory may still be used as a valid effective theory. And effective theories can be non-renormalizable - they have no reason not to be.

The reason why physicists prefer renormalizable field theories is that they are more predictive. A renormalizable field theory's predictions depend only on a finite number of low-energy parameters that may be determined by comparison with experiment. With fixed values of the low-energy parameters such as the couplings and masses, a renormalizable theory may be uniquely extrapolated to arbitrarily high scales, where it remains predictive. It also means that if we postulate that new physics only occurs at some extremely high cutoff scale $\Lambda$, all effects of the new physics are suppressed by positive powers of $1/\Lambda$.
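
As a rough power-counting sketch of that suppression (standard dimensional analysis, with the unknown new-physics coefficients written as order-one numbers $c_d$): an operator $\mathcal{O}_d$ of mass dimension $d>4$ enters the Lagrangian as $c_d\,\mathcal{O}_d/\Lambda^{d-4}$, so its relative effect on a process at energy $E\ll\Lambda$ is of order
$$\frac{\delta A_d}{A_{\mathrm{ren}}}\;\sim\;c_d\left(\frac{E}{\Lambda}\right)^{d-4},$$
which is suppressed by positive powers of $1/\Lambda$ for every $d>4$ and vanishes as $\Lambda\to\infty$.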

This assumption makes life controllable, and it has been true in the case of QED. However, nothing guarantees that we "immediately" get the right description that is valid to an arbitrarily high energy scale. By studying particle physics at ever higher energy scales, we may equally well unmask just another layer of the onion that breaks down at slightly higher energies and needs to be fixed by yet another layer.

My personal guess is that it is more likely than not that any important extra fields or couplings we identify at low energies are inherently described by a renormalizable field theory. The reason is the following: if we find a valid effective description at energy scale $E_1$ that happens to be non-renormalizable, it breaks down at a slightly higher energy scale $E_2$ where new physics completes it and fixes the problem. However, this scenario implies that $E_1$ and $E_2$ have to be pretty close to one another. On the other hand, they must be "far" apart, because we only managed to uncover the physics at the lower scale $E_1$.

The little Higgs models serve as a good example of how this argument can be avoided. They adjust things - by using several gauge groups etc. - to separate the scales $E_1$ and $E_2$, so that they only describe what's happening at $E_1$ but may ignore what's happening at $E_2$, which fixes the problems at $E_1$. I regard this trick as a form of tuning that is exactly as undesirable as the "little hierarchy problem" that was an important motivation for these models in the first place.

History has a mixed record: QED remained essentially renormalizable. The electroweak theory may be completed, step by step, to a renormalizable theory (e.g. by tree unitarity arguments). QCD is renormalizable, too. However, it's important to mention that the weak interactions used to be described by the Fermi-Gell-Mann-Feynman four-fermion interaction, which was non-renormalizable. The separation of the scales $E_1$ and $E_2$ in my argument above occurs because particles such as neutrons - which beta-decay - are still much lighter than the W-bosons that were later found to underlie the four-fermion interaction. This separation guaranteed that the W-bosons were found decades after the four-fermion interaction. And this separation ultimately depends on the up- and down-quark Yukawa couplings' being much smaller than one. If the world were "really natural", such hierarchies of the couplings would be almost impossible. My argument would hold, and almost all valid theories that people would uncover by raising the energy scale would be renormalizable.

General relativity is a big example on the non-renormalizable side, and it will remain so because the right theory describing quantum gravity is not and cannot be a local quantum field theory in the old sense. As one approaches the Planck scale, the importance of non-renormalizable effective field theories clearly increases, because there is no reason why they should be valid up to much higher energy scales - at the Planck scale, they are superseded by the non-field quantum theory of gravity.

All the best, LM

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user Luboš Motl
answered Jan 30, 2011 by Luboš Motl (10,278 points) [ no revision ]
+ 8 like - 0 dislike

The short answer is that it doesn't have to be, and it probably isn't. The modern way to understand any quantum field theory is as an effective field theory. The theory includes all renormalizable (relevant and marginal) operators, which give the largest contribution to any low-energy process. When you are interested in either high-precision or high-energy processes, you have to systematically include non-renormalizable terms as well, which come from some more complete theory.
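
To make the power counting behind "relevant and marginal" explicit (standard effective-field-theory conventions, stated here only for concreteness): a coupling $g_i$ multiplying an operator of mass dimension $d_i$ has mass dimension $4-d_i$, so its dimensionless strength at energy $E$ behaves as
$$\hat{g}_i(E)\;\sim\;g_i\,E^{\,d_i-4}:\qquad
\begin{cases}
d_i<4: & \text{grows toward low energies (relevant)},\\
d_i=4: & \text{roughly constant (marginal)},\\
d_i>4: & \text{suppressed as }(E/\Lambda)^{d_i-4}\ \text{(irrelevant, i.e. non-renormalizable)}.
\end{cases}$$
This is why the renormalizable terms dominate low-energy processes and the non-renormalizable ones only become visible at high precision or high energy.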

Back in the days when the standard model was constructed, people did not have a good appreciation of effective field theories, and thus renormalizability was imposed as a deep and not completely understood principle. This is one of the difficulties in studying QFT: it has a long history including ideas that were superseded (plenty of other examples: relativistic wave equations, second quantization, and a whole bunch of misconceptions about the meaning of renormalization). But now we know that any QFT, including the standard model, is expected to have these higher-dimensional operators. By measuring their effects you get some clue as to the high energy scale at which the standard model breaks down. So far, it looks like a really high scale.

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user user566
answered Jan 29, 2011 by user566 (545 points) [ no revision ]
I agree that it doesn't have to be, but am a bit confused by the "isn't" part of this answer. Certainly the SM in the conventional form includes only renormalizable interactions of dimension four or less, no? And while there are neutrino masses which could be described by nonrenormalizable interactions, they can also be described by renormalizable interactions in a simple extension of the SM by introducing right-handed neutrinos, a SM singlet Higgs field and using the seesaw mechanism. So isn't it an overstatement to say the SM is definitely not renormalizable, or am I missing something?

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user pho
Jeff, good point, I edited it to the weaker "probably isn't".

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user user566
-1 --- The standard model is certainly renormalizable, 't Hooft proved it. I have spoken to some people who lived through the era, and at least a few of them understood very well that if they found a renormalizable theory, then it would be valid up to essentially arbitrarily high scales, perhaps up to the Planck length. This was a major motivation for finding a renormalizable theory.

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user Ron Maimon
@Ron Maimon: I thought that the bigger problem is super-renormalisability? So things like the Higgs mass are no good because they would require fine-tuning; thus a UV/Planck-scale completion, whilst theoretically possible, would require some fantastic cancellations which are unnatural.

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user genneth
@genneth: super-renormalizability means fine-tuning of the Higgs mass. That's the only fine-tuning in the non-supersymmetric standard model, and there are ways to avoid it. It is probably best to wait a few months until we have data on the Higgs sector before speculating. It might be as simple as a new strong gauge field whose confinement scale is a TeV. It can also be supersymmetry, and that would be theoretically more interesting. We'll know from the LHC. But one fine-tuning is not a serious problem, especially given the known fixes.

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user Ron Maimon

Nice answer.

+ 6 like - 0 dislike

Because we happen to be working at the right energy scale. In general, if there are renormalizable interactions around, they dominate over nonrenormalizable ones, by simple scaling arguments and dimensional analysis. Before the electroweak theory was developed, the Fermi theory of the weak interactions was nonrenormalizable because the leading interaction people saw was beta decay, which is described by a dimension-6 operator, and the structure of the W and Z bosons hadn't been uncovered yet. Now we're at high enough energy that we've seen the underlying renormalizable interactions responsible for this.
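
For orientation, a rough version of that scaling argument with standard numbers: the four-fermion operator comes with the Fermi constant $G_F\simeq 1.17\times10^{-5}\ \mathrm{GeV}^{-2}$, i.e. a coefficient of order $1/\Lambda^2$ with
$$\Lambda\;\sim\;G_F^{-1/2}\;\approx\;300\ \mathrm{GeV},$$
so at beta-decay energies $E\sim\mathrm{MeV}$ the nonrenormalizable term is suppressed by $(E/\Lambda)^2\sim10^{-11}$, while near $\Lambda$ the four-fermion description has to be replaced by the renormalizable exchange of W and Z bosons.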

If there are other heavy particles we don't know about, there certainly will be higher dimension operators around that we should add to the Standard Model. The fact that we haven't seen their effects yet is something that worries those of us who are hoping for discoveries soon....

Edit: I should add that, since we know neutrinos have mass, the Standard Model isn't really a renormalizable theory anymore. Not that this is relevant for most of particle physics most of the time.

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user Matt Reece
answered Jan 29, 2011 by Matt Reece (1,630 points) [ no revision ]
The standard model is a well defined mathematical object --- it's a theory without neutrino masses. It may be wrong experimentally, but it's renormalizable.

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user Ron Maimon
Eh. I'm pretty sure whatever effective field theory replaces the Standard Model and incorporates neutrino masses is going to be called the Standard Model. That's just human language.

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user user1504
+ 5 like - 0 dislike

The standard model is renormalizable because of the enormous gap in energy between the scale of accelerator physics and the scale of Planck/GUT physics. That this gap is real is attested to by the smallness of all non-renormalizable corrections to the standard model.

  • Neutrino masses: these are dimension 5, so they are very sensitive to new physics. The measured masses are consistent with a GUT-scale suppression of non-renormalizable terms (see the estimate just after this list), and immediately rule out large extra dimensions.
  • Strong CP: The strong interactions are CP invariant only because there are no nonrenormalizable interactions of quarks and gluons, or direct quark-quark-lepton-lepton interactions. Even the renormalizable theta-angle leads to strong CP violation.
  • Proton decay: If the standard model fails at a low scale, the proton will decay. The decay of the proton is impossible to suppress completely because it is required by standard model anomaly cancellation, so you have to allow the SU(2) instanton to link quarks and leptons for sure. If you try to make a theory with large extra dimensions, you can do some tricks to suppress proton decay, but they require the SU(2) and U(1) couplings to start to run like crazy below a TeV.
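
As a back-of-the-envelope check of the neutrino-mass bullet above (illustrative numbers only; the suppression scales below are assumed GUT-like values, not measurements), the dimension-5 operator gives $m_\nu\sim v^2/\Lambda$:

```python
# Back-of-the-envelope: neutrino mass from the dimension-5 operator, m_nu ~ v^2 / Lambda.
# Illustrative only -- the suppression scales below are assumed GUT-like values.
v = 246.0                            # electroweak vev in GeV
for Lambda in (1e15, 1e16):          # assumed suppression scales in GeV
    m_nu_eV = (v**2 / Lambda) * 1e9  # dimension-5 estimate, converted from GeV to eV
    print(f"Lambda = {Lambda:.0e} GeV  ->  m_nu ~ {m_nu_eV:.3f} eV")
```

Both values land in the sub-eV range indicated by oscillation data, which is the sense in which the measured masses are consistent with a GUT-scale suppression.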

These observed facts mean that there is a real desert between a TeV and the GUT scale. There are also these much weaker constraints, which are enough to rule out TeV-scale non-renormalizability:

  • Muon magnetic moment: the scale of the observed anomalies is the one expected from extra charged particles, not from a fundamental muon Pauli term. If the non-renormalizability scale were a TeV, the Pauli term would be much larger than the experimental error without some fine-tuning (see the estimate just after this list).
  • Flavor-changing neutral currents: these also require some fine-tuning to make them work with a low non-renormalizability scale, but I don't know how these work very well, so I will defer to the literature.
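
For orientation on the magnetic-moment bullet (a naive dimensional estimate, not a precision statement): a muon Pauli (dipole) operator suppressed by a scale $\Lambda$ with an order-one coefficient contributes
$$\Delta a_\mu\;\sim\;\frac{m_\mu^2}{\Lambda^2}\;=\;\left(\frac{0.106\ \mathrm{GeV}}{1000\ \mathrm{GeV}}\right)^2\;\approx\;1\times10^{-8}\quad\text{for}\ \Lambda=1\ \mathrm{TeV},$$
which is far above the experimental uncertainty of a few $\times10^{-10}$, so a TeV-scale Pauli term would indeed require fine-tuning to hide.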

Around 2000, incompetent physicists started to argue that this is a small number of problems, and that really, we don't know anything at all. In fact, the reason theorists broke their heads to find a renormalizable model is that they knew such a model would be essentially accurate up to arbitrarily large energies, and would be a real clue to the Planck scale.

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user Ron Maimon
answered Sep 9, 2011 by Ron Maimon (7,730 points) [ no revision ]
+ 0 like - 0 dislike

The standard model need not be renormalizable. Its completion should be.

This post imported from StackExchange Physics at 2014-08-19 17:30 (UCT), posted by SE-user WIMP
answered Jan 30, 2011 by WIMP (150 points) [ no revision ]
