  Could the full theory of quantum gravity just be a nonrenormalizable quantum field theory?

+ 10 like - 0 dislike
5887 views

This may be more of a philosophical question than a physics question, but here goes. The standard line is that nonrenormalizable QFTs aren't predictive because you need to specify an infinite number of couplings/counterterms. But strictly speaking, this is only true if you want your theory to be predictive at all energy scales. As long as you only consider processes below certain energy scales, it's fine to truncate your Lagrangian after a finite number of interaction terms (or stop your Feynman expansion at some finite skeleton vertex order) and treat your theory as an effective theory. Indeed, our two most precise theories of physics - general relativity and the Standard Model - are essentially effective theories that only work well in certain regimes (although not quite in the technical sense described above).
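Schematically (just to fix notation - $\Lambda$ is a generic cutoff and the $\mathcal{O}_n$ are generic local operators of mass dimension $n$), what I mean by truncating is keeping only finitely many terms of
$$ \mathcal{L}_\mathrm{eff} = \mathcal{L}_\mathrm{ren} + \sum_{n>4} \frac{c_n}{\Lambda^{n-4}}\,\mathcal{O}_n , $$
which is justified at energies $E \ll \Lambda$ because each successive term is suppressed by a further power of $E/\Lambda$.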

As physicists, we're philosophically predisposed to believe that there is a single fundamental theory, requiring only a finite amount of information to specify, which describes processes at all energy scales. But one could imagine the possibility that quantum gravity is simply described by a QFT with an infinite number of counterterms, and the higher-energy the process you want to consider, the more counterterms you need to include. If this were the case, then no one would ever be able to confidently predict the result of an experiment at arbitrarily high energy. But the theory would still be completely predictive below certain energy scales - if you wanted to study the physics at a given scale, you'd just need to experimentally measure the values of the relevant counterterms once, and then you'd always be able to predict the physics at that scale and below. So we'd be able to predict the physics at any energy we had experimental access to, regardless of how technologically advanced our experiments were at the time.

Such a scenario would admittedly be highly unsatisfying from a philosophical perspective, but is there any physical argument against it?

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user tparker
asked Dec 20, 2016 in Theoretical Physics by tparker (305 points) [ no revision ]
retagged Sep 16, 2017
Closely related: physics.stackexchange.com/q/295346/50583

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user ACuriousMind
@ACuriousMind I think my question is getting at something different from that one. I understand how effective theories work when you have an energy cutoff above which new physics emerges. I'm asking whether it's possible that no (qualitatively) new physics ever emerges above your cutoff, and so your infinite series of nested effective theories is, in fact, the final theory.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user tparker
What you say sounds an awful lot like how we can use Newtonian physics in cases where we don't have access to a fast enough (or small enough) test object to realize relativistic or quantum effects.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Cort Ammon
My apologies: I stole a line from your post above for my much more naive question: physics.stackexchange.com/q/299729. I hope you don't mind; it's much better expressed than I could manage.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user user139561
Question: by a nonrenormalizable QFT, do you mean a model defined by a Lagrangian on a background Minkowski spacetime? Minkowski spacetime is a solution of the classical Einstein equations and is probably drastically modified by a quantum gravity theory. Quantum gravity spacetime is expected to be highly fluctuating at the microscopic scale.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Solenodon Paradoxus
Quantizing spacetime does not work because spacetime is not continuous in the spacelike direction. Just try to assign a Lorentz factor ($t/\tau$) to a vacuum point between particles; you will see that the vacuum is timeless (time is not defined).

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Moonraker
See C.P. Burgess, Quantum Gravity in Everyday Life: General Relativity as an Effective Field Theory, Living Reviews in Relativity 7 (2004), 5. livingreviews.org/lrr-2004-5

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Arnold Neumaier
@Moonraker People could mean different things by "quantum spacetime". For example, spinfoam models are plausible candidates for quantum spacetime. I don't see what your "Lorentz factor" argument has to do with anything. Try applying it to spinfoam models, which are known to be self-consistent.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Solenodon Paradoxus
@Solenodon Paradoxus, to my knowledge, spin foam theories also use spacelike hypersurfaces; see Wikipedia: "In loop quantum gravity (LQG), a spin network represents a "quantum state" of the gravitational field on a 3-dimensional hypersurface."

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Moonraker
@Moonraker not exactly. Spinfoam models (like covariant LQG) use generalized quantum states associated to boundaries of spinfoams. These aren't the usual quantum states in the sense they are given in quantum mechanics, because they don't encode information about a special instant of time, but rather about a 3-dimensional (not spacelike!) boundary of a certain spacetime region. The full quantum dynamics is given through the sum over spinfoams bounded by the spin network. Please familiarize yourself with "boundary formalism" (if you haven't already).

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Solenodon Paradoxus

I would avoid this "certainty" when asking about "the full theory of quantum gravity", because there may be many of them ;-)

4 Answers

+ 6 like - 0 dislike

The problem with GR+QM is that the counterterms include higher-derivative terms, $$ \mathcal L_\mathrm{ctr}\sim \partial^4h $$ where $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$.

Therefore, on account of the Ostrogradsky instability theorem, the system is unstable. This means that the whole program of perturbation theory makes little sense, and there is no reason for us to expect that the perturbative expansion has anything to do with what the theory is really telling us.
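To see why higher time derivatives are so dangerous, here is the standard toy model (not the gravitational case itself, just an illustration of the mechanism): take
$$ L = -\frac{\epsilon}{2}\,\ddot q^{\,2} + \frac{1}{2}\dot q^{\,2} - \frac{1}{2}\omega^2 q^2 , \qquad \epsilon>0 . $$
Ostrogradsky's construction uses $q_1=q$, $q_2=\dot q$, $p_2=\partial L/\partial \ddot q$ and the conjugate momentum $p_1$, and the Hamiltonian comes out as
$$ H = p_1 q_2 - \frac{p_2^2}{2\epsilon} - \frac{1}{2} q_2^2 + \frac{1}{2}\omega^2 q_1^2 , $$
which is linear in $p_1$ and hence unbounded from below, no matter how small $\epsilon$ is. The same structure appears whenever the extra derivatives enter non-degenerately, as they do for the $\partial^4 h$ counterterms.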

In short, QGR's perturbative expansion need not reflect what the non-perturbative theory is about. We just don't know what to do with the theory; we only know that perturbation theory cannot work.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user AccidentalFourierTransform
answered Dec 20, 2016 by AccidentalFourierTransform (480 points) [ no revision ]
no reason for us to expect that the perturbative expansion has nothing to do with... i.e. there's a reason to expect that it has something to do with it? Double negation is confusing, especially in English...

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Ruslan
@Ruslan ah, yes, thanks

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user AccidentalFourierTransform
Interesting! But I've heard many people emphasize that the naive quantization of GR works fine as an effective theory at low energies (i.e. far below the Planck energy $G^{-1/2}$) - how do you reconcile this with your result?

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user tparker
@tparker it very much depends on what you mean by "works". I really recommend you read How Far Are We from the Quantum Theory of Gravity? for an overview of the problems of QG. It is very pedagogical and yet insightful.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user AccidentalFourierTransform
Good point. I often hear claims of quantum gravity being plagued with infinities and doomed, but this is only true in the context of perturbation theory.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Solenodon Paradoxus
+ 5 like - 0 dislike

I think the reason people don't like this idea is that, to keep the same physics at lower energy scales, the coefficients of the nonrenormalizable terms should grow as you define the theory at higher and higher energy scales $\Lambda$. This is the flip side of being an 'irrelevant' interaction. They might become infinite at a finite value of $\Lambda$, in other words a Landau pole. In that case you could not define the theory at higher energy scales while keeping the same physics you already have at lower energy scales.
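To make that explicit at tree level (a crude dimensional-analysis sketch, ignoring loop corrections): an operator of mass dimension $d>4$ has a coupling $g$ of mass dimension $4-d$, so if $g$ is fixed by low-energy measurements, the natural dimensionless coupling at the cutoff,
$$ \hat g(\Lambda) = g\,\Lambda^{\,d-4} , $$
grows without bound as $\Lambda$ is raised. For gravity the coupling of the graviton self-interactions is Newton's constant $G$ (mass dimension $-2$), so the relevant dimensionless combination is $\hat G(\Lambda)=G\Lambda^2$, which already reaches order one at $\Lambda\sim G^{-1/2}$, the Planck scale.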

If they do not become infinite but in fact converge to a finite value, an ultraviolet fixed point, then the theory really is well defined. This is actually a serious proposal for quantum gravity called asymptotic safety.
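Schematically, asymptotic safety means that the dimensionless couplings hit a zero of their beta functions instead of blowing up. For the dimensionless Newton coupling $\tilde G = G\Lambda^2$, the simplest truncations of the renormalization group flow give something roughly of the form
$$ \Lambda\frac{d\tilde G}{d\Lambda} = 2\tilde G - c\,\tilde G^{\,2} + \dots , $$
with $c$ a positive, scheme-dependent number, so besides the Gaussian fixed point $\tilde G=0$ there is a nontrivial UV fixed point $\tilde G_* = 2/c$ at which the theory can be run up to arbitrarily high $\Lambda$.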

Basically, you can keep pushing the effective theory to higher energies, and either you start to see new physics, or it works all the way up and you have asymptotic safety. The option where it really does break down at some finite $\Lambda$ would lead people to study the regulator: is it a lattice theory, or what? That would really be new physics too.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user octonion
answered Dec 20, 2016 by octonion (145 points) [ no revision ]
I agree that the existence of a Landau pole at finite energy would invalidate my proposal. And asymptotic safety would make things rather straightforward. But isn't there also a third option, where the coupling constants flow to unboundedly large values at high energy, but don't ever diverge at any finite energy scale? In that case, the theory would also be well-defined, although of course you couldn't use perturbation theory to study it at high energy.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user tparker
+ 5 like - 0 dislike

You suggest that we can use a nonrenormalizable theory (NR) at energies greater than the cutoff, by measuring sufficiently many coefficients at any energy.

However, a general expansion of an amplitude in an NR theory that breaks down at a scale $M$ reads $$ A(E) = A^0(E) \sum_n c_n \left(\frac{E}{M}\right)^n, $$ where I assumed that the amplitude is characterized by a single energy scale $E$. Thus at any energy $E\ge M$, we cannot calculate amplitudes from a finite subset of the unknown coefficients $c_n$.
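To put rough numbers on this for gravity (order-of-magnitude only, ignoring logs and kinematic factors): there $M$ is the Planck mass, the leading graviton amplitude grows like $A^0(E)\sim G E^2 = (E/M_\mathrm{Pl})^2$, and each higher counterterm adds a further factor of $(E/M_\mathrm{Pl})^2$,
$$ A(E) \sim \frac{E^2}{M_\mathrm{Pl}^2}\left[ 1 + c_1\frac{E^2}{M_\mathrm{Pl}^2} + c_2\frac{E^4}{M_\mathrm{Pl}^4} + \dots \right] . $$
Well below $M_\mathrm{Pl}$ only the first few $c_n$ matter; at $E\sim M_\mathrm{Pl}$ every term is of the same size and the expansion tells you nothing without all of them.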

On the other hand, we could have an infinite stack of (NR) effective theories (EFTs). The new fields introduced in each EFT could successively raise the cutoff. In practice, however, this is nothing other than discovering new physics at higher energies and describing it with QFT. That's what we've been doing at colliders for decades.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user innisfree
answered Dec 20, 2016 by innisfree (295 points) [ no revision ]
I am not suggesting "that we can use a nonrenormalizable theory (NR) at energies greater than the cutoff, by measuring sufficiently many coefficients at any energy." I emphasize in my question that we would need our experiments to go to higher and higher energies to measure the values of higher and higher counterterms.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user tparker
The "infinite stack of EFT"s idea is what I am suggesting. But the point is that I am proposing that the higher-energy EFT's would not contain any "new fields" or qualitatively "new physics" - just more coupling constants for higher-order graviton interactions. Under my proposal, no new particles (involving gravity) would ever be discovered.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user tparker
I don't understand what you mean. As you approach $E\lesssim M$, you need to know/measure more and more coefficients to make predictions. But once you surpass $E \ge M$, you need infinitely many, as all terms in the perturbation expansion are large (and in fact perturbation theory breaks down).

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user innisfree
In my infinite stack of EFTs, massive fields are integrated out at each threshold, and each EFT includes all terms (R and NR) consistent with symmetries. Thus two EFTs with the same field content and symmetries are identical, and cannot differ by e.g. 'more coupling constants for higher-order graviton interactions'.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user innisfree
That's the problem - above the cutoff of the EFT, all the high-order NR interactions become important at once. It isn't the case that you raise $E\gtrsim M$ and one NR interaction is important, raise it a bit more and two NR interactions are important, etc.

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user innisfree
Ah, I understand. Your last comment clears everything up for me. Thanks!

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user tparker
+ 1 like - 1 dislike

If one interprets "physically" the way positivists do, there can be no physical argument against it: if the theory predicts the observable facts, everything is fine, and the number of epicycles does not matter. A preference for a theory with fewer parameters already goes beyond positivism; it has to rely on the Popperian (philosophical) notion of empirical content (predictive power).

In real physics, this is what the experimenters have to do - observe more "epicycles", more of the lowest-order terms. And it is the job of the theoreticians to try to find a new theory which allows one to get rid of all the epicycles already found.

What one has to expect is that trying to find new epicycles will fail beyond a critical length. Think of some lattice regularization as a typical theory where the continuous approximation, similar to atomic theory, fails below the critical distance. If what you can "see" is, say, 1000 times the critical length, everything will still look smooth and you will succeed with a few lowest-order terms of an effective theory. At 100 times you will already need many more terms, at 10 times it will start to fail completely, and at the critical length itself even the lowest 10000 terms will not save the game.
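To put rough numbers on this (taking coefficients of order one, so that a truncated expansion errs by roughly the first neglected term): with $a$ the critical length and $\ell$ the scale you can resolve, the expansion parameter is $x=a/\ell$ and an $N$-term truncation errs by about $x^{N+1}$. At $\ell=1000\,a$ this is $\sim 10^{-3(N+1)}$, so a couple of terms already give excellent accuracy; at $\ell=10\,a$ it is only $\sim 10^{-(N+1)}$, so you need many terms; and at $\ell\sim a$ it is of order one for every $N$, so no finite truncation helps.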

So, roughly, if you need one more term to explain the observations, expect that you are already near the critical length where everything will fail, and you need a new theory.

In some sense, gravity is simply that first nontrivial (non-renormalizable) lowest-order term, and the Planck length is the corresponding prediction of the scale at which it can no longer be ignored in QFT computations, and at which, therefore, we have to expect the theory to start failing completely too.
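For reference, the scale in question: restoring $\hbar$ and $c$,
$$ \ell_\mathrm{Pl}=\sqrt{\frac{\hbar G}{c^3}}\approx 1.6\times 10^{-35}\ \mathrm{m}, \qquad E_\mathrm{Pl}=\sqrt{\frac{\hbar c^5}{G}}\approx 1.2\times 10^{19}\ \mathrm{GeV}, $$
and the dimensionless strength of this lowest-order gravitational "epicycle" in a process of energy $E$ is $(E/E_\mathrm{Pl})^2$, which becomes of order one exactly there.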

This post imported from StackExchange Physics at 2017-09-16 17:56 (UTC), posted by SE-user Schmelzer
answered Apr 22, 2017 by Schmelzer (0 points) [ no revision ]

user contributions licensed under cc by-sa 3.0 with attribution required
