PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow.


  Why regularization?

+ 6 like - 0 dislike
3838 views

In quantum field theory when dealing with divergent integrals, particularly in calculating corrections to scattering amplitudes, what is often done to render the integrals convergent is to add a regulator, which is some parameter $\Lambda$ which becomes the upper limit of the integrals instead of $\infty$.

The physical explanation of this is that the quantum field theory is just an approximation of the "true" theory (if one exists), and it is only valid at low energies. We are ignorant of the processes that occur at high energies, and so it makes no sense to extend the theory to that regime. So we cut off the integral to include only the low-energy processes we are familiar with.

My problem with this explanation is that, even if we don't know what happens at high energies, is it okay to just leave those phenomena out of our calculations? In effect we are rewriting our integral as $\int_{-\infty}^{\infty} = \int_{-\infty}^{\Lambda} + \int_{\Lambda}^{\infty}$, and then ignoring the second term. Is this really okay? And even if we were able to ascertain the "true" theory and calculated the contribution to the correction from high-energy processes, I feel like it would come out to be large, since from our effective field theory viewpoint it already appears to be infinite. Do we have any right to say it is negligible?

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Arun Nanduri
asked Dec 2, 2011 in Theoretical Physics by Arun Nanduri (30 points) [ no revision ]
retagged Jun 6, 2014

7 Answers

+ 8 like - 0 dislike

The reason is that the continuum involves a notion of limit where you get an infinity of points in every little region. This is different from the potential infinity of, say, sentences, where a sentence can get longer and longer, so that you get an infinite number of sentences, but only for very long sentences far away from the realizable ones. In the continuum case, every box contains an infinite number of distinct points, each of which is as accessible as any other (at least in a naive view of the continuum).

When you are doing mathematics, you are describing a continuous object with a string of symbols. This means that the only way to define quantities on a continuum is to define them on some approximate notion of a continuum, and then take the limit that the approximation becomes dense. When you first defined real numbers, in grade school, you defined numbers with a finite number of decimals, or perhaps rational numbers, and then abstracted the notion of real numbers as infinite sequences of decimals, or as limiting values of Cauchy sequences of rationals. In either case, you have a discrete structure with only a potential infinity, and the real numbers emerge when you take the continuum limit, either by allowing the decimals to grow arbitrarily long, or by allowing the denominators of the rational numbers to grow arbitrarily large.

The reason we don't sweat this is because we are intuitively familiar with geometry, so we have an immediate understanding that this process makes sense. But it is far from intuitive that the real numbers make sense. In quantum field theory, one has to deal squarely with the fact that the real numbers are actually a sophisticated idea, since you are defining quantum fluctuations in fields, which have separate degrees of freedom at every one of continuously many points. There is a cutoff at every energy, since exciting the short-distance modes requires high energy, so physically, the behavior of fields at low energies should not depend on the high-energy field modes. But if this is true, then you should be able to define the field theory as a continuum theory, as a limit as continuous space emerges from a discrete structure.

When formulating quantum field theory, you start with a regularization because that's the way every continuum theory is defined, it's a limit. This is no different in principle than defining a differential equation. If somebody asked you "what does $\dot{x} = \alpha \sqrt{x}$ mean?", you would have to say that the next value of x, minus the current value of x, is the time increment times $\sqrt{x}$, in the limit that the stepsize is small. We write it as a differential equation without a stepsize because the limit makes sense. The definition of the derivative as a quotient is designed to ensure that this is so: you divide by the step-size to the first power, because this is how the increment of a differentiable function scales with step-size.
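As a numerical aside (this sketch is mine, not from the answer), the limiting process described above can be seen directly by discretizing $\dot{x} = \alpha\sqrt{x}$ with a finite step and watching the answer stabilize as the step shrinks:

```python
# Hypothetical illustration: interpret dx/dt = alpha * sqrt(x) as the limit
# of a finite-difference scheme, and watch the result stabilize as the step
# size shrinks.

def euler_solve(alpha, x0, t_end, n_steps):
    """Forward-Euler integration of dx/dt = alpha * sqrt(x) from 0 to t_end."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += dt * alpha * x ** 0.5
    return x

# Exact solution for alpha = 1, x(0) = 1:  x(t) = (1 + t/2)**2, so x(1) = 2.25
for n in (10, 100, 1000, 10000):
    print(n, euler_solve(1.0, 1.0, 1.0, n))
```

The finite-difference answers approach the exact $x(1) = 2.25$ roughly linearly in the step size, which is exactly the sense in which the differential equation "is" the limit of its discretizations.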

If you are doing stochastic calculus (random walks in continuous time), the increment of distance you move scales as the square root of the stepsize. This means that the ordinary notion of derivative diverges when you take the small step-size limit. Stochastic calculus therefore defines the derivative of a random walk as a distribution, so that only its integral makes sense, not its value at one point, and you get some funny commutation relations, like

$$ x(t+\epsilon)\dot{x}(t) - x(t)\dot{x}(t) = 1$$

where the equality is understood as a distribution identity, it is saying that the integral of the randomly fluctuating quantity on the left over any interval is the same as the integral of 1 over the same interval, and the fluctuations are zero over a finite size interval in the limit of small steps. This is the stochastic version of the Heisenberg commutation relation in quantum mechanics.
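A small simulation (my own illustration, not part of the original answer) shows the square-root scaling concretely: the summed squared increments of a random walk over a fixed time interval approach the interval length as the step size shrinks, even though the naive derivative blows up.

```python
import random

# My illustration (not from the original answer): for a walk whose steps
# scale like sqrt(dt), the summed squared increments over [0, 1] approach 1
# as dt -> 0, even though the naive derivative dx/dt ~ 1/sqrt(dt) diverges.

def quadratic_variation(dt, seed=0):
    """Sum of squared increments of a Gaussian random walk over [0, 1]."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, dt ** 0.5) ** 2 for _ in range(int(round(1.0 / dt))))

for dt in (0.01, 0.001, 0.0001):
    print(dt, quadratic_variation(dt))   # tends to 1, the interval length
```

The fluctuations around 1 shrink like $\sqrt{dt}$, which is the sense in which the distributional identity above holds only after integrating over a finite interval.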

For quantum fields, you analogously have to make an approximation to the continuum, say a lattice with step-size $\epsilon$. If the results make sense as a field theory, if they have a continuum limit, they don't depend on the step-size (the inverse cutoff) you choose, as long as it is small. The only difference is that the scaling laws for the parameters are different from ordinary calculus (or from stochastic calculus).

To see explicitly how a lattice limit produces a continuous field theory, you can consider the case of the Ising model. If you make the lattice small while simultaneously bringing the temperature closer to the critical temperature, so that the correlation length stays fixed as the lattice shrinks, you end up with long-ranged fluctuations in the spin described by a continuous field. The field is the number of up spins minus the number of down spins in a ball containing many lattice sites, where the ball size shrinks relative to the correlation length, but grows relative to the lattice spacing. You rescale the field by the lattice spacing to a certain power and the ball-size to a certain power (you choose the powers to get a finite limit in the small $\epsilon$ limit, independent of the ball size), and then you have defined a field theory. In this case, it is a scalar field theory with quartic self interactions, and in three dimensions or in two, it converges to a sensible unique limit which depends only on the correlation length (the coupling flows to a fixed point in the long-distance theory). In five dimensions and above, the theory converges to a free field theory. In four dimensions, it converges to a free field theory, but very very slowly, the coupling only goes to zero as the log of the lattice spacing, and if you see a scalar quartically interacting field in nature with a nonzero interaction, you can conclude that the cutoff scale is above the lattice size which would make the coupling smaller than what you observe.

I gave a qualitative description because you asked for this, but the full rigorous description of the limiting process is not yet fully worked out, although it is known heuristically for most cases of interest. This is an important open problem in mathematical physics.

answered Jul 16, 2012 by Ron Maimon (7,730 points) [ revision history ]
edited Aug 5, 2014 by Ron Maimon
"When formulating quantum field theory, you start with a regularization because that's the way every continuum theory is defined, it's a limit. This is no different in principle than defining a differential equation... ." I like how you explain that behind the operations of derivatives in theories (of physics) lies the idea of a scheme to compute them. But what if one takes derivatives as things to be evaluated symbolically? Then no approximation scheme can be preferred over another, and there can be no commutation relation. Can I not interpret a differential equation as something to be solved and evaluated with proper algebraic methods only?

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user NikolajK
The approximations for symbolic differentiation are no different--- you can represent a function as a sum of wavelets and differentiate "symbolically", it's no different. The analytic representation is a text string, and doesn't represent the function any less discretely than a bunch of doubles on a grid, except that it may be a more efficient countable structure. You don't prefer one approximation scheme over another; you define the derivative as the thing they all have in common--- that's the limit. Same in field theory: the field theory is the universal limiting thing that different approximations have in common.

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Ron Maimon
Okay, thanks. I actually came to the idea that both realizations of the derivative (with finite differences or algebraically) are, in the end, similarly symbolic operations.

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user NikolajK
+ 6 like - 0 dislike

I'm not sure about it, but my understanding of this is that the $\int_\Lambda^\infty$ term is essentially constant between different processes, because whatever physics happens at high energies should not be affected by the low-energy processes we are able to control. That way, we can meaningfully calculate differences between two integrals, and the high-energy portions cancel out. This will be the case regardless of whether the high-energy contributions are bounded (as they should be in a true theory) or unbounded (as QFT calculations seem to indicate).
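To see how the cancellation works in a toy setting (my example, not from the answer above), take two logarithmically divergent one-dimensional integrals that differ only in a "mass" parameter: each grows without bound with the cutoff $\Lambda$, but their difference converges.

```python
import math

# Toy example (mine, not from the answer): each cutoff integral diverges
# like log(Lambda), but the difference between two of them converges.

def cutoff_integral(m, lam, n=200000):
    """Midpoint rule for  integral_0^Lambda dp/(p + m)  =  log((Lambda + m)/m)."""
    h = lam / n
    return sum(h / ((i + 0.5) * h + m) for i in range(n))

m1, m2 = 1.0, 2.0
for lam in (10.0, 100.0, 1000.0):
    diff = cutoff_integral(m1, lam) - cutoff_integral(m2, lam)
    print(lam, diff)   # approaches log(m2/m1) = log 2 ~ 0.693
```

The high-momentum regions of both integrals look the same, so they cancel in the difference; that is the one-dimensional caricature of cutoff-insensitive low-energy predictions.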

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user David Z
answered Dec 2, 2011 by David Z (660 points) [ no revision ]
But are we calculating differences when we find scattering amplitudes? We are just summing Feynman diagrams.

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Arun Nanduri
+ 6 like - 0 dislike

I'll just address that point:

... $\int_{-\infty}^{\infty} = \int_{-\infty}^{\Lambda} + \int_{\Lambda}^{\infty}$, and then ignoring the second term. Is this really okay?

The integrals we are dealing with look like this one: $$\int\frac{f(p_\mu)}{g(p_\mu)}d^4p$$ Or, more explicitly: $$ \int^{+\infty}_{-\infty}dp_x\int^{+\infty}_{-\infty}dp_y\int^{+\infty}_{-\infty}dp_z\int^{+\infty}_{-\infty}dE\,\,\frac{f(E,p_x,p_y,p_z)}{g(E,p_x,p_y,p_z)} $$ And then you just use 4D spherical coordinates: $$\int\frac{f(p_\mu)}{g(p_\mu)}d^4p = \int d\Omega_4 \int_{0}^{\infty} dp_{eu}\, p_{eu}^3\,\frac{f(p_\mu)}{g(p_\mu)}$$ where $p_{eu}^2 = E^2 + p_x^2+p_y^2+p_z^2$ is the squared Euclidean norm of the vector, $p_{eu}^3$ is the radial Jacobian, and $\Omega_4$ is a 4D solid angle.

Short digression: why the Euclidean norm? We are supposed to be dealing with Minkowski space, aren't we? The crucial point here is the use of the Wick rotation: the time components (in this case the energy $E$) of your vectors are considered to be complex numbers, the integration is performed along the imaginary axis, and the result is analytically continued back to real values of the energy.

So, it is the last integral that we are usually cutting off: $$\int_0^\infty dp_{eu} \to \int_0^\Lambda dp_{eu}$$ and that procedure is not exactly the same as cutting each integral in Cartesian coordinates at $\pm\Lambda$. But the results must be the same -- the difference is just whether you cut off the large contributions with a sphere or with a box.
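For a convergent integrand, the spherical reduction can be checked numerically. A sketch of my own (with the radial Jacobian $p_{eu}^3$ written explicitly and $\int d\Omega_4 = 2\pi^2$): for $f/g = 1/(p_{eu}^2+m^2)^3$ the exact answer is $\pi^2/(2m^2)$, and the cutoff version approaches it as $\Lambda$ grows.

```python
import math

# Numerical check (my construction) for a convergent integrand:
#   integral d^4p / (p_eu^2 + m^2)^3
#     = 2*pi^2 * integral_0^inf p^3 dp / (p^2 + m^2)^3  =  pi^2 / (2 m^2)

def radial_integral(m, lam, n=100000):
    """Midpoint rule for the radial integral with Euclidean cutoff Lambda."""
    h = lam / n
    total = 0.0
    for i in range(n):
        p = (i + 0.5) * h
        total += h * p ** 3 / (p ** 2 + m ** 2) ** 3
    return total

m = 1.0
for lam in (5.0, 20.0, 200.0):
    print(lam, 2 * math.pi ** 2 * radial_integral(m, lam), math.pi ** 2 / (2 * m ** 2))
```

For divergent integrands the same radial cutoff is exactly the $\int_0^\Lambda dp_{eu}$ prescription in the text.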

A slightly more important point is that a more detailed study of the analytic properties of the integrals we deal with in QFT shows that the "nature" of the UV divergences is always Euclidean -- they "come" from large values of $E^2 + p_x^2+p_y^2+p_z^2$, not just "large energy". So the use of these coordinates is rather natural.

Last but not least -- these coordinates lead you to the idea of dimensional regularization, which turns out to be very convenient in actual calculations.

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Kostya
answered Dec 2, 2011 by Kostya (320 points) [ no revision ]
+ 6 like - 0 dislike

It took the insights of Wilson and Kadanoff to answer this question: universality. It doesn't matter all that much what the precise details in the ultraviolet are. Under the renormalization group, only a small number of parameters are either relevant or marginal; all the rest are irrelevant. As long as you take care to match up the relevant and marginal parameters, the precise regulator you choose doesn't matter. Even if it differs from the actual underlying physics, in the infrared it still gives the same answers.
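A linearized caricature of this statement (mine, with scaling dimensions made up for illustration): if each blocking step by a factor $b$ multiplies a coupling of scaling dimension $y$ by $b^y$, then irrelevant couplings ($y<0$) die off no matter what their microscopic values were, which is why the ultraviolet details wash out.

```python
# Linearized caricature (mine; the scaling dimensions are made up for
# illustration): each blocking step by factor b sends g -> b**y * g for a
# coupling of scaling dimension y. Relevant couplings (y > 0) grow, marginal
# ones (y = 0) sit still, and irrelevant ones (y < 0) are driven to zero.

def rg_flow(couplings, dims, b=2.0, steps=20):
    """Apply `steps` linearized blocking transformations to the couplings."""
    g = dict(couplings)
    for _ in range(steps):
        g = {name: (b ** dims[name]) * val for name, val in g.items()}
    return g

dims = {"relevant": 2.0, "marginal": 0.0, "irrelevant": -2.0}
start = {"relevant": 1e-6, "marginal": 0.5, "irrelevant": 0.5}
print(rg_flow(start, dims))
```

In a real theory the flow is nonlinear and the dimensions come from the fixed point, but the washing-out of irrelevant couplings works just as in this sketch.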

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Fred
answered Dec 2, 2011 by Fred (60 points) [ no revision ]
If one discards (subtracts) a $\Lambda$-dependent term in one's calculation result, one obtains "universality". Whatever the numerical value of $\Lambda$ is, the result is $\Lambda$-independent. This is the underlying mathematics, known well before Wilson and Kadanoff.

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Vladimir Kalitvianski
I guess I'll have to wait until I am familiar with the renormalization group before I can satisfactorily understand my question. But why is it called "universality"? To me that term connotes similar mathematics appearing in seemingly different branches of physics. Unless you are referring to the application of these integrals in both statistical physics and QFT?

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Arun Nanduri
Universality? Is this something like scale invariance? And if so, would it still hold if "new physics" is expected to kick in at high energies? Sorry if I'm mixing things up, because I'm not sure enough about the renormalization group etc ...

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Dilaton
+ 1 like - 0 dislike

My answer may be rather naive because I am not too familiar with this; however, for the same reason it may be more transparent.

As far as I understand, the whole idea of regularization is to get rid of $\Lambda$ as an explicit parameter of the theory and absorb the unknowns into observable values. This is done by a rather tricky limit $\Lambda\to\infty$ which leaves the observable values (divergent without the procedure) finite.

For many different reasons, leaving $\Lambda$ as an additional parameter of the theory is not satisfactory.

For details it is better to read references given in this community wiki.

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Misha
answered Dec 2, 2011 by Misha (40 points) [ no revision ]
I think it's a question of regularizing by introducing $\Lambda$ to make the integrals convergent, then renormalizing - adding counterterms to obtain results which are independent of $\Lambda$ to the desired order. The theory is renormalizable if you only need a finite number of parameters to do this.
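The two-step structure described here can be caricatured in a few lines (my toy example, not from the comment): regulate a logarithmic divergence with a cutoff $\Lambda$, then subtract a $\Lambda$-dependent counterterm fixed by a renormalization condition at a scale $\mu$; the result no longer depends on $\Lambda$.

```python
import math

# Toy caricature (mine, not from the comment): the regulated "integral"
# I(Lambda) = integral_1^Lambda dp/p = log(Lambda) diverges, but subtracting
# a Lambda-dependent counterterm fixed at a renormalization scale mu leaves
# a Lambda-independent answer.

def regulated(lam):
    return math.log(lam)                   # diverges as Lambda -> infinity

def renormalized(lam, mu=10.0):
    counterterm = math.log(lam / mu)       # fixed by a condition at scale mu
    return regulated(lam) - counterterm

for lam in (1e2, 1e4, 1e8):
    print(lam, renormalized(lam))          # always log(mu), Lambda-independent
```

The physical input is the renormalization condition at $\mu$; the cutoff is pure scaffolding that drops out at the end.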

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user twistor59
+ 1 like - 0 dislike

The ideas of renormalization and regularization are explained very well in this paper by Delamotte. Arxiv version. Perhaps this will help you.

Edit: same article is mentioned in the community wiki

This post imported from StackExchange Physics at 2014-06-06 02:46 (UCT), posted by SE-user Vijay Murthy
answered Dec 2, 2011 by Vijay Murthy (90 points) [ no revision ]
+ 0 like - 0 dislike

In order to understand to what extent one can ignore the UV in calculations at much smaller energies, let us consider a hydrogen atom and a scattering process of a slow charged projectile from it. The projectile energy is supposed to be insufficient to excite the atom, so the "UV" part of the atomic degrees of freedom does not look involved in the scattering result, does it? But how about atomic (adiabatic or not) polarization? We cannot just neglect it before proving it is negligible or reducible to another - effective - interaction constant.

In QFT many things are wrongly understood and defined, and corrections to the initial (but wrong) approximation diverge. Regularization here is not about ignoring UV degrees of freedom, but a first step toward renormalization (subtraction) of the wrongness we introduced while constructing the QFT.

answered Dec 21, 2019 by Vladimir Kalitvianski (102 points) [ revision history ]
edited Dec 22, 2019 by Vladimir Kalitvianski

user contributions licensed under cc by-sa 3.0 with attribution required
