  Advice about renormalizations in physics

+ 3 like - 2 dislike
5958 views

Here I would like to share my disappointment with the current state of affairs in theoretical physics and to ask your advice. Some readers know that I am not happy about renormalization in physics, and I would like to spell out a little of what bothers me.

A quite good description is already given in the Feynman Lectures (Vol. II, Chapter 28), devoted to the electromagnetic mass of the electron. As you may know, H. Lorentz tried to introduce a new force into the successful phenomenological equation of motion of the electron $$m\ddot{\mathbf{r}}={\mathbf{F}}_{ext}({\mathbf{r}}, \dot{{\mathbf{r}}},t)\qquad(1)$$

to take into account a small "radiation reaction" effect (for the sake of energy-momentum conservation). His reasoning was simple: as the electron is affected by electromagnetic forces, one should insert into Eq. (1), in addition to ${\mathbf{F}}_{ext}$, the force from the electron's own field too. Calculations showed, however, that it was mainly a self-induction force preventing the charge from changing its steady state, a motion with constant velocity $\mathbf{v}=\mathrm{const}$: $${\mathbf{F}}_{self}=-m_{em} \ddot{\mathbf{r}}, \;m_{em}\to\infty.\qquad(2)$$

This expression had the dimension of force and was strongly geometry- or model-dependent; it was very big for a small electron. Unfortunately, researchers of that time treated the quantum of charge classically, as a classical distribution of charge density, as if the quantum of charge were built up from a collection of infinitesimal charges. H. Poincaré added cohesive forces to provide the stability of the distribution (Poincaré stresses, see the Feynman lecture), but the nature of these forces was completely unknown. I am sure some people tried to work out this direction, but another "direction" won. The self-induction contribution was simply discarded as wrong, and the jerk "remainder" was tried instead: $${\mathbf{F}}_{self}=\frac{2e^2}{3c^3}\ddot{\mathbf{v}}.\qquad(3)$$
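For scale, here is a minimal numerical sketch of the shell-of-charge estimate discussed in that Feynman lecture, $m_{em}=\tfrac{2}{3}\,e^2/(a c^2)$ for a shell of radius $a$ in Gaussian units (the numerical prefactor is model-dependent); it shows how quickly the electromagnetic mass overwhelms the measured mass as the assumed radius shrinks:

```python
# Minimal sketch (Gaussian units): electromagnetic mass of a spherical
# shell of charge, m_em = (2/3) e^2 / (a c^2), as in the Feynman lecture.
# The value blows up as the assumed radius a -> 0.
e   = 4.803e-10     # electron charge, esu
c   = 2.998e10      # speed of light, cm/s
m_e = 9.109e-28     # measured electron mass, g

def m_em(a_cm):
    """Electromagnetic mass (g) of a charged shell of radius a_cm (cm)."""
    return (2.0 / 3.0) * e**2 / (a_cm * c**2)

for a in (1e-8, 1e-11, 1e-13, 1e-16):   # from atomic size downwards
    print(f"a = {a:.0e} cm   m_em / m_e = {m_em(a) / m_e:.3e}")
```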

Note that expression (3) also has the dimension of force, but a different functional dependence on the electron variables.

In order to "justify" discarding the self-induction force, they invented the notion of a bare (negative) mass $m_0$ that had supposedly existed in the successful Eq. (1) "before" taking the self-action into account: $$m_0 \ddot{ \mathbf{r} }= \mathbf{F}_{ext} - m_{em}\ddot{ \mathbf{r} } + \frac{2e^2}{3c^3}\ddot{\mathbf{v}},\qquad (4)$$

$$m_0(\Lambda)+m_{em}(\Lambda)=m.\qquad(5)$$

Since in (1) the mass was the physical mass (a phenomenological coefficient taken, for example, from mass-spectroscopic data), I find this invention unconvincing, to say the least. Indeed, a negative mass makes the particle move to the left when the force pulls it to the right. We have never observed such a silly phenomenon (like the behaviour of a stupid goat) and we have never written the corresponding equations. We cannot pretend that (1) describes such a wrong particle in an external field and that adding its self-action makes the equation right. Still, this is the mainstream position today. I do not buy it. I see only discarding, which is neither a mathematical calculation nor a physical effect, but cheating. The crude self-action idea failed. But what about the remainder?

Fortunately, the jerk remainder is wrong too. It cannot be used, as it gives runaway solutions: not a small radiation reaction, but a rapid self-acceleration. This whole self-action business can figuratively be represented as connecting an amplifier output to its input. It creates feedback. At first the feedback is strongly negative: no reaction to an external signal is possible anymore. After "repairing" this self-action, we get strong positive feedback: self-amplification whatever the external signal is. No good either, although (3) is formally used in a "proof" of energy-momentum conservation (on average). (Here I disagree with R. Feynman, who thinks the jerk term is OK.)
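To see the runaway explicitly, here is a minimal sketch of the force-free case of (3), where $m\dot{\mathbf v}=\frac{2e^2}{3c^3}\ddot{\mathbf v}$ reduces to $\dot a = a/\tau$ with $\tau=\frac{2e^2}{3mc^3}$ (the seed acceleration and units are arbitrary); any nonzero acceleration grows exponentially instead of dying out:

```python
# Minimal sketch of the runaway: with no external force, Eq. (3) gives
# m dv/dt = (2 e^2 / 3 c^3) d^2v/dt^2, i.e. da/dt = a / tau with
# tau = 2 e^2 / (3 m c^3) ~ 6e-24 s for the electron.  Any tiny seed
# acceleration grows exponentially instead of dying out.
tau = 1.0        # time measured in units of tau
a   = 1e-12      # arbitrary tiny initial acceleration
dt  = 1e-3       # forward-Euler step

for step in range(40001):
    if step % 10000 == 0:
        print(f"t = {step * dt:5.1f} tau   a = {a:.3e}")
    a += (a / tau) * dt      # da/dt = a/tau
```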

One more attempt to get out of this impasse consisted in treating the jerk-term contribution "perturbatively": H. Lorentz and others after him started to replace it with the time derivative of the external force: $$\frac{2e^2}{3c^3}\ddot{\mathbf{v}}\to\frac{2e^2}{3m_e c^3}\dot{\mathbf{F}}_{ext}(\mathbf{r},\dot{\mathbf{r}},t).\qquad(6)$$

It worked for an oscillator and was a great triumph of Lorentz's construction (remember the natural width of a spectral line). But here again I notice cheating: in a true iterative procedure we would obtain a given function of time $\dot{\mathbf{F}}_{ext}\left(\mathbf{r}^{(0)}(t),\mathbf{v}^{(0)}(t),t\right)$ on the right-hand side rather than a term (6) expressed via the unknown dynamical variables. For example, in the oscillator equation (used by Lorentz) $$\ddot{y}+ \omega^2 y= \frac{2e^2}{3mc^3}\dddot{y}\qquad (7)$$
the first perturbative term $\dot{F}_{ext}^{(0)}(t)\propto \dot{y}^{(0)}(t)$ is a known external periodic (resonant!) driving force, whereas the replacement term $\dot{F}_{ext}\propto \dot{y}$ is an unknown damping force (a kind of friction): $$\ddot{\tilde{y}}+ \gamma\,\dot{\tilde{y}}+ \omega^2 \tilde{y}= 0,\quad \gamma=\frac{2e^2\omega^2}{3mc^3}.\qquad (8)$$

A perturbative solution to (7), $y\approx y^{(0)} + y^{(1)}$ (the red line in Fig. 1), is different from the damped-oscillator solution $\tilde y$ (the blue line in Fig. 2).

[Fig. 1]

The solution to the damped-oscillator equation (8) is non-linear in $\gamma$, and non-linear in a quite definite manner. It describes not a self-action, but an interaction with something else. This difference between the equations is qualitative (conceptual), and it becomes quantitatively important for a strong radiation-reaction force (as in quark-gluon interactions) and/or when $t\to\infty$ (in this example I used $y^{(0)}=\sin\omega t$ with $\omega=10$ and $\gamma=0.3$). I conclude therefore that the damped-oscillator equation (8) is not a perturbative version of (7), but another guesswork result, tried and finally kept in practice because of its physically more reasonable (although still approximate) behavior. (I guess H. Lorentz intuitively wanted a "friction" force rather than a resonant driving force, so he "opted" for Eq. (8).) Similarly, expression (6) is not a perturbative version of (3), but another (imperceptible) equation replacement (see F. Rohrlich's last paper, page 10, reference [3]). A minimal numerical sketch of this comparison is given below.
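The sketch assumes the zeroth-order solution $y^{(0)}=\sin\omega t$ and the initial data $y(0)=0$, $\dot y(0)=\omega$, with the parameter values quoted above (the curves in Figs. 1 and 2 may of course differ in detail):

```python
# Minimal sketch: first-order perturbative solution of (7) vs. the exact
# solution of the damped-oscillator equation (8), with y(0)=0, y'(0)=omega.
import numpy as np

omega, gamma = 10.0, 0.3                 # values quoted in the post
t = np.linspace(0.0, 20.0, 2001)

# In (7) the jerk term acts, at first order, as a resonant driving force on
# y0 = sin(omega t); the secular term gives y ~ (1 - gamma t / 2) sin(omega t),
# i.e. an amplitude that eventually grows instead of decaying.
y_pert = (1.0 - 0.5 * gamma * t) * np.sin(omega * t)

# Exact underdamped solution of (8) with the same initial data:
# the amplitude decays as exp(-gamma t / 2).
omega_d = np.sqrt(omega**2 - 0.25 * gamma**2)
y_damp = (omega / omega_d) * np.exp(-0.5 * gamma * t) * np.sin(omega_d * t)

for i in range(0, len(t), 500):          # print a few sample points
    print(f"t = {t[i]:5.2f}   y_pert = {y_pert[i]:+.3f}   y_damp = {y_damp[i]:+.3f}")
```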

The term $\frac{2e^2}{3m_e c^3}\dot{\mathbf{F}}_{ext}(\mathbf{r},\dot{\mathbf{r}},t)$ is a third functional dependence tried for describing the radiation-reaction force. I am sure there may be more. I think the radiation-reaction force should in reality be expressed via some field variables, but that is another story.

Hence, researchers have tried to derive equations describing the radiation-reaction force correctly, but they have failed. For practical (engineering) purposes they constructed (found by trying different functions) approximate equations like (8) that do not provide exact energy-momentum conservation and do not follow from "principles" (no Lagrangian, no Noether theorem, etc.). We may not present this as a consistent implementation of principles, because it is not.

Guessing equations is, of course, not forbidden - on the contrary - but this story shows how far we have strayed from the original idea of self-action. It would not be such a harmful route if the mainstream did not elevate every step of this zigzag guesswork into "guiding principles": relativistic and gauge invariance, as well as renormalizability, which according to the mainstream opinion restrict the form of the interaction to $j\cdot A$. Nowadays too few researchers see these steps as a severe lack of basic understanding of what is going on. On the contrary, the mainstream ideology consists in dealing with the same wrong self-action mechanism patched with the same discarding prescription ("renormalization"), etc., accompanied by anthems to these "guiding principles" and their inventors. I do not buy it. I understand people's desire to look smart - as if they had grasped the principles of Nature - but to me they look silly instead.

Relativistic and gauge invariance, as properties of the equations, are borrowed from the inexact CED equations; hence, as "principles", they cannot guarantee the correctness of the theory's completion. Relativistic and gauge invariance (as equation properties) must be preserved, nobody argues, but making them "guiding principles" only leads to catastrophes, so (6) is not a triumph of "principles" but a lucky result of our difficult guesswork, done despite these misguiding principles. Principles do not think for us researchers; thinking is our duty. In fact, what we need in (1) is a small force like that in (6), but our derivation gives (2). What we then do is a lumbering justification of replacing automatically obtained bad functions with creatively constructed better ones. Although equations like (6) work satisfactorily in some range of forces, the lack of a mechanical equation with an exact radiation-reaction force in CED shows that we have not reached our goal and that those principles have let us physicists down. That is why we stick to renormalization as to the last resort.

Those who think the "derivation" above is mathematically correct should not forget the neglected cohesive forces, which have their own self-induction and radiation-reaction contributions too. Deliberately excluding them makes the "derivations" above even more doubtful.

Those who still believe in bare particles and their interactions, "discovered" by clever and insightful theorists despite the bare stuff being non-observable, believe in miracles. One of the miracles (rubbish, to be exact) is the famous "absorption" of wrong corrections by wrong (bare) constants in the right theory (i.e., the constants themselves absorb the corrections as a physical effect, without human intervention). I admit renormalization may sometimes "work", but I do not see any physics in it; on the contrary, I see wrong physics here. I believe we may and must reformulate our theories.

I have tried elsewhere to explain the luck of the renormalization prescription and the usefulness of reformulation, without success though. Hence my question: am I not fooling myself?

asked Jul 12, 2014 in Theoretical Physics by Vladimir Kalitvianski (102 points) [ revision history ]
edited Jul 15, 2014
OK, while I first looked with a certain sympathy on this question, the last paragraphs inserted now look to me a bit too much like an unjustified tirade against modern physics as a whole. None of the methods and principles mentioned negatively has been experimentally refuted; on the contrary, they are successful. The physics is still somewhat interesting, but the tone has become a bit too polemic ...

@Dilaton: The problems in constructing good equations started long, long ago and were widely discussed during the last century.

Experimental data are one thing; a theory is another. There may be many theories describing the same data to the same precision, because any theory is an approximation. You imply that the theory is unique as soon as it agrees with data to some precision. That is not a serious claim.

2 Answers

+ 7 like - 0 dislike

I hope the question at the end of your posting is genuine, and not just rhetorical.

I think you misunderstand what happens in renormalization. The goal there is not to arbitrarily and miraculously add or drop terms in a bare theory to make it work. Rather, the goal is to describe a theory that is consistent at a certain relaxed level of mathematical rigor and has certain desirable features (such as locality in QFT). What looks like a mathematically ill-conceived "derivation" is in fact just a sloppy description of a perfectly admissible limiting construction. The theory of interest is represented as a limit of other theories that have additional parameters (bare parameters and cutoffs) and behave well mathematically, but that do not have the wanted property, which appears only in the limit.

Only the limiting theory and its parameters are physically relevant, while the theories and parameters used in the approximation process are just mathematical crutches necessary to define the limiting theory. (This is like a Taylor approximation to the exponential: at finite order $n$, the resulting truncated exponential doesn't satisfy the differential equation that defines its physical meaning; only the limit $n\to\infty$ has a physical meaning.)
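A toy version of this analogy, with the truncation order $n$ playing the role of the cutoff: the order-$n$ Taylor polynomial of $e^x$ does not satisfy the defining equation $y'=y$, but the defect disappears in the limit $n\to\infty$.

```python
# Toy version of the analogy: the order-n Taylor polynomial of exp(x) is
# the "cutoff theory".  It does NOT satisfy the defining equation y' = y,
# but the defect vanishes as the cutoff n -> infinity.
from math import factorial, exp

def taylor_exp(x, n):
    """Order-n Taylor polynomial of exp at 0."""
    return sum(x**k / factorial(k) for k in range(n + 1))

def taylor_exp_prime(x, n):
    """Derivative of that polynomial."""
    return sum(x**(k - 1) / factorial(k - 1) for k in range(1, n + 1))

x = 1.5
for n in (2, 5, 10, 20):
    defect = taylor_exp_prime(x, n) - taylor_exp(x, n)       # y' - y
    error  = taylor_exp(x, n) - exp(x)                       # distance to the limit
    print(f"n = {n:2d}   y' - y = {defect:+.3e}   y - exp(x) = {error:+.3e}")
```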

Specifically, in theories involving mass renormalization, one represents the desired theory as a limit of a family of auxiliary theories involving a bare mass $m_0$ and a cutoff $\Lambda$. (In general, there are other coupling constants serving as bare parameters.) The cutoff makes the theory nonlocal, an unphysical feature. This is compensated by treating the bare mass $m_0$ also as unphysical. ("Bare" just translates to "unphysical".)

The auxiliary theories are mathematically well-defined, and one can derive from them equations for observables that resemble the physical observables. However, due to the unphysical starting point, the physical content of the resulting equations is somewhat implicit.

In particular, constants have a physical meaning only if they are expressible in terms of intrinsic properties of dynamical observables (such as frequencies, zeros, poles, residues, etc.). Such physical constants, when computed at a particular cutoff $\Lambda$, are usually complicated functions $f(\Lambda,m_0)$ of the cutoff and the bare mass. If an expression defining a physical constant is evaluated at fixed bare mass and different cutoffs, it varies rapidly with the cutoff, which is an unphysical situation.

However, if one takes an appropriate cutoff-dependent value $m_0(\Lambda)$ for the bare mass, one can keep $f(\Lambda,m_0(\Lambda))=f_{phys}$ constant, thus creating a physically realistic scenario. Assuming that $f$ is chosen to represent the mass, the physical mass in the cutoff theory is the one defined by $m(\Lambda,m_0(\Lambda))=m_{phys}$, not $m_0(\Lambda)$, which is still only a coefficient without a physical interpretation.

It turns out that for renormalizable theories a small, finite number of such renormalization conditions are sufficient to determine a set of physical constants and of cutoff-dependent bare parameters ("running" masses and other coupling constants) such that, in terms of these physical constants, the field equations have (at least in an appropriate approximation) a good limit when the cutoff is removed, i.e., when the limit $\Lambda\to\infty$ is taken. This is the necessary and sufficient condition for a renormalization scheme to "work".
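To make the logic concrete, here is a deliberately artificial toy example (not QED, just invented formulas with a logarithmic cutoff dependence): the bare mass $m_0(\Lambda)$ is chosen so that one renormalization condition $f(\Lambda,m_0(\Lambda))=f_{phys}$ holds, and then another "observable" of the cutoff theory acquires a finite limit as $\Lambda\to\infty$, even though $m_0(\Lambda)$ itself runs off to $-\infty$.

```python
# Deliberately artificial toy (not QED): a "physical constant" f computed
# in a cutoff theory with bare mass m0 and cutoff Lam contains log(Lam).
import math

g2     = 0.5      # a fixed "coupling"
f_phys = 1.0      # renormalization condition: f(Lam, m0(Lam)) = f_phys

def f(Lam, m0):
    """Toy 'physical constant' of the cutoff theory."""
    return m0 + g2 * math.log(Lam)

def m0_of(Lam):
    """Running bare mass chosen so that f stays equal to f_phys."""
    return f_phys - g2 * math.log(Lam)

def other_observable(Lam, m0):
    """Another toy prediction; it has a finite limit only along m0 = m0(Lam)."""
    return f(Lam, m0) ** 2 + g2 * Lam / (Lam + 1.0)

for Lam in (1e1, 1e3, 1e6, 1e12):
    m0 = m0_of(Lam)
    print(f"Lam = {Lam:8.1e}   m0(Lam) = {m0:+8.3f}   "
          f"f = {f(Lam, m0):.3f}   other = {other_observable(Lam, m0):.6f}")
```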

As regards QED, there are treatments that do not introduce any nonphysical terminology: the formulation of QED in Scharf's book "Quantum Electrodynamics" nowhere introduces bare particles, bare constants, or infinities. Scharf's book is mathematically rigorous throughout. He nowhere uses mathematically ill-defined formulas, but works with microlocal conditions appropriate to the behavior of the Green's functions. These enable him to solve recursively, by a formal power series ansatz, mathematically well-defined equations for the S-matrix, which is sufficient to obtain the traditional results.

answered Jul 13, 2014 by Arnold Neumaier (15,787 points) [ revision history ]
edited Jul 13, 2014 by Arnold Neumaier

Your post makes an essential error when you say that the mass in (1) was "the very physical mass". If it were physical, one could deduce from the equation a way to determine it in terms of dynamical observables, but one cannot. Thus even in (1), $m$ is just a bare parameter without physical meaning.

Your first comment shows that you do not understand what Scharf is doing. See http://www.physicsoverflow.org/20325/

As to Eq. (1), if one removes the external force (which is physically possible, since it is external), one ends up with an unphysical bare particle (since it has no self-field), which shows that $m$ is a bare mass only.

But also when one includes a self-field, one cannot interpret $m$ as a physical mass. The latter would have to be tested by the influence the particle has on an external apparatus for measuring the mass, and doing the calculation would most likely show that the physical mass is something different from $m$. I haven't done the calculation, but the physical mass would be deduced from the lowest order of a gravitational form factor, and it is extremely unlikely that this should be exactly the coupling parameter. If you'd like to substantiate your claim that $m$ is the physical mass, you'd have to give a proof by showing why a measurement gives this value.

You are immune to proof (just look at the amount of discussion generated in vain), so I discontinue this discussion.

@VladimirKalitvianski Feynman died 24 years ago. Feynman didn't appreciate string theory either. Even Weinberg was uncomfortable with it. Stop using the authority of dead people to support your claims.

By the way, I'm sure that Feynman wouldn't approve of "reformulation not renormalisation!" either : )

From now on, I reply to your comments only if they contain no derisive remarks directed towards the mainstream approach.

The remark about doctoring numbers is not mine, but Dirac's. OK, my questions remain unanswered.

I tried to answer your questions, which are about classical renormalization. Classical renormalization is not the same as quantum renormalization; their overlap is nonexistent.

+ 5 like - 0 dislike

This question is about the renormalization procedure applied to classical electrodynamics. In classical electrodynamics, renormalization is, perhaps surprisingly, more difficult and less consistent than in quantum electrodynamics.

The main difficulty is that the classical field contribution to the mass of an electron cut off at radius $R$ goes as $e^2/R$, which is linearly divergent. The classical field-mass becomes equal to the mass of the electron when $R$ is of the magnitude of the classical electron radius $e^2/m_e$. Making the classical electron smaller than this leads to a negative classical bare mass, and the unphysical bare pointlike-electron limit produces negative-mass inconsistencies as a by-product.

The basic inconsistency is that a negative-mass bare classical electron can accelerate to very high velocities, acquiring a larger and larger negative energy while radiating energy into the electromagnetic field, keeping the total energy fixed. These are the self-accelerating, exponentially blowing-up solutions which come from naively integrating the equation of motion with third derivatives and no special constraints on the motion.

Dirac's attempted solution to this was to reject the self-accelerating solutions by a teleological constraint: you demand that the solution to the third-order equation be well behaved asymptotically. This more or less produces physical behavior at normally long time and distance scales.

As you noticed in the body of your question, this is also automatically what happens when you treat the third-derivative radiation-reaction term perturbatively, because the perturbation series starts with solutions to the second-order Newtonian equations, and the perturbation series can be made to avoid the exponentially blowing-up solutions. This is why the perturbative description hides the fundamental inconsistency of the classical theory.

The Dirac approach, rejecting the self-accelerating solutions, gives physical motion, more or less, but it produces non-causal behavior: there is pre-acceleration of the electron in the Dirac theory. This means that if a classical electromagnetic step-function wave is going to hit an electron, the electron responds a little bit before the wave hits; the acausal response decays exponentially in time with a scale equal to the classical electron radius. This is why the classical renormalization program ultimately fails at the classical electron radius: you simply need structure there. The electron is just being artificially called a point in the Dirac approach; the pre-acceleration reveals it is really extended, and the scale for new structure demanded by classical physics is, as always, the classical electron radius.

But for quantum electrodynamics, the classical analogy is misleading, at least for small values of the fine-structure constant. The first miracle of quantum electrodynamics is that the self-field of the electron, despite the classical physical intuition that it should diverge linearly, is only logarithmically divergent. This was first demonstrated in the old perturbation-theory days by Weisskopf, but it is obvious today in relativistic perturbation theory: the contribution of the self-field of the electron comes from a diagram where you have an electron line with a photon going out and coming back to the same line, and the short-distance divergence arises when the proper travel time of the electron is short. This diagram, when regulated, diverges only as the log of the cutoff, the same as every other divergent one-loop diagram in QED. The modern covariant methods make the result too easy; they end up hiding an important physical difference between quantum and classical electromagnetism.

What is physically softening the classical linear divergence? The main reason, explained by Weisskopf, and made completely obvious in the formalism of Stueckelberg, Feynman, and Schwinger, is that the intermediate states for short times between photon emission and absorption involve a sum over both electron and positron states together, since for short propagation times you shouldn't separate relativistic electron states from positron states. The contribution calculated artificially using only electron intermediate states between emission and absorption is linearly divergent, just as in the classical theory, and of course the same holds for the positron self-mass when truncating to only positron intermediate states. But for the true relativistic field-theory calculation, you have to add both contributions up, and the cutoff has to respect relativistic invariance, as in Pauli-Villars regularization, so it must be the same for both positrons and electrons. The contributions of the positrons are opposite in sign to the contributions of the electrons, as the positron field is opposite in sign and cancels the electron field. When the distances become short, the main classical linear divergence is cancelled away, leaving only the relativistically invariant log divergence, which is the short-distance renormalization mass correction considered in renormalizing quantum electrodynamics. Notice that this completely ignores the classical issue of the response of the self-field, which is important for distances larger than the Compton wavelength. At those large distances, positron contributions can be naturally (nonrelativistically) separated from electron contributions and contribute negligibly to the field, so that it turns into the classical point field of the electron.

One physical interpretation is that the electron's charge is not concentrated at a point on any time-slice in quantum electrodynamics; rather, the back-and-forth path in time of the electron means that the charge on any one time slice has both electron and positron intersections, and the charge ends up fractally smeared out over a region comparable to the Compton wavelength, partially smoothing the charge and mollifying the divergence.

The ratio of the classical electron radius, where the classical renormalization program breaks down, to the Compton wavelength $1/m_e$, where relativistic quantum renormalization kicks in, is the fine-structure constant by definition. The fine-structure constant is small, which means that the linearly divergent corrections to the self-mass are replaced by the log-divergent corrections before getting a chance to make a significant contribution to the self-mass of the electron, or of any other pointlike charged particle (small, but not completely negligible: for pions, due to their Goldstone nature and small mass, the electromagnetic field explains the mass splitting, as was understood in the 1960s). So the classical inconsistency is side-stepped for a while, just because the divergence is softened.
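A quick numerical check of the two scales and their ratio (standard SI values for the constants):

```python
# Quick numerical check (SI values): classical electron radius,
# reduced Compton wavelength, and their ratio, the fine-structure constant.
import math

e    = 1.602176634e-19     # C
eps0 = 8.8541878128e-12    # F/m
m_e  = 9.1093837015e-31    # kg
c    = 2.99792458e8        # m/s
hbar = 1.054571817e-34     # J s

r_e      = e**2 / (4 * math.pi * eps0 * m_e * c**2)   # ~2.82e-15 m
lambda_C = hbar / (m_e * c)                           # ~3.86e-13 m

print(f"r_e      = {r_e:.3e} m")
print(f"lambda_C = {lambda_C:.3e} m")
print(f"ratio    = {r_e / lambda_C:.6f}  (= alpha ~ 1/137)")
```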

But the problems obviously can't fully go away, because if you were to twiddle the parameters and make the fine-structure constant large, you would make the classical electron radius larger than the Compton wavelength, at which point the classical self-field surrounding the electron, even in the nonrelativistic limit, would have more energy than the electron itself, and the renormalization procedure would break down, requiring unphysical properties at a normal, accessible scale.

But in the case of our universe, with a small fine-structure constant, the log running means that the problems don't kick in until an absurdly high energy scale. Still, they do kick in, at the enormously large Landau-pole energy, which is where the coupling of the electron runs big enough that the fine-structure constant is no longer small. This energy is larger than the Planck energy, so at this point we expect the field-theory description to break down anyway and be replaced by a proper fundamental gravitational theory, by string theory, or a variant.
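A rough sketch of that statement, using only the standard one-loop running with a single electron species (the actual pole position shifts with the full particle content and with higher orders, but the qualitative conclusion, a scale far above the Planck energy, survives):

```python
# Rough one-loop sketch (electron loop only):
#   alpha(mu) = alpha0 / (1 - (2*alpha0/(3*pi)) * ln(mu/m_e))
# The coupling grows logarithmically and blows up at the Landau pole,
# far above the Planck scale.
import math

alpha0   = 1.0 / 137.035999   # coupling at the electron mass scale
m_e      = 0.511e-3           # GeV
m_planck = 1.22e19            # GeV

def alpha(mu):
    return alpha0 / (1.0 - (2.0 * alpha0 / (3.0 * math.pi)) * math.log(mu / m_e))

for mu in (m_e, 91.19, m_planck):            # electron mass, Z mass, Planck mass
    print(f"mu = {mu:10.3e} GeV   1/alpha = {1.0 / alpha(mu):7.2f}")

mu_landau = m_e * math.exp(3.0 * math.pi / (2.0 * alpha0))   # where 1/alpha hits zero
print(f"Landau pole ~ 10^{math.log10(mu_landau):.0f} GeV   (Planck ~ 1e19 GeV)")
```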

This part is old and well known, and explains the disconnect between the modern quantum renormalization program and the old classical renormalization program. The quantum renormalization program works over an enormous range, but because of the technical differences in the divergences, it simply doesn't answer the classical renormalization questions.

But one can go further today and consider different cases, where one can gain some insight into the renormalization program. There is a case where you do have a large value of the fine-structure constant: magnetic monopoles. The magnetic charge is inverse to the electric charge. The magnetic monopoles in, say, the Schwinger model (SU(2) gauge theory Higgsed to U(1) at long distances) can't be inconsistent, as the theory is asymptotically free at high energies. But in this case the monopole is not pointlike; instead, it is an extended field configuration. The soliton size is determined from the Higgs-field details, and when the Higgs field is physical, there is no renormalization paradox for monopoles either: monopoles in consistent gauge field theories are simply solitons which are bigger than the classical monopole radius. They do have a third-derivative radiation-reaction term in the equation of motion at long scales, but it isn't a paradox, because the pre-acceleration can be understood as a response of parts of the monopole to the field, and it isn't acausal. Runaway solutions can't occur, because there is an energy bound for a moving solution: there is never any negative energy density anywhere, unlike in the classically renormalized point-particle theory, so you can't balance a negative mass moving faster and faster (and gaining negative energy) against a field gaining positive energy.

It is interesting to consider the classical issues in the gravitational context, now that we have an understanding not only of the quantum renormalization program, but also of quantum gravity in the form of string theory, which solves the quantum renormalization problem for good. The classical long-distance version of string theory is general relativity, and in this case the classical charged point-particle analog is an electrogravitational solution, a black hole of mass $m$ with charge $e$. In order for the black hole to be sensible, $m>e$, and in this case the classical solution is classically well behaved, but it is not pointlike: it has a radius of size $m$. The pointlike limit requires taking $m$ to zero and therefore $e$ to zero, and the electromagnetic self-energy $e^2/m$ is not only less than $m$, it is less and less of a contribution to the mass as $e$ gets smaller.

In the case that you choose $e$ large, it seems that the extremal black hole would get a negative "bare mass". But the behavior of the larger-than-extremal black hole is always perfectly causal. The apparent negative mass required here is nonsense; you are just ignoring the negative gravitational field energy: the extremal black hole can be interpreted as the case where the total field energy, electromagnetic plus gravitational, is equal to the total mass.

In string theory, the quantum analogs of the extremal black hole solutions are the fundamental objects; these are the strings and branes. Here there is an interesting reversal: the classical restriction for black holes, $m\ge e$, is reversed in string theory for the special case of the lightest charged particle. In this case, the requirement that a black hole be able to decay completely strongly suggests that the lightest charged particle must obey the opposite inequality, that is, $m\le e$ for the lightest quantum.

answered Jul 30, 2014 by Ron Maimon (7,720 points) [ revision history ]
edited Jul 30, 2014 by Ron Maimon

Hi Ron, I still have to read this, but it always seemed to me that this question and that submission are closely related, so could this answer serve as a review there too?

Dilaton, it cannot serve as a review because it does not consider my paper with its arguments.

This answer is about the failure of classical renormalization, the old program of renormalizing classical electrodynamics. It's about the radiation-reaction runaway solutions and their interpretation, and why this failure doesn't appear in quantum electrodynamics until the Landau scale. It has no real relation to Kalitvianski's paper, which has a model inside, not this old classical nonsense.

Thank you, Ron, for your valuable answer! I am ready to believe in bare particles and their interactions, guessed correctly by insightful physicists. I used to think that the bare mass was a rug brought in by some physicists to cover their shit. But now I am sure the physicists guessed the interaction correctly and discovered (predicted) bare particles as a by-product. Interactions change constants ('t Hooft)! How could I doubt it when it has been shown so many times in so many textbooks? I was blind. Now I see; I see the physics of very short distances. Although it cannot be seen in a microscope, I see it. It is in my mind.

And if one day someone advances a crazy interaction force on the right-hand side of Eq. (1) and the same crazy term on the left-hand side of Eq. (1) as a property of the bare particle (as in Eq. (4) with (5)), I will believe in both - in the craziness of the bare particle and in the craziness of its interaction. Too bad these two terms cancel each other. I would love to see them in the calculation results. But, probably, it is their fate, their destiny, to disappear for good. Nevertheless, they will stay in my mind. Because physicists are not crazy by definition, but insightful.

No, frankly, Ron, do you think it is impossible to write down mathematically an equation system in CED with the right physics, including exact conservation of energy-momentum? Tell me, is it impossible? Some projectile pushes the electron, the latter accelerates and radiates, and it is impossible to make ends meet in this simple problem?

Another thing: I see that in your world bare particles exist, and that it goes without saying. I thought we dealt with a physical particle in (1), and adding any correction to its mass was undesirable. You say that in QED the correction is smaller, so QED is much better. Then why don't you leave it in the calculation results? Why does it disappear if it has some physics in it? We calculate and calculate, analyse the great physics of our calculation, and in the end nothing of it remains. To be precise, let us consider an external electron line. It is a Dirac bispinor, for short. What is the effect of taking into account all radiative corrections (self-energy terms) in it? What does it become? The same Dirac bispinor?

With a lattice at the Planck scale, or any other cutoff at the Planck scale, the Dirac bispinor, after all radiative corrections, stays a bispinor with slightly different mass and slightly different charge.

We leave the divergence in the calculation results! The logarithmically divergent renormalization of the charge and mass is not a mathematical trick; it doesn't just get rid of the infinities. They are still there, and they show up as physical effects in higher and higher energy scattering. The counterterm contribution stays in the results! The infinity doesn't go away by subtraction; it still shows up in the physics.

For example, when you scatter an electron from a photon at center-of-mass energy squared $s$, the deflection in QED is given by the Klein-Nishina formula, and if you do a best fit to find the electron charge $e$ from the actual scattering in pure QED, you find a different value of $e$ at each $s$: $e$ grows logarithmically with $s$ and never stops growing. As the energy gets larger, $e$ also gets larger, and eventually, at unphysically large energies, the theory stops making sense, because $e$ is greater than 1. This is the running coupling constant, and it is the original renormalization group of Stueckelberg and Petermann, Gell-Mann and Low: the subtraction point is arbitrary, so you can recenter the calculations at any energy $E$.

There are no true bare particles in any modern formulations, only particles defined at a cut-off scale $\Lambda$. The cutoff can be mathematically well defined, for instance a lattice, so just imagine QED on a lattice. With a Planck-sized lattice, a small lattice coupling and a small lattice electron mass parameter, the long-wavelength solutions have a light spin-1/2 particle with only a somewhat different mass, and a somewhat different coupling.

You then introduce a second scale, just for mathematical convenience, this is the subtraction scale, and define all your calculations relative to the subtraction scale. Instead of asking how the mass and charge depend on the lattice, you can then ask how the mass and charge of the best-fit interactions depend on the subtraction scale. It's just for mathematical convenience, the subtraction scale is arbitrary, but the divergences in the theory show up as actual physical changes in the predicted scattering as you change the energy. This doesn't make sense when the subtraction scale approaches the lattice scale, but for long-distances, you can define the dependence with no problem.

If you ignore the lattice and just consider the perturbation series (which now depends only on the subtraction scale instead of the lattice scale), the coupling as a function of the energy goes slowly to infinity as the energy gets large. This is a sign that you need something else to happen at large energy, like a lattice, or strings, or whatever. The energy is sufficiently enormous in this case that nobody cares.

The fundamental issue is that there are two different notions of particle:

1. Lagrangian particle: a field you path integrate over.

2. S-matrix particle: an energy eigenstate determined by 4-momentum, mass and spin.

The Lagrangian particles are mathematical objects; they are defined by doing a path integral, by mathematical equations. The S-matrix particles are what you observe in experiments; they are defined by the eigenstates of the Hamiltonian, by the solutions to the equations. The perturbation theory at long distances, if you use a physical subtraction, uses the real physical particles to do the perturbation theory, and because it ignores the lattice, it chooses the counterterms in the Lagrangian to zero out the effect of higher loops.

But it's a mathematical trick; you don't have to use physical subtraction. The subtraction procedure is largely arbitrary and chosen for convenience, so you can use a different process, like dimensional regularization / minimal subtraction with a free-floating subtraction point. Then you are expanding in terms of particles whose interactions gradually change over many decades, approaching more or less the lattice values as the subtraction point approaches the lattice scale, where the whole renormalized perturbation theory breaks down.

For QED with a not-too-insanely-big cutoff, the Lagrangian particles, the electron and the photon fields, are qualitatively of the same type as the physical S-matrix particles, except that they have somewhat different values of the parameters. The logarithmic dependence means that the change is not very big for ordinary values of the cutoff. The cutoff is a real thing: you can make lattice QED, and then it's the lattice scale; you can take Pauli-Villars QED, and then it's the mass of the regulator.

In QCD it's exactly the opposite: the S-matrix spectrum consists of mesons and baryons, while the Lagrangian contains quarks and gluons. So the bare (Lagrangian) particles are quarks, but they are not S-matrix particles; you don't see them unconfined in the physical spectrum.

@RonMaimon: Thank you, Ron, you are a very generous and patient person!

I understand what you are writing. Still, there are some points to clarify. The lattice is necessary for some sort of regularization in the standard version of QED. In this sense it is like any other regularization, and I do not mind: the standard QED needs it, as well as renormalization.

Concerning the use of the Klein-Nishina formula for fitting the charge $e$ as a function of $s$: it is slightly strange, because once fitted, $e$ should remain constant. Normally we fix it with low-frequency photon scattering, the Thomson formula, to be exact. If we apply the Klein-Nishina formula for fitting $e$ at higher $s$, it may well happen that, due to the inaccuracy of this formula, we transfer its inaccuracy to the charge, as if the charge indeed depended on $s$. I hope you meant something else here. I guess you speak of a bare charge, which is not constant, and I agree, because it was devised so. Call it a charge on a lattice, whatever; the meaning is the same.

One more thing. Once I encountered a wrong perturbative series (Appendix 3). It was wrong because of my wrong perturbative treatment of the "interaction" (or "perturbation") operator. The wrongness started from the third order in powers of the small parameter. This small parameter could be determined from independent measurements and from exact calculations, i.e., from calculations without this error. But imagine that we had no correct calculation results and tried to determine the value of this parameter only by comparing experiments with the wrong series at our disposal. Then we would obtain a wrong value, and this value could then depend on something else instead of being a constant by definition. It is a very dangerous situation. You can always fit the parameter by making it "run" somehow, no problem, but there would be "wrong physics" in it.

I feel a little bit morally tired, seeing my low reputation here as a researcher. Probably I will take a break.




