PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow.


  Controversial discussions about renormalisation, EFT, etc ...

+ 1 like - 0 dislike
36213 views

This thread is meant to contain general discussions and points of view about renormalisation and EFT which are not shared by the current mainstream physics community.

asked Sep 8, 2014 in Chat by SchrodingersCatVoter (-10 points) [ revision history ]
edited Mar 27, 2015 by dimension10

6 Answers

+ 1 like - 0 dislike

I was wondering if your complaint is simply about the form of the interaction Hamiltonian. In three dimensions, renormalization of superrenormalizable field theories is mathematically completely understood.

If you look at 3d field theory, the interaction Hamiltonian for scalar $\phi^4$ can be written with the counterterm absorbed in the interaction, so that the physical mass is the parameter in the equation. The equation for stochastic time evolution is explicitly written down by Hairer with the linear term equal to the mass of the physical state, and the interaction term includes the counterterm precisely.

The limiting procedure to define the theory then moves the counterterm linearly along to reproduce the correct results in the limit. The procedure is entirely rigorous, and you can't complain.
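For concreteness, the stochastic evolution equation referred to here has, schematically, the form

$$\partial_t \phi \;=\; \Delta\phi \;-\; m^2\phi \;-\; \lambda\phi^3 \;+\; \lambda\,C_\epsilon\,\phi \;+\; \xi,$$

where $m$ is the physical mass, $\xi$ is space-time white noise, and the counterterm $C_\epsilon$ diverges as the regularization $\epsilon \to 0$ in exactly the way needed for the limit to exist. (This is a sketch of the structure, not Hairer's precise statement; in three dimensions a second, $O(\lambda^2)$ counterterm is also required.)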

The same thing would be possible in 3d QED, which also has the same radiation reaction issues as in 4d QED, except that here the coupling is superrenormalizable, so one can define the theory nonperturbatively in a similar way. This explains what people would like to do to define renormalized theories, and that it is completely consistent. It also does what you want, in that the interaction term includes the counterterms inside naturally, and is understood to all orders in a sense, because you know how to take the limit physically.

This procedure is not applicable in 4d, because the interaction is not superrenormalizable. But your complaints don't depend on the dimension of space.

answered Sep 9, 2014 by Ron Maimon (7,730 points) [ no revision ]

Yes, to a great extent it is a matter of the form of the interaction Hamiltonian. To a great extent, but not only: the free Hamiltonian is essential too. In my "Toy Model" paper it is explained in detail what may be wrong, what one obtains after renormalization, and what one obtains after the soft-mode summation. The correct Hamiltonian has a physically reasonable free Hamiltonian, a physically reasonable interaction Hamiltonian, and the corresponding physics of permanently interacting constituents.

It is possible to do the perturbation theory using the "physically correct Hamiltonian" as a starting point, meaning a Hamiltonian where the physical mass and zero-wavelength charge are the terms in the bare Lagrangian. That's what everyone did in the 1950s and 1960s, before they introduced some convenient terms to allow them to do it without a fixed energy, to deal with the special cases of massless or confined particles, complications which don't appear in pure QED, at least not if you are concerned with low-energy physics only.

This changes nothing at all regarding the non-perturbative behavior, because the series doesn't approximate anything well-defined in the continuum limit as far as anyone can see, and you haven't said anything which can change this conclusion (you don't even know this conclusion).

The sum of the infinite series (or any truncation) is not sensible at large alpha. It is also not sensible at any alpha once log(p/m_e) is larger than 1/alpha. This means that it makes no difference how you write it down: you can't make the theory well defined, it will never be well defined, because it is fundamentally incomplete.

If you wish to complete QED, you should know that it is not clear you will succeed, because then you would need to understand QED with a large coupling. The only way we know to make sense of QED with large coupling is to add more stuff. The first thing is monopoles, the next thing is supersymmetry, and in certain cases, like Argyres-Douglas points, you can get interacting monopoles and charges which make a consistent theory because they are embedded as low-energy limits in a larger theory which makes sense. The extent to which this is possible and defines QED is still debated, because we can't make sense of things without monopoles and supersymmetry, and we certainly can't make sense of physical QED with just an electron and photon and any given nonzero fine structure constant. All we can do is write a perturbation series of dubious validity which agrees with experiment very well, but surely breaks down.

OK, Ron, I am not ready to answer, I am exhausted.

+ 0 like - 0 dislike

@RonMaimon: "Can you tell me what sense QED can fail to make sense ...?"

Yes, I can. For that, let me invoke a correct CED as an example. As a good approximation to the correct CED, we can take a CED with $\dot{\mathbf{F}}_{ext}$ as the radiation-reaction force. What does such a CED predict? The energy-momentum of the whole system is conserved (approximately with $\dot{\mathbf{F}}_{ext}$, but that is not essential at this moment). However, when we consider a pair of oppositely charged particles in a bound state, the radiated energy will be infinite in the end (a collapse). To improve the agreement with experiment we are obliged to use QM, which is a change of equations rather than a fitting of constants within CED. Similarly with QED: it does not describe all particles, and in this respect it fails. But as a model, it can be formulated correctly, without bare stuff/counter-terms.
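For reference, the $\dot{\mathbf{F}}_{ext}$ form mentioned here is the standard reduction-of-order rewriting of the Abraham-Lorentz self-force: substituting $m\dot{\mathbf{v}} \approx \mathbf{F}_{ext}$ into the third-derivative term gives

$$\mathbf{F}_{rad} \;=\; \frac{2e^2}{3c^3}\,\ddot{\mathbf{v}} \;\approx\; \frac{2e^2}{3mc^3}\,\dot{\mathbf{F}}_{ext},$$

which removes the runaway solutions of the original third-order equation (in Gaussian units).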

answered Sep 9, 2014 by Vladimir Kalitvianski (102 points) [ no revision ]
reshown Sep 9, 2014 by Ron Maimon
Most voted comments

@RonMaimon: The more I think of a strong coupling regime, the more I get confused. We know that an external field can create real pairs, for example, in a condenser. These pairs separate from each other and neutralize the opposite charges on the condenser plates, so the final system configuration is more stable (with a smaller electric field). A one-electron state in a QED with high $e$ cannot decay into pairs because of conservation laws, it seems to me, but maybe I am missing something. For example, there is a formula for the ground state energy that contains $Z$ and $\alpha$ and which becomes imaginary at high $Z$. Normally they say that the one-particle approximation in an external field is not valid anymore and such a state creates pairs. I do not know what we may expect as an exact solution for a single charge in QED in the regime of strong coupling. I admit that such an imaginary energy may well appear somewhere and the theory will break. I have no clear idea.

Arnold, I mean QED with counter-terms where the physical constants remain the same and do not get any perturbative corrections.

Because if you claim it is mathematically well defined at one alpha, it should be well defined at all alpha. So show me how it is well defined for large alpha, and if you can't, stop claiming you see it is well defined at small alpha. The only difference between the two is that at small alpha perturbation theory takes a long time to crap out and give nonsense.

The way to see that the small alpha perturbation theory craps out at either extremely high order at normal energies, or else at first order at super-high energies even when alpha takes the physical value is to resum the terms with the most logarithms in the subtracted perturbative calculation of scattering.

This resummation gives the same exact nonsense for scattering at high energy (with small alpha) that you see for scattering in large alpha limit. The shorthand way of saying this is that the running coupling is large at large center-of-mass scattering, but the result does not require you to say the coupling runs. You can say the coupling is fixed at the physical value, as you like to do, and then you are simply summing alpha log(p/m_e) + alpha^2 log(p/m_e)^2 + alpha^3 log(p/m_e)^3 etc to find the scattering at large p.

The appearance of a log(p/m_e) term at one-loop order is experimentally verified; it can't be gotten rid of. The log(p/m_e)^2 term at two loops also appears experimentally, and the all-orders extrapolation is fixed by unitarity and relativity; it is the only perturbation theory possible (and your formulation would have to reproduce it).

From these logarithms, when log(p/m_e) is order 1/alpha for any nonzero value of alpha, you have a sum with an effective alpha order 1, and when log(p/m_e) is 1,000,000/alpha the sum has an effective alpha of order 1,000,000. But you can't make sense of this perturbation series.
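Spelled out, the leading-logarithm sum described here is a geometric series. With the standard one-loop vacuum-polarization coefficient $1/3\pi$ (for a single lepton species), it resums to

$$\alpha_{\rm eff}(p)\;=\;\alpha\sum_{n\geq 0}\left(\frac{\alpha}{3\pi}\,\log\frac{p^2}{m_e^2}\right)^{\!n}\;=\;\frac{\alpha}{1-\frac{\alpha}{3\pi}\log\left(p^2/m_e^2\right)},$$

which blows up (the Landau pole) when the logarithm reaches $3\pi/\alpha$, i.e. exactly when $\log(p/m_e)$ is of order $1/\alpha$ for any nonzero $\alpha$.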

I am asking you to stop being irresponsible and stop claiming you can fix QED! If you could fix QED you would be able to say what happens at strong coupling, and you can't tell me what happens at strong coupling, by your own admission. What that means is that you are doing the same stupid perturbation theory that everyone else has been doing, with at best a philosophical renaming of terms.

OK, thanks, Ron, for explaining the QED failure at high energies of the projectile. I am so tired with discussions with Arnold that I give up.

Just for your information (from my experience): if you only sum up the leading logarithms (rather than all terms), you can obtain a wrong result.

Don't get me wrong, I don't think that this is the last word regarding QED. But to get beyond this, you should know the leading logarithm sum behavior. It's not something magical or complicated, it just is what it is.

To get beyond this, you need a non-perturbative formulation, and you can do this with a lattice. Then you can simulate pure QED very easily (people have been doing this in recent years). The nice thing is that on the lattice, you can also study Argyres-Douglas points, supersymmetric theories, and also just phenomenological actions with monopoles of variable mass and spin.

Schwinger likely believed that adding monopoles would fix QED nonperturbatively; he worked on this in the 1960s, and likely so did Dirac. But the justification for this belief only became strong in the 1990s, with the Argyres-Douglas work on Seiberg-Witten solutions of supersymmetric theories. You don't need to follow all that work to simulate monopole theories, and it is possible that there is a simple QED modification which is clearly sensible, but it would at least require monopoles to deal with strong-coupling configurations.

The reason is that strong coupling QED on a lattice is also understood since Wilson and it is qualitatively different from weak coupling QED, in that it confines! So this means that there is a qualitative transition in QED on a lattice, between weak and strong coupling regimes, and this transition has not been studied or understood, and it can be, because you can calculate it on a computer.
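As an illustration of how cheap such a lattice study is, here is a minimal Metropolis sketch, my own hypothetical example (not from the discussion), for compact U(1) in two dimensions, the simplest relative of lattice QED. The lattice size `L = 8` and coupling `beta = 1.0` are arbitrary illustrative choices; in 2d the average plaquette can be checked against the exact answer $I_1(\beta)/I_0(\beta) \approx 0.446$ at $\beta = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta = 8, 1.0                    # lattice size and coupling (illustrative values)
theta = np.zeros((L, L, 2))         # U(1) link angles theta[x, y, mu]

def plaquette(x, y):
    # oriented plaquette angle with lower-left corner at (x, y)
    return (theta[x, y, 0] + theta[(x + 1) % L, y, 1]
            - theta[x, (y + 1) % L, 0] - theta[x, y, 1])

def local_cos(x, y, mu):
    # sum of cos(plaquette) over the two plaquettes containing link (x, y, mu)
    if mu == 0:
        return np.cos(plaquette(x, y)) + np.cos(plaquette(x, (y - 1) % L))
    return np.cos(plaquette(x, y)) + np.cos(plaquette((x - 1) % L, y))

def sweep(delta=1.0):
    # Metropolis update of every link for the Wilson action S = -beta * sum cos(theta_P)
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old, s_old = theta[x, y, mu], local_cos(x, y, mu)
                theta[x, y, mu] = old + rng.uniform(-delta, delta)
                if rng.random() >= np.exp(beta * (local_cos(x, y, mu) - s_old)):
                    theta[x, y, mu] = old       # reject

for _ in range(100):                 # thermalization
    sweep()
meas = []
for _ in range(200):                 # measurement
    sweep()
    meas.append(np.mean([np.cos(plaquette(x, y))
                         for x in range(L) for y in range(L)]))
avg_plaq = float(np.mean(meas))      # exact 2d result: I1(beta)/I0(beta), about 0.446
```

Real lattice-QED studies of the confining/Coulomb transition use 4d and much more statistics, but the update logic is exactly this.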

Thanks a lot!

Most recent comments

Ron, as I said, I can guarantee nothing for high alpha.

Concerning Landau, he was trying to figure out what the value of the bare charge should be in order to observe the real charge $e$. Whatever finite value $e_B$ the bare charge takes, it is completely screened. That is why the only way to "choose" $e_B$ is to make it "run". It proves nothing but the impossibility of such screening physics. Counter-terms subtract it.

That's what you read in books. What Landau saw is exactly what you are seeing in attempting to do calculations at large alpha, and further that the inconsistencies don't go away as alpha gets small, because as you make alpha smaller, the inconsistencies only move to smaller distances.

If you can guarantee nothing for high alpha, you haven't guaranteed anything at small alpha either. If you make the claim that you can define physical scattering at some finite alpha, you should also at the same time sort out what happens at large alpha. If you can't, you don't know what is going on at small alpha either.

+ 0 like - 0 dislike

Continued from a tangential discussion at http://www.physicsoverflow.org/31682

The sentence

How can you "generally believe in theory" if the high energy behavior is certainly handled in a wrong way and you know it?

also contains (again) the unscientific claim, founded neither theoretically nor experimentally, that there is something wrong with how the high-energy behavior is handled in current modern (quantum field) theories.

answered Jun 7, 2015 by Dilaton (6,240 points) [ revision history ]
edited Jun 7, 2015 by dimension10
Most voted comments

I found the OP's (of the other post) approach very poorly presented (in spite of his second attempt) - too poorly to even make a constructive comment. This is enough to close it.

But even a better presentation would seem to me to be far too simple-minded, level $<$ graduate+, and hence closeable. (All infinities cancel exactly when handled properly, but because of nontrivial relations. The latter are not present if one doesn't take a proper limit, i.e., carry through the renormalization; so the OP's simple-minded idea is seriously flawed.)

@VladimirKalitvianski: Scharf _is_ mainstream; no one in mathematical physics regards a theory with cutoff to be a good relativistic QFT. Poincare invariance is a must, and it is valid only if there is no cutoff.

@VladimirKalitvianski: We have lots of them  constructed rigorously in 2 and 3 dimensions, independent of Scharf.

We also have lots of them in 4 dimensions, though only on the level of formal series - the first one being QED as constructed by Feynman, Tomonaga, and Schwinger with three different approaches, for which they received the Nobel prize. Any form of renormalized perturbation theory constructs these in a Poincare invariant form. Some of them use cutoffs as a temporary scaffolding for the construction; causal perturbation theory doesn't. In any case, the final result is in all cases free of cutoffs and bare stuff.

@VladimirKalitvianski: Then your previous comment is misleading.

@VladimirKalitvianski: Sarcasm has no place in scientific discussions. By being sarcastic you only poison the atmosphere.

Most recent comments

I just underlined sarcastically the difference between your causal position and QFT-with-cutoff ideology and lattice practice.

By sarcastic remarks I make you write better-thought responses.

+ 0 like - 0 dislike

This comment discussion originated below an answer to a question about Stückelberg renormalization, which as such has nothing to do with integrating out any degrees of freedom ...

answered Oct 25, 2016 by Dilaton (6,240 points) [ no revision ]
Most voted comments

Renormalization indeed redefines the parameters, but it is not just an innocent "parameter transformation in such a way that the limit can be taken". There is no freedom in the parameters: they are (re)defined in such a way as to give the desired answer. This is what P. Dirac called "doctoring numbers". And I say it even more simply: in order to obtain the right result from a wrong one, one needs to discard the wrong part and keep the right one.

Any other choice of parameters leads to wrong results, so no freedom exists if you care about the correctness of the final result.

In QFT we remove unnecessary corrections, not the secular terms.

The term " renormalization" is applied in statistics too, after changing the number of events in a sample, roughly speaking. There are different fields where the term "renormalization" is used and the procedure means namely it. It has nothing in common with my "interpretation".

The secular terms in relativistic QFT are infinite, hence must be removed. Renormalization is therefore necessary in relativistic QFT. If you consider QFT in a 1-dimensional space-time where space consists of a single point only you get precisely the anharmonic oscillator.

Thus the relation is immediate, and not far-fetched like your example from statistics.

My oscillator toy model is one soft QED oscillator rather than a "far-fetched" thing. OK. I do not want to continue arguing with you.

@Dilaton: Some light on whether Stückelberg and Wilson renormalization approaches have something in common:

Listen to what David Gross says at t=20:33 about Dyson's interpretation of the RG.

Most recent comments

@VladimirKalitvianski: Yes, I am biased towards the truth and against speculation.

I explained that renormalization is already used for classical nonrelativistic anharmonic oscillators, and that there the bare frequency is physically meaningless, too. This confirms the large existing literature on classical and quantum, nonrelativistic and relativistic renormalization, and shows that your interpretation of the renormalization process is faulty.

You, on the other hand, define a nonrelativistic oscillator toy model (which is fine) but then use it to speculate on relativistic issues where no literature exists and your model gives no insight into how to proceed in a covariant way. It is only these speculations that I reject.

@ArnoldNeumaier: Nothing shows that my interpretation of the renormalization process is faulty. Calling it a speculation is not a proof, Arnold.

+ 0 like - 2 dislike

Discussions about one of 't Hooft's papers:

There was a period when renormalization was considered a temporary remedy, luckily working in a limited set of theories and expected to disappear within a physically and mathematically better approach. P. Dirac called renormalization “doctoring numbers” and advised us to search for better Hamiltonians. J. Schwinger, too, underlined the necessity of identifying the implicit wrong hypothesis whose harm is removed by renormalization, in order to formulate the theory in better terms from the very beginning. Alas, many tried, but none prevailed.

In his article G. ‘t Hooft mentions the skepticism with respect to renormalization, but he says that this skepticism is not justified.

I was reading this article to understand his way of thinking about renormalization. I thought it would contain something original, insightful, clarifying. After reading it, I understood that G. ‘t Hooft had nothing to say.

Indeed, what does he propose to convince me?

Let us consider his statement: “Renormalization is a natural feature, and the fact that renormalization counter terms diverge in the ultraviolet is unavoidable”. That is rather too strong to be true: an exaggeration without any proof. But probably G. ‘t Hooft has had no other experience in his research career.

“A natural feature” of what, or of whom? Let me be precise then: it may be unavoidable in a stupid theory, but it is unnatural even there. In a clever theory everything is all right by definition. In other words, everything is model-dependent. However, G. ‘t Hooft tries to create the impression that there may not be a clever theory, an impression that the present theory is good, ultimate and unique.

The fact that mass terms in the Lagrangian of a quantized field theory do not exactly correspond to the real masses of the physical particles it describes, and that the coupling constants do not exactly correspond to the scattering amplitudes, should not be surprising.

I personally, as an engineering physicist, am really surprised – I am used to equations with real, physical parameters. To what do those parameters correspond then?

“The interactions among particles have the effect of modifying masses and coupling strengths.” Here I am even more surprised! Who ordered this? I am used to the independence of masses/charges from interactions. Even in the relativistic case, the masses of the constituents are unchanged, and what depends on interactions is the total mass, which is calculable. Now his interaction is reportedly such that it changes the masses and charges of the constituents, and this is OK. I was used to thinking that masses/charges were characteristics of interactions, and now I read that in fact interactions modify interactions (or equations modify equations ;-)).

To convince me even more, G. ‘t Hooft says that this happens “when the dynamical laws of continuous systems, such as the equations for fields in a multi-dimensional world, are subject to the rules of Quantum Mechanics”, i.e., not in everyday situations. What is so special about continuous systems, etc.? I, on the contrary, think that this happens every time a person is too self-confident and does something stupid, i.e., it may happen in everyday situations. You just have to try it if you do not believe me. Thus, when G. ‘t Hooft talks me into accepting perturbative corrections to the fundamental constants, I wonder whether he has checked his theory for stupidity (like the stupid self-induction effect) or not. I am afraid he hasn't. Meanwhile the radiation reaction is different from the near-field reaction, so we make a mistake when we take the latter into account. This is not a desirable effect; that is why it is removed by hand anyway.

But let us admit he managed to talk me into accepting the naturalness of perturbative corrections to the fundamental constants. Now I read: “that the infinite parts of these effects are somehow invisible”. Here I am so surprised that I am screaming. Even a quiet animal would scream after his words. Because if they are invisible, why was he talking me into accepting them?

Yes, they are very visible, and yes, it is we who should make them invisible; this is called renormalization. This is our feature. Thus it is not “somehow”, but due to our active intervention in the calculation results. And it works! To tell the truth, here I agree. If I take the liberty to modify something for my convenience, it will work without fail, believe me. But it would be better and more honest to call those corrections “unnecessary”, since we subtract them.

How does he justify this intervention of ours in our own theory's results? He speaks of bare particles as if they existed. If the mass and charge terms do not correspond to physical particles, they correspond to bare particles, and the whole Lagrangian is a Lagrangian of interacting bare particles. Congratulations, we have figured out the bare particles by postulating their interactions! What an insight!

No, frankly, P. Dirac wrote his equations for physical particles and found that this interaction was wrong; that is why we have to remove the wrong part with the corresponding subtractions. No bare particles were in his theory project or in experiments. We cannot pretend to have guessed a correct interaction of bare particles. If one is so insightful and super-powerful, then try to write a correct interaction of physical particles; it is already about time.

Confrontation with experimental results demonstrated without doubt that these calculations indeed reflect the real world. In spite of these successes, however, renormalization theory was greeted with considerable skepticism. Critics observed that ”the infinities are just being swept under the rug”. This obviously had to be wrong; all agreements with experimental observations, according to some, had to be accidental.

That’s a proof from a Nobelist! It cannot be an accident! G. ‘t Hooft cannot provide a more serious argument than that. In other words, he insists that in a very limited set of renormalizable theories, our transformations of calculation results from the wrong to the right may be successful not by accident, but because this unavoidable-but-invisible stuff does exist in Nature. Then why not go farther? With the same success we could advance such a weird interaction that the corresponding bare particles would have a dick on the forehead to cancel its weirdness, and this shit would work, so what? Do they exist, those weird bare particles, in your opinion?

And he speaks of gauge invariance. Formerly it was a property of the equations for physical particles, and now it has become a property of the bare ones. Gauge invariance, relativistic invariance, locality, CPT, spin-statistics and all that are properties of the bare particles, not of the real ones; let us face this truth if we take our theory seriously.

I like the interaction with counter-terms much better. First of all, it does not change the fundamental constants. Next, it shows the imperfection of our "gauge" interaction: the counter-terms subtract the unnecessary contributions. The cutoff-dependence of the counter-terms is much more natural, and it shows that we are still unaware of the right interaction: we cannot write it down explicitly; at this stage of the theory's development we are still obliged to repair the calculation results perturbatively. In a clever theory, the Lagrangian contains only unknown variables, not the solutions, but presently the counter-terms contain solution properties, in particular the cutoff. The theory is still underdeveloped, that much is clear.

No, this paper by G. ‘t Hooft is neither original nor accurate; that's my assessment.

answered Sep 8, 2014 by Vladimir Kalitvianski (102 points) [ no revision ]

 You are very surprised about effects visible already in ordinary quantum mechanics.

You quote ‘t Hooft saying

“The fact that mass terms in the Lagrangian of a quantized field theory do not exactly correspond to the real masses of the physical particles it describes [...].”

It is indeed as little surprising as the fact that an anharmonic oscillator has a frequency (excitation energy divided by Planck's constant) different from that of the harmonic oscillator obtained by dropping the anharmonic terms.

This is the precise 1+0-dimensional quantum mechanical analogue of what happens in 1+3-dimensional quantum field theory, where $p^2=m^2$ reduces to $\hbar\omega=p_0=m$. The mass renormalization counterterm is just the frequency shift due to the interaction.

The harmonic oscillator is in many contexts (e.g., quantum chemistry) a very convenient but fictional bare oscillator, just used to be able to formulate physical anharmonic oscillators (bond stretches and bends) and to be able to do perturbation theory.

The only dimensionally induced difference is that in 1+0D and 1+1D the mass renormalization is finite, while in 1+2D and 1+3D one needs a limiting process that, if presented sloppily (as in many textbooks), leads to infinite mass renormalization.
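The anharmonic frequency shift is easy to check numerically. The following is my own illustrative sketch (not from the discussion): diagonalize $H = \tfrac{1}{2}p^2 + \tfrac{1}{2}x^2 + \lambda x^4$ in a truncated harmonic-oscillator basis (with $\hbar = m = \omega = 1$; the basis size and $\lambda = 0.1$ are arbitrary choices) and compare the physical excitation frequency with the bare frequency 1:

```python
import numpy as np

N, lam = 60, 0.1                    # basis size, quartic coupling (illustrative values)
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), 1)      # annihilation operator in the number basis
x = (a + a.T) / np.sqrt(2)          # position operator, hbar = m = omega = 1
H0 = np.diag(n + 0.5)               # bare (harmonic) Hamiltonian
H = H0 + lam * np.linalg.matrix_power(x, 4)  # anharmonic oscillator
E = np.linalg.eigvalsh(H)

gap = E[1] - E[0]                   # physical frequency; the bare frequency is 1
# The interaction shifts ("renormalizes") the frequency upward: gap > 1.
```

The difference between `gap` and the bare frequency 1 is the 1+0-dimensional analogue of the mass renormalization counterterm; here it is finite and needs no limiting process.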

You also write:

Let us consider his statement: “Renormalization is a natural feature, and the fact that renormalization counter terms diverge in the ultraviolet is unavoidable”. It is rather strong to be true. An exaggeration without any proof.

But that renormalization is natural can be seen from the anharmonic oscillator, and a proof that renormalization counter terms diverge in 1+3D can be found in any QFT textbook, so that 't Hooft need not repeat it.

@RonMaimon: You wrote: "The mistake you think you identify, improperly taking radiative field effects into account, is as far as I can see simply nothing, because the near field ("self-induction", i.e. the electromagnetic mass) and the far-field (photons emitted) are not separated in diagrams in Feynman gauge. Neither "carries away energy momentum", because the calculations are for single particle self-mass, a particle moving without acceleration, so there is no radiation by symmetry."

We are speaking of different subjects. I speak on purpose of CED, where one calculates the self-force, not the self-energy. It is a calculable physical effect. The CED calculation can be carried out exactly, so no perturbative inaccuracy is present. As well, in CED one can explicitly distinguish the near field and the radiated field: the electric field $\mathbf{E}=\mathbf{E}_{near}+\mathbf{E}_{rad}$, and similarly for the magnetic field. In CED the "electromagnetic mass" $\delta m$ has the dimension of energy/$c^2$, but it is an electrostatic (Coulomb) energy whatever the particle velocity. Before taking the self-force into account, this energy "followed" the electron without problem, and afterwards it became an unnecessary burden. We try to take the radiation reaction into account not because we have problems with describing this self-energy, but because the mechanical equation lacks a small "radiative friction" force. It is the self-force that we calculate in CED, not the self-energy.

In QED it is the same, except that we work perturbatively and with potentials, so this physics is obscured; but it is similar: an external field cannot accelerate a super-heavy electron. You might think of a "particle moving without acceleration", but it is an illusion. Because as soon as you couple it to the electromagnetic degrees of freedom, the particle is not free anymore: it has the electromagnetic field in its equation even after renormalization. The mechanical equation is never "free" any more. What can be free is the center of mass of the whole system.

You might think that we cannot separate the radiated field from the near field in a general case, but I think it is possible. We have just to write different equations - each for each field.

Arnold, I understand your reasoning, but it is not what I am speaking of. Anharmonic oscillator energy/frequency is calculable from the parameters of the harmonic oscillator and the coupling constant in the anharmonic part. It is a regular calculation that does not change the original physical constants.

What I am speaking of is, for example, the following: in addition to the anharmonic part, we advance (by mistake) some other perturbation, say $V_{special}$, and calculate its effect perturbatively. We see that its corrections are bad, but after discarding them the results are good. What do we infer from such an experience? We conclude that $V_{special}$ is wrong. We should discard it from our equations, or not add it in the first place, if possible. This is one point of view, and it is a correct one.

Another way of proceeding is to say that we must also add $-V_{special}$ to the purely harmonic oscillator (a counter-term) and solve the corresponding equation exactly, in order to be able to subtract the perturbative corrections of $V_{special}$ in each order. We might say that the bare oscillator is the one that already contains $-V_{special}$. The result is equivalent to discarding: no trace of $V_{special}$ remains in the perturbative solution. But this point of view is wrong: it creates the impression that $V_{special}$ is present in the physics (since it is present in the bare oscillator and in the perturbation) whereas it is physically absent.

There is no point in preserving such a "calculation" with its "bare particle ideology" because, first, nothing is left of $V_{special}$; second, it forces us to do extra technical work to subtract the harmful contributions in each order; third, it misleads us about the physics.

In renormalizable theories $V_{special}$ may be simple, like a kinetic term, so we speak of mass renormalization; but generally it is so bad that we do not know what part should be subtracted, and the theory remains non-renormalizable.

''What I am speaking of is, for example,'' - But this is not what is done in QFT renormalization. One just does the same things that are done for anharmonic oscillators, but since there are now infinitely many of them, one must take an appropriate limit of theories with a finite running coupling. The running coupling is needed to make theories at slightly different cutoffs give only slightly different physical results. Any hocus-pocus beyond that is just poor presentation, not a relevant feature.

And it is irrelevant talking (to Ron Maimon) about CED in a context ('t Hooft) where only QED matters.

It is not irrelevant, but highly relevant. Tell me, Arnold, what do you think of the CED self-induction effect, without reference to QED. Isn't it a mistake? Let us finish with CED, where this mistake is evident.

@RonMaimon: "Can you tell me what sense QED can fail to make sense ...?"

Yes, I can. For that, let me invoke a correct CED as an example. As a good approximation to the correct CED, we can take a CED with $\dot{\mathbf{F}}_{ext}$ as a radiation reaction force. What does such a CED predict? The energy-momentum of the whole system is conserved (approximately with $\dot{\mathbf{F}}_{ext}$, but that is not essential at this moment). However, when we consider a pair of oppositely charged particles in a bound state, the radiated energy will be infinite in the end (a collapse). To improve the agreement with experiment we are obliged to use QM, which is a change of equations rather than a fitting of constants within CED. Similarly with QED: it does not describe all particles, and in this respect it fails. But as a model, it can be formulated correctly, without any bare stuff/counter-terms.

It seems your gripe with renormalization is purely about aesthetics. If one disregards aesthetic presumptions, I don't think there is anything logically wrong with the following minimalistic interpretation:

1) We have at hand some Lagrangian.

2) Together with an unambiguously defined procedure called "renormalization", we can produce an algorithm that eats finite inputs and spits out finite outputs (modulo the asymptotic nature of the perturbation series).

3) Such an algorithm agrees excellently with experiments.

@JiaYiyang: You have a completely wrong impression. I speak of physical effects to describe. Nothing aesthetic.

Let me be precise: we have at hand two Lagrangians, one for finding the field when the field source \( j_{ext}\) is given, \(\mathcal{L}_{field} = -aF^2 + j_{ext}\cdot A\), and another for a particle in an external (known) field \(A_{ext}\): \(\mathcal{L}_{particle} = mv^2/2 + j\cdot A_{ext}\) (a non-relativistic kinetic term for simplicity). How do we write down a Lagrangian when both the field and the particle variables are unknown? The mainstream answer is to use the interaction Lagrangian \(j\cdot A\) with both functions unknown. Analyzing the mechanical equation, one finds that the main contribution of the new interaction is a self-induction effect. It is not desirable, and its effects are subtracted. The latter operation can be implemented as an additional counter-term \(\mathcal{L}_{CT} = \delta m\, v^2/2\) to leave only the radiation reaction effect. So it is about how to describe physics. It is about how not to insert a physically wrong interaction.

I know that subtractions work, but it is not physics.

@ArnoldNeumaier: "... in QFT renormalization... one just does the same things that are done for an anharmonic oscillators, but since there are now infinitely many of them one must take an appropriate limit of theories with finite running coupling."

What exactly do you mean by "anharmonic oscillators" in QFT? It is new for me.

Concerning regular perturbative calculations: they are like Taylor series; there are no renormalizations there, just different approximations to the exact quantities. Everything is expanded in powers of a small parameter, everything including the normalizing constants of the eigenfunctions, so what? It is a calculation, and it is what we should practice. I have no objections to that.

What Arnold is talking about is the model of lattice scalar $\phi^4$ theory, where you make a lattice, and at each point you have a field potential which is an anharmonic oscillator, so that $V(\phi) = \phi^2 + \lambda \phi^4$. If you turn off the derivative terms, so that the particle does not propagate, this is the Einstein solid: there is one independent anharmonic oscillator at each lattice site. If you turn on the hopping, you get quartically interacting scalar field theory.

All field theory can be viewed as a collection of anharmonic oscillators in infinite dimensions. There is no surprise that the oscillation frequency of the single particle state is modified. This is old news.

An interacting field theory in the simplest conceivable case, where space consists of a single point (so spacetime is the real line), is identical to a single anharmonic oscillator. In this highly simplified theory you can understand the meaning of anything done in QFT without having to use more than well-understood and uncontroversial quantum mechanics. Then one can go to a finite lattice, which gives a collection of interacting anharmonic oscillators; still something understandable without getting any of the problems one has in a continuum QFT. Here one sees that, in order to get essentially the same physical results when halving the lattice spacing while keeping the lattice volume constant, the coefficients of the interaction must change systematically (and for $d>1$ in a diverging way as the lattice spacing goes to zero). This fully explains the regularization dependence of the couplings, without any reference to more than ordinary QM.

't Hooft assumes implicitly that his readers have successfully understood this at some point of their physics education, or at least accept the results that come from there.

For an anharmonic oscillator given by $H=\omega_0 a^*a + V(a^*+a)$, the measured ground frequency is not the bare frequency $\omega_0$ figuring in the Hamiltonian (which in spectroscopy is just a parameter fitted to the data, or computed from an underlying more detailed model) but the frequency $\omega$ obtained as a pole in the Green's function. The two are related by an equation computable in perturbation theory, leading to a relation that can be solved for $\omega_0=f(\omega)$. If $\omega$ is taken from measurements, this determines the value of the bare frequency.
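The inversion $\omega_0=f(\omega)$ can be illustrated numerically in a truncated oscillator basis (a sketch under assumptions: a quartic $V(a^*+a)=\lambda(a^*+a)^4$, an arbitrary coupling $\lambda=0.01$, and ad hoc basis size and bisection bounds):

```python
import numpy as np

def excitation_frequency(omega0, lam, n=40):
    """E1 - E0 for H = omega0*a'a + lam*(a + a')^4, truncated to n levels."""
    a = np.diag(np.sqrt(np.arange(1.0, n)), 1)    # annihilation operator
    x = a + a.T                                    # a + a†
    H = omega0 * (a.T @ a) + lam * np.linalg.matrix_power(x, 4)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

# "Measured" physical frequency and known anharmonic coupling:
omega_meas, lam = 1.0, 0.01

# Solve omega0 = f(omega_meas) by bisection (the gap grows with omega0):
lo, hi = 0.1, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if excitation_frequency(mid, lam) < omega_meas:
        lo = mid
    else:
        hi = mid
omega0 = 0.5 * (lo + hi)
# omega0 comes out below omega_meas: the bare frequency that reproduces
# the measured pole frequency differs from the measured frequency itself.
```

The diagonalization plays the role of the (here non-perturbative) calculation relating $\omega_0$ and $\omega$.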

This is the 1-point space version of saying that for a field theory given by a Lagrangian density, the measured mass is not the bare mass $m_0$  figuring in the action (which in QFT is just a parameter fitted to the data, or computed from an underlying more detailed model) but the mass $m$ obtained as a pole in the Green's function. For each value of the cutoff $\Lambda$ (needed to do perturbation theory),  the two are related by an equation computable in perturbation theory that can be solved for $m_0=f(\Lambda,m)$. If $m$ is taken from measurements, this determines the value of the bare mass as a function of the cutoff. Since the cutoff is unphysical, a physically correct theory must take the limit $\Lambda\to\infty$. This is called renormalized perturbation theory.  (It is completely irrelevant for the soundness of this procedure that $m_0$ has no limit when the cutoff is removed; what counts is that observable quantities have a limit.)
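The solving-for-the-bare-parameter step can be sketched with a schematic cutoff relation (the quadratic form $m^2 = m_0^2 + c\,\lambda\,\Lambda^2$ and the coefficient $c$ are made up for illustration; the actual relation and its sign depend on the theory and the regularization):

```python
import math

def bare_mass_squared(m, lam, Lam, c=1.0 / (16 * math.pi**2)):
    """Solve the schematic relation m^2 = m0^2 + c*lam*Lam^2 for m0^2,
    given the physical mass m at cutoff Lam. Coefficient c is illustrative."""
    return m**2 - c * lam * Lam**2

m, lam = 1.0, 0.5
# The bare parameter runs with the cutoff and has no limit as Lam grows,
# while the physical mass stays fixed at m by construction:
running = [bare_mass_squared(m, lam, Lam) for Lam in (10.0, 100.0, 1000.0)]
```

Only the cutoff-dependent bare parameter diverges; the observable $m$ is held fixed at every cutoff, which is the content of the statement that observable quantities have a limit.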

For QED handled in this way, everything fits the data to many more digits than any model in spectroscopy. We therefore conclude that QED (in its present, renormalized form - completely independent of CED, which only served as motivation to define QED) is a theory much more trustworthy than any spectroscopic model.

You may object that not everything comes out finite in QED due to infrared effects. But this is only true in calculations that treat the asymptotic electrons as being stripped of their electromagnetic fields, which is an unphysical procedure. If one instead takes S-matrix elements between dressed electron states, which have a coherent electromagnetic field, then they come out finite.

Thus if everything is done in a physically correct way, everything comes out finite and accurate (in perturbation theory, at all energies currently accessible to mankind), without QED needing any change at all. 

Only your understanding needs reformulation instead of renormalization.

@ArnoldNeumaier: You wrote: "An interacting field theory in the simplest conceivable case where space consists of a single point, so spacetime is the real line is identical to a single anharmonic oscillator. In this highly simplified theory you can understand the meaning of anything done in QFT without having to use more than well-understood and uncontroversial quantum mechanics.

Then one can go to a finite lattice, which gives a collection of interacting anharmonic oscillators; still something understandable without getting any of the problems one has in a continuum QFT. Here one sees that, in order to get essentially the same physical results when halving the lattice spacing while keeping the lattice volume constant,  the coefficients of the interaction must change systematically (and for d>1 in a diverging way as the lattice spacing goes to zero). This fully explains the regularization dependence of the couplings, without any reference to more than ordinary QM."

Thank you, Arnold, for your convincing explication. (I guess you are speaking of \(\varphi ^4\) theory, as Ron Maimon assumed.)

Indeed, in order to get essentially the same physical results when ... , the coefficients of the interaction must change. This is a killer argument.

Before this, I would read in textbooks something like: in order to get physical results when the cutoff tends to infinity, the bare mass and charge must change. So we untie our hands and change them as we want.

Now I go farther. I write down an arbitrary function $f(x)$ and say that in order to get a desirable $g(y)$ from it, I must introduce a counter-term (or a bare function, whatever) $h_{CT}=g(y)-f(x)$ and add it to $f(x)$.

It works without fail, but is it a calculation? Isn't it manipulation of the calculation results? Do you think an accountant could untie his hands to that extent?

My strong position is the following: if you want to obtain an agreement with something, you change the equations, not the results of calculations. The changed results belong to different equations, find them, do not stick to the modification prescription.

Returning to $\varphi ^4$, you took a physically wrong interaction and you noticed that, to get something reasonable, you had to modify the results by adjusting constants, functions, whatever. Then you say: "You see, this is how calculation must be done - by modifying the results in a certain way."

Sorry Arnold, if I, Vladimir Kalitvianski, presented an anomalous magnetic moment calculation in my pet theory and it were quite different from the experimental data, no one would allow me to modify my result to get an excellent agreement. Everybody would say: your theory is nuts.

Your strong position is not really strong, only rigid - so rigid that arguing with you is like the work of Sisyphus, proved to be in vain whenever one thinks one has just succeeded.

"I, Vladimir Kalitvianski, would present an anomalous magnetic moment calculation in my pet theory and it would be quite different from the experimental data, no one would allow me to modify my result to have an excellent agreement. Everybody would say - your theory is nuts.'' - True. But if you presented an anomalous magnetic moment calculation where you modified your result only to match a given mass, and miraculously this matching resulted in an anomalous magnetic moment correct to 9 significant digits, it would be viewed as a reproduction of known results in a different way, and would give credit to your pet theory. Unfortunately, your pet theory cannot do that. Only this is why your theory is nuts.

But QED can - it takes from experiment only four numbers - Planck's constant, the speed of light, the electron charge, and the electron mass - adjusts its parameters accordingly, then calculates lots of other predictions that can be compared with experiment, and - lo and behold - it reaches phenomenal agreement. This is why QED is highly respected by everyone but yourself.

It is like this in any scientific theory: one adjusts whatever freedom one has in one's theoretical model to a limited amount of data. Agreement of the theory with these data is then not very surprising (unless the number of parameters used for the fit is far smaller than the number of data fitted), but if it also accounts for lots of experimental data not used for the data fit, it is highly predictive and hence useful.

The Sisyphus work is mine: I have asked all of you to express yourselves on the CED mistake so many times, and nobody has responded so far.

As well, I have written many times in many places that the final QED results are good and should serve to reformulate the standard QED in better terms. You still do not understand me, but think you are dealing with a stupid geek.

Yes, my pet theory is nuts and I present it as a toy, not as a theory. I am not going to untie my hands to convert one function into another for the sake of agreement, it is not my way.

 You write:

''My strong position is the following: if you want to obtain an agreement with something, you change the equations, not the results of calculations. The changed results belong to different equations, find them, do not stick to the modification prescription.''

For an anharmonic oscillator, to obtain agreement with the measured frequency $\omega$, according to your strong position you change the equation for the Hamiltonian, i.e., $\omega_0$, not the result of the calculation. But you need the calculation to know how to change it, since you need to ensure that the correct frequency $\omega$ results. This gives $\omega_0=f(\omega)$. There is no other way to find the right equation.

For a lattice at scale $h$, you change the bare mass $m_0$ of the action, not the result of the calculation. But you need the calculation to know how to change it, since you need to ensure that the correct mass $m$ results. There is no other way to find the right action. Since the calculated $m$ depends on both $h$ and $m_0$, the action will depend through $m_0$ on the mass $m$ and on the lattice spacing $h$ used. Thus the running bare mass follows from your strong position!

@ArnoldNeumaier: The freedom you use is due to introducing a bare mass value.

As I said previously, the harmonic oscillator contains a physical mass $m$ and a physical strength $k$ in the oscillator Hamiltonian $H_{osc}=p^2/2m + kx^2$. An anharmonic part $\alpha x^3$ also contains a physically measurable constant $\alpha$; otherwise you would not know that you deal with an anharmonic oscillator. The rest is calculable from these input data. The exact $\omega$ is different from the oscillator one $\omega_0$, so what? There is no freedom here to use to fit something. For low-amplitude oscillations, you deal mostly with the harmonic oscillator properties; for high-amplitude oscillations, the anharmonicity becomes essential. Yes, you have to perform calculations assuming there is nothing but anharmonicity. These are assumptions, physical assumptions, about what you deal with. They are based on the previous experience, when an oscillator equation was tried against the data. Once found, $m$ and $k$ do not change. Once found, $\alpha$ does not change.

If you want to start from an arbitrary Hamiltonian with unknown parameters, it is your right. I start from what was established for sure. I advance step by step, from CM to QM, from QM to QED, and every time I analyse my assumptions in order not to make an error in modeling. I change equations, not constants in them. And I see I can make mistakes with extrapolating the form of interaction Hamiltonian in a wrong way, so I speak of these errors. We make errors, Arnold. Interaction that modifies well established constants is erroneous. That is why we replace wrong results with good ones. We return to the good ones (mass and charge) with renormalization.

There is no insight in saying, ah, maybe the original constants are bare, but our interaction $\alpha x^3 + V_{special}$ is certainly correct. It is a rug to hide the shit.

Harmonic oscillators are used for different purposes. I am describing the use in spectroscopy. There oscillators are used to predict spectra from effective Hamiltonians. Nothing is known except the number of oscillators (stretches and bends), and that an arbitrary system of oscillators is described by a Hamiltonian in terms of creation and annihilation operators. Thus the setting I discussed above is unrealistic only in that I reduced it for simplicity to a single oscillator and a single parameter. In a more realistic setting you have perhaps 5 coupling constants (among them the bare frequency) in an ansatz for the Hamiltonian to describe a spectrum of perhaps 100 spectral lines (frequencies). One can determine the 5 parameters by posing 5 renormalization conditions matching the lowest 5 frequencies. This determines the coupling constants, in particular the bare frequency, which can be quite different from the ground state frequency. Once the coupling constants are known, one can use them to predict the other 95 frequencies.
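A minimal sketch of this fitting logic, with a two-parameter level formula $E(n)=w(n+\tfrac12)-w_x(n+\tfrac12)^2$ standing in for the 5-parameter effective Hamiltonian (the values $w=2000$, $w_x=15$ are invented purely for illustration, in arbitrary units):

```python
# Transition n -> n+1 for E(n) = w*(n+1/2) - wx*(n+1/2)^2:
# nu(n) = E(n+1) - E(n) = w - 2*wx*(n+1).
def transition(w, wx, n):
    return w - 2.0 * wx * (n + 1)

# Pretend these ten lines came from experiment (generated here from
# hypothetical parameters w=2000, wx=15):
lines = [transition(2000.0, 15.0, n) for n in range(10)]

# Fit the two parameters to the lowest two lines only ...
wx_fit = (lines[0] - lines[1]) / 2.0
w_fit = lines[0] + 2.0 * wx_fit

# ... then predict the remaining eight lines from the fitted constants.
predicted = [transition(w_fit, wx_fit, n) for n in range(2, 10)]
```

Matching a few low lines determines the constants; the predictive content lies in the lines not used for the fit, exactly as in the 5-out-of-100 case described above.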

The bare constants mean nothing; in particular, the bare frequency is just the frequency of a harmonic oscillator used to coordinatize the quantized symplectic space. Choose a different symplectic coordinate system related by a nonlinear canonical transformation and you get a  very different bare frequency for the very same physical system. (It is precisely this property that allows one to do spectroscopic fits with fairly simple Hamiltonians: The fit picks the particular coordinate system in which the Hamiltonian ansatz works best.)

The bare parameters generally change if one changes the model Hamiltonian but fits the same spectrum. If derived from ab initio methods (which sometimes is possible, as in an effective field theory), the bare constants are determined numerically, but still have no physical interpretation - they are just bare parameters that serve to explain the spectrum.

@ArnoldNeumaier: I agree with you here. No objections; on the contrary. But it concerns assumptions about a particular description (spectroscopy). And there, they are normalizing conditions, not renormalizing ones, as I understand it.

Another question is whether we can make errors in modeling or not. Are we always right?

QED is much more like my spectroscopic situation than your spring with known mass and stiffness coefficients. In the latter case you are entitled to say that there are definite parameters with a physical meaning, since in your case these can be measured independently. But in the former case, you don't know precisely what is oscillating, and even less do you know how to assign a mass and a stiffness that you could reliably measure. What is the mass of a stretching bond or a bending angle? That there is no independent way to measure a parameter is what makes these parameters bare, not an arbitrary declaration of bareness or physicalness.

In QED, you also do not know what is oscillating. The field operators in the Hamiltonian have no physical meaning; e.g., the electron field describes electrons with charge but without an electromagnetic field. Such things do not exist in a way that one could measure their properties independently. This is why they are called bare. So something is oscillating, with some frequency that cannot be known a priori and cannot be assigned independently.

Moreover, what is taken as oscillating is a matter of personal choice, in the same way as it is a personal choice to choose a coordinate system in which to describe something. The difference is that this coordinate system is in an infinite-dimensional space of fields. Any coordinate system that preserves the free field commutators is admissible and gives one view of the same physical situation. But of course, as Newton's physics becomes simple only in selected coordinate systems, so a spectroscopic Hamiltonian or a quantum field Lagrangian becomes simple only in selected coordinate systems.

This (and only this) allows one to hope that there may be a special coordinate system in which the Lagrangian is fairly simple. We therefore make an anharmonic ansatz, hoping that it gives approximate answers in some fortunate coordinate system of this infinite-dimensional space - a coordinate system that we never know, but where we therefore also do not know what it describes. These coordinates are called the bare fields.

Now let us try to see how far it carries us. To make the ansatz sensible we borrow from classical electrodynamics (which should be some sort of macroscopic limit) a few principles; we cannot borrow everything, as CED has no fermions. But we borrow the gauge principle and the form of the electromagnetic part of the Lagrangian, and we add all covariant and gauge invariant interactions up to the smallest nontrivial degree 3 (degree 2 implies a noninteracting theory), giving two coupling constants $m_0$ and $e_0$. (This corresponds to making the simplest anharmonic spectroscopic ansatz.) In order to make the local QED Lagrangian well-defined in the quantum case, where fields are only operator-valued distributions, we regularize the Lagrangian by making it slightly nonlocal, introducing a cutoff scale $\Lambda$ to control the amount of nonlocality introduced. (Or we discretize and go to a lattice; this corresponds to taking $\Lambda=h^{-1}$.)

Thus we have a family of theories parameterized by three constants $(m_0,e_0,\Lambda)$. This is a small number of parameters compared to the 5 parameters in an effective Hamiltonian for a spectrum of 100 lines; but still we can hope for at least approximate agreement with reality. 

To test for this we evaluate a number of key observables that can be matched with experiment. Among them are the physical mass $m$ (the smallest pole of the Green's function), the physical charge $e$, quantities like the mass and lifetime of positronium, spectral lines of a bound electron in a central field, form factors and numbers computable from them (like the anomalous magnetic moment), and a host of other numbers. At least some of them should be reasonably matched if our 3-parameter theory has value.

We begin by fitting parameters to mass and charge, which gives two independent relations for our 3-parameter fitting problem, and find from our computations formulas $m_0=M(m,e,\Lambda)$ and $e_0=E(m,e,\Lambda)$. Plugging these into our calculations for the other quantities we find formulas of the form $F(m,e,\Lambda)$ for each of them. Comparing these to experiment, it turns out that they match better and better the larger $\Lambda$ is taken, and that we may in fact take the limit $\Lambda\to\infty$ without deterioration of the quality of the approximation. However, and this is the truly remarkable thing, the results match for sufficiently large $\Lambda$ all known experimental data in the QED domain to an accuracy higher than known from any other application in physics! Which value of $\Lambda$ is taken is immaterial as long as it is large enough, since even the limit $\Lambda\to\infty$ works.

Thus by any reasonable standard our 3-parameter theory must be considered to be a phenomenal success, proving that the approach is at least as correct and true to reality as Newton's laws. 

Normalizing and renormalizing is the same, except one prefers the latter term in QFT. (However, Scharf uses normalization rather than renormalization in his book about the causal approach.)

Of course we can make errors in modeling. But the errors in modeling QED are relative errors of the order of $10^{-12}$ only, and have nothing to do with the renormalization issue.

@ArnoldNeumaier: In QED we know what is oscillating. If you take "Quantum Electrodynamics" by Berestetskii, Lifshitz and Pitaevskii, you will find 600 or more pages of good first-order results. They use the phenomenological mass and charge of the electron. The problems start when we calculate the self-action (self-induction), not before. It is at this moment that we "appeal" to bare parameters. We introduce them ourselves.

I know well that renormalization works in QED, but this is achieved at too high a price. For example, you say we don't know the bare parameters, but the interaction term with self-induction is correct. We borrow the interaction Lagrangian form $j\cdot A$ from a clearly failed theory and encounter the same difficulties - unnecessary corrections to constants. And instead of recognizing the wrongness of $j\cdot A$, we blame the original good equations (see those 600 pages) for containing, in fact, bare parameters. It is cheating and fooling oneself.

You say the equation without self-action is for a bare particle because physical particles self-induct. Very clever! But maybe those equations on those 600 pages are about physical particles that do not self-induct. Think of this possibility.

I agree that the real electron permanently interacts with the electromagnetic degrees of freedom, and I am for such a description. My electronium and toy models are just about that. Look how they interact, how they are coupled.

@ArnoldNeumaier: I disagree. The self-induction phenomenon, imposed with $j\cdot A$, is a blunder! All numerical and conceptual problems are due to presenting this blunder as a correct guess. That is why we invent stories about "bare" particles.

First order (tree) results don't tell what is oscillating in the quantum theory; they only tell what is oscillating in the semiclassical approximation. They don't give a recipe for calculating the parameters, since anything done on this level neglects the dependence on $\hbar$. But in QFT everything is extremely sensitive to $\hbar$; this is precisely the reason why everything must be renormalized.

Again one can see the effect of ignoring the $\hbar$-dependence from spectroscopy. The semiclassical tree-level approximation has a spectrum consisting of the ground frequencies and their overtones and combination tones, as given by the Fourier transform of nonlinear functions of the phase space variables (corresponding to the transformation to action and angle variables). But the loop corrections change the ground frequency (and the overtones etc.).

Thus we can learn from the anharmonic oscillator both that one needs no renormalization at the tree level and that renormalization is necessary once one includes loop corrections.

Everything general about quantization can be learnt from the anharmonic oscillator, and the fields aspects are apparent there, too, if one looks at it in the Heisenberg picture.

QED borrows nothing from CED, except Poincare invariance and the gauge principle. Apart from that, we just add low-order terms compatible with these principles. The only cubic interaction you can form is the cubic interaction term of QED, without any reference to CED. After having it, one can try to give the term a meaning, and one finds that it is a current coupling. So what? Nothing is wrong with it in QED, as my story shows.

''We borrow the interaction Lagrangian form $j\cdot A$ from a clearly failed theory and encounter the same difficulties - unnecessary corrections to constants.''

In my story there are no difficulties, no corrections to constants at all. QED with cutoff has three constants, which are fitted to two observations, leaving a free parameter that can safely be put to an arbitrary large value or (in the customary limit) to infinity.

The 3-parameter model discussed there is good from the start to the end, and doesn't need any corrections anywhere. It is just like any other model in physics - you have a model with a few parameters, and you fit them to explain lots of data. Whether or not one calls the parameters or the particle or the fields bare (or whether one calls the ansatz a blunder and an invention, and the subsequent calculation cheating and fooling oneself) is completely irrelevant; the model is a model and can be used like any other model.

Whatever model you think should ultimately replace QED cannot be different on this level; it will still need to have a few parameters, and the only way to find their values is to do calculations that express measurable things in terms of these and then solving for them.

I had looked at your models (read them in detail a long time ago) and found them not worth much discussion. You handle a non-problem (arising from thinking about QED from an inappropriate classical perspective) at the cost of giving up the gauge principle and Poincare invariance, which are both essential for deriving the extremely successful QED as we have it.

@ArnoldNeumaier: I did not know that the tree-level results are insensitive to $\hbar$, but I do not want to deviate our discussion.

We borrow Poincaré invariance and gauge invariance from CED, but CED is inexact. Why do you call them principles then? I am not against them, not at all; I am just underlining that we do not have a correct CED from which to borrow "principles". As well, the interaction term $j\cdot A$ is similar to CED's. Coincidence? No. You may say it follows from principles, but what principles? You don't yet have a theory to speak of principles. You try, and you immediately see several problems due to this interaction.

Don't tell me that this interaction is unique. First, even in the standard QED it is accompanied by counter-terms. They are also relativistic- and gauge-invariant, but they have a different form.

Next, if you look at the CED radiation reaction term (what is left after mass renormalization), it is also relativistic- and gauge-invariant, but still unphysical. So be careful with "principles". Principles do not think for us, physicists. It is our duty to build theories. In CED we failed to conserve the energy-momentum with our "principles".

You blame my models for lacking the Poincare and gauge principles. But you yourself invent and use toy models to demonstrate your understanding, and those models also lack those principles.

What you insist on is a physics of bare self-inducting and self-screening particles. You are a bare-particle physicist. However, a corrected interaction Lagrangian $\mathcal{L}_{int}=j\cdot A + \mathcal{L}_{CT}$ produces neither self-induction nor self-screening. There are no such "effects" in reality.

Tree level results are even independent of $\hbar$ (if one scales by appropriate powers of $\hbar$ the fields [and, see the discussion below, the results, to make the units correct]) as they are all derivable from the classical action. The $\hbar$ expansion of quantum results is just the loop expansion - each loop produces a power $\hbar$.
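The statement that each loop order brings a power of $\hbar$ can be seen even in a zero-dimensional toy "path integral" $\int dx\, e^{-S(x)/\hbar}$ with $S=x^2/2+gx^4$, where first-order perturbation theory gives $\langle x^2\rangle=\hbar(1-12g\hbar+\dots)$ (a sketch; the coupling, grid, and integration range are arbitrary numerical choices):

```python
import numpy as np

def x2_expectation(hbar, g):
    """<x^2> in the 0-dimensional "path integral" with S = x^2/2 + g*x^4,
    computed by a plain Riemann sum."""
    x = np.linspace(-6.0, 6.0, 40001)
    w = np.exp(-(x**2 / 2.0 + g * x**4) / hbar)
    return float((x**2 * w).sum() / w.sum())

g, hbar = 0.1, 0.01
# Leading correction to the classical/free value hbar is O(g*hbar^2):
# the coefficient extracted below should be close to -12*g.
ratio = (x2_expectation(hbar, g) / hbar - 1.0) / hbar
```

Each further order of the coupling arrives with an extra power of $\hbar$, which is the zero-dimensional shadow of the loop expansion.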

In my story there are no counterterms. Counterterms are only needed if you do perturbation theory to calculate results. In a nonperturbative treatment, where we simply assume that God solves the equations for us, there are just the coupling constants $m_0$ and $e_0$ of the idealized theory and the parameter $\Lambda$ from the regularization scheme. Three finite parameters, no counterterms, no infinities, no dubious fudging anywhere. And talking about bare stuff is only words to connect to the literature; not a single formula depends on what you call this stuff.

My CED is the consistent classical electrodynamics of extended bodies, not the caricature with point particles that you call CED. The latter is pure fiction and inconsistent. This is why I am not interested in the difficulties you describe; they concern length scales where classical thinking is already physically meaningless. 

QED is unique once you assume Poincaré invariance, gauge invariance, and at most cubic interactions. The first two assumptions are valid macroscopically, where the Maxwell equations have no problems. So assuming them is sound. The third assumption is a first guess (''minimal substitution'') and could be repaired in principle by adding higher order terms. But it turned out that they are not needed at currently accessible accuracies, as agreement with experiment is essentially perfect. This also shows that the assumptions are fully appropriate, more than in any other theory we have. It couldn't be better.

I am demonstrating with toy models features illustrating an established, very successful theory, making it more understandable. (At least for others. Knowing from past discussions that you are unwilling or unable to learn, I write these comments mainly for others who can see. But I still have a slight hope that you might begin one day to understand....)

You demonstrate with toy models features - of a theory for which there is no hint of existence except your imagination - that show the absence of a problem that exists only if one insists on the literal existence of point particles, which makes no sense in quantum field theory.

And please stop pinging me when you hit the reply link to one of my comments; it only produces annoying extra notifications each time you edit your comment. I get notified anyway of every reply to one of my contributions.

OK, sorry for pinging.

Two questions remain. You say a point particle CED has no meaning. Can you prove it? I mean, why do you discard any possibility of building a correct CED with energy-momentum conservation? The radiation reaction exists physically for physical electrons, in experiments.

Another question: can you prove that in CED the interaction $j\cdot A$ does not contain a self-induction effect? In other words, how are point particles "irrelevant" to QED?

A point particle CED has no meaning: No physical meaning because it is well-known that classical physics stops being valid at small length scales, so classical bodies of tiny extent exist only in the imagination. No mathematical meaning because it leads to well-known inconsistencies.

Physical electrons may be regarded as classical particles only to an uncertainty of the order of the Compton length; so they don't qualify as classical point particles. 

Point particles are irrelevant to QED because it is a field theory. In spite of the established terminology, the particle picture makes sense only at length scales of the Compton length and larger, where the fermionic analogue of geometric optics applies. 

A point particle mechanical equation with the force derivative $\dot{\mathbf{F}}_{ext}$ makes perfect sense. It describes the classical radiation and takes into account the radiation reaction force to a great extent. It is not as exact as we wanted, but it is sufficiently exact as a model and it is used. I believe we can build a correct CED for a point particle, i.e., CED with exact conservation laws.

About QED: consider a one electron state and analyse a one-loop correction. What "physics" does this loop describe? Not self-induction?

"Tree level results are even independent of $\hbar$". Probably we speak of different things because a Compton scattering depends on $\hbar$, a pair creation phenomenon does too, etc. Everything was first calculated on a tree level.

To get $\hbar$ independence at tree level, one must of course scale not only the fields but also the coupling constants and the results by the power of $\hbar$ needed to get the units correct. Apart from that, everything is at tree level independent of $\hbar$.  
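
The statement "each loop produces a power of $\hbar$" follows from standard power counting (a textbook sketch, not specific to this thread): with the path-integral weight $e^{iS/\hbar}$, each propagator carries a factor $\hbar$ and each vertex a factor $\hbar^{-1}$, so for a connected diagram with $I$ internal lines, $V$ vertices and $L$ independent loops,

```latex
L = I - V + 1, \qquad \text{amplitude} \;\propto\; \hbar^{\,I-V} = \hbar^{\,L-1},
```

so after an overall rescaling of the result, tree diagrams ($L=0$) are $\hbar$-independent and each loop adds exactly one power of $\hbar$.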

For example, the energy levels of an anharmonic system with bare ground frequencies $\omega_i$ are given at tree level by $E/\hbar=\sum_i n_i \omega_i$, where $n_i$ is the occupation number of the frequency. Only the bare couplings appear, and there is no dependence on $\hbar$ on the right-hand side. This is why one can regard the bare couplings as physical at tree level.

But only at tree level! The scaled quantities such as $E/\hbar$ become dependent on $\hbar$ when loop corrections are included. These loop corrections also include factors that may become large when the number of ground frequencies becomes large (i.e., in QFT when the cutoff is increased), which makes them sensitive to the value of the cutoff if the bare parameters are held fixed. In the continuum limit (in a bare QFT without a cutoff) one has infinitely many frequencies, and in 4D the corrections become infinite if the bare parameters are taken independent of the cutoff.
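
This tree-vs-loop distinction can be checked numerically on a single anharmonic oscillator; a minimal sketch (my own toy illustration, parameter values arbitrary; units $\hbar=m=1$): at $\lambda=0$ the level spacing is exactly the bare frequency $\omega$, and the $x^4$ interaction shifts it by loop-type corrections.

```python
import numpy as np

def anharmonic_levels(omega=1.0, lam=0.05, nbasis=60, nlevels=4):
    """Low-lying levels of H = p^2/2 + omega^2 x^2/2 + lam * x^4
    (hbar = m = 1), diagonalized in a truncated harmonic-oscillator basis."""
    n = np.arange(nbasis)
    a = np.diag(np.sqrt(n[1:]), 1)          # lowering operator: a|n> = sqrt(n)|n-1>
    x = (a + a.T) / np.sqrt(2.0 * omega)    # position operator in this basis
    h = omega * (a.T @ a + 0.5 * np.eye(nbasis)) + lam * np.linalg.matrix_power(x, 4)
    return np.sort(np.linalg.eigvalsh(h))[:nlevels]

# tree-level spacing is exactly omega; the interacting gap differs at O(lam)
levels = anharmonic_levels()
print(levels[1] - levels[0])
```

At first order the gap shifts by roughly $3\lambda/\omega^2$, the analogue of the loop correction discussed above.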

I felt we were meaning different things.

Even though you divide $E$ by $\hbar$, the fact of quantization of energy levels is due to QM, not due to classical mechanics.

The physics is only in the final results when all corrections at all orders are summed for a given process (defined by the collection of in- and out- lines). Only these final results are the content of the theory. Everything else is subjective and formalism-dependent bla bla. Indeed, since $\hbar$ is a universal constant, only the fully summed series makes physical sense if one uses highest standards of rigor. But at that level, QED is not yet well-defined. Fortunately, $\hbar$ is small (in the units where it reduces to the fine structure constant $\alpha$), and one can consider the theory at a given order and get good results. But again, only the partially summed series has a physical meaning. 

The term resulting at a given order of $\hbar$ (or $\alpha$) tells us how observable results depend on $\hbar$ (or $\alpha$), and this is their only meaning. But since we cannot vary  $\hbar$ (or $\alpha$) in an experiment, this dependence is purely fictional. Calling a fully summed 1-loop correction vacuum polarization or self-induction or anything else doesn't make it more physical. It is just a piece for the perturbative calculation of a physical quantity; whatever naming it gets just helps to remember the different low order families of diagrams. When one computes a desired effect on a lattice rather than perturbatively, one cannot separate the 1-loop effect from the tree effect but should still get essentially the same results if the lattice size is chosen large enough. How can it be real physics?

Treating the pictorial Feynman diagrams as if they describe something real is perhaps a slight teaching advantage when you begin to learn the formalism (and made the subject popular). But later it becomes a handicap that must be unlearnt.

I could not agree with you more, Arnold! Then let us consider a hypothetical situation where we managed to calculate exactly all loop contributions to a one-electron state. Formally it is described with a $\tilde\Sigma$ term in the Dirac equation. The electron is no longer free, but coupled to the electromagnetic degrees of freedom. Such a sigma-term in CED would describe the radiation reaction force exactly.

But before renormalization, it contains a self-induction force too, both in CED and in QED. You cannot get rid of it even for bare particles. For bare particles the bare mass "neutralizes" the self-induction effect numerically, but it is calculable as a self-induction effect, that's the problem.

Of course it is a quantum result; otherwise there would be no number states at all; I didn't claim otherwise.

It is called a semiclassical result. It is what you get if you apply the rules to extract physical information from the effective action to the classical action, which is the zero-order approximation in $\hbar$.

All information is taken from the classical action (and some quantum theory). This makes part of quantum theory interpretable in classical terms. In particular, in the case of the energy levels, the formula has a clear classical interpretation: it is precisely what you get if you look, in its Fourier transform, at the classical frequencies of a generic nonlinear function of a harmonic function. And this is not an accident. There is an extensive literature on many aspects of semiclassical computations.

Yes, semiclassical is classical with some quantisation conditions. And these conditions contain $\hbar$, that's the origin of my surprise. But we converge, I see.

What happened before the renormalization is completely irrelevant. It is bare of physical meaning in a double sense.

Complete information about a physical electron is obtainable through an exact summation at all orders of the renormalized 4-fermion leg diagrams of QED. It results in precisely what is encoded in the electron form factors. There are two of these: the electric form factor and the magnetic form factor.

For example, the exact electric form factor allows one (in principle) to calculate the exact Lamb shift, and the exact magnetic form factor allows one (in principle) to calculate the exact anomalous magnetic moment. In addition, together they tell everything about the response of the electron to an arbitrary external electromagnetic field.

All this is computable in terms of a modified Dirac equation that features both form factors as part of the equations of motion. This equation is in the literature; see the discussion in the entry ''Are electrons pointlike/structureless?'' of Chapter B2: Photons and Electrons of my theoretical physics FAQ.

Unfortunately, the expressions for the form factors are available only approximately, through perturbative calculations. At zero loop, the form factors are trivial and one obtains just the Dirac equation. For 1-loop results, see, e.g., Chapter 6.2 of Peskin & Schroeder. Of course, the results agree fairly well with experiment. The highly accurate computations of the anomalous magnetic moment (matching the experimental value to 12 digits of relative accuracy) depend on very complicated higher order calculations. 

"What happened before the renormalization is completely irrelevant. It is bare of physical meaning in a double sense."

Then there is nothing to discuss with you.

Then there is nothing to discuss with you.

But there is a lot to be learnt.

Before renormalization, one only has conundrums and seeming problems or inconsistencies. It is an endless time sink and groping in the dark.

After renormalization, one can read off (in principle) everything in QED, including all the things you are interested in. One can get complete dynamics for dressed objects behaving physically in every respect, and in excellent quantitative agreement with experiment.

And this with a model having just three adjustable parameters $m_0$, $e_0$, and $\Lambda$. One of them, $\Lambda$, can be moved to infinity (which is the usual, though somewhat counterintuitive, procedure, as then the other two parameters also become infinite), reducing the true number of parameters to two only.

Alternatively, one can select $\Lambda$ arbitrarily but large enough, and get a family of completely finite theories (one for each $\Lambda$) with correctly adjusted parameters $m_0(m,e,\Lambda)$ and $e_0(m,e,\Lambda)$ that reproduce exactly the mass $m$ and charge $e$ of the electron and nearly exactly - up to $O(\Lambda^{-1})$ - all other experimental results.
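
The $\Lambda$-dependence of these adjusted bare parameters can be written down explicitly at leading-log one-loop order; a rough sketch (standard one-loop QED formulas, accurate only to leading logarithms, so purely illustrative):

```python
import math

ALPHA = 1.0 / 137.035999   # physical fine-structure constant at the electron scale
M_E = 0.000511             # physical electron mass in GeV

def alpha_bare(Lam, alpha=ALPHA, m=M_E):
    """One-loop bare coupling alpha_0(Lambda), adjusted so the renormalized
    coupling at the electron scale comes out equal to alpha."""
    return alpha / (1.0 - (2.0 * alpha / (3.0 * math.pi)) * math.log(Lam / m))

def m_bare(Lam, alpha=ALPHA, m=M_E):
    """One-loop bare mass m_0(Lambda), adjusted to reproduce the physical mass m."""
    return m * (1.0 - (3.0 * alpha / (4.0 * math.pi)) * math.log(Lam**2 / m**2))

for Lam in (1.0, 1.0e3, 1.0e16):   # cutoff in GeV
    print(Lam, alpha_bare(Lam), m_bare(Lam))
```

The bare charge grows and the bare mass shrinks only logarithmically with $\Lambda$, which is why the adjustment works for any large but finite cutoff; the denominator of $\alpha_0$ also exhibits the Landau pole at astronomically large $\Lambda$, the well-known continuum-limit obstruction for pure QED.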

Nowhere else in physics does one have such a good theoretical description of so large a class of effects.

Arnold, the question is simple: whether or not we insert a wrong interaction into good equations. You say the interaction is certainly good, but the parameters of the theory are uncertain (?!). I say the interaction is bad (wrong physical effects are added) and we remove the wrong part by subtractions. You refuse to discuss the physics of the wrong interaction. You say it is irrelevant. OK, let us stop here.

QED is just a model with three parameters, whose resulting deductions agree in every respect extremely well with reality when the three parameters are correctly chosen. This proves that the model is right.

All discussion about trying to give classical labels (interaction, self-induction, etc.) to pieces of the whole is arbitrary (i.e., depending on how precisely it is done) and hence unphysical.

There is no ''interaction'' in a physical sense - this also belongs to the pre-renormalization stuff. Given in QED is a Lagrangian density with three parameters. This Lagrangian density can be split in many, many ways into a free part and a corresponding interaction. This shows that ''the interaction'' is a meaningless concept, as it depends on who is splitting the Lagrangian how. In perturbative QFT one can split naively, taking the cubic part as the interaction, or with counterterms, adding some quadratic parts, too. Any arbitrary choice of counterterms defines an interaction - which one is the physically correct one? Neither! In lattice regularization, there is not even a splitting!

First, this proves that the final results are valuable.

Next, your "arbitrary splitting" only "works" in your framework - with additional arbitrary  parameters. But you forget where the form of equations comes from. It comes from good physics before introducing an erroneus interaction. This physics made sense before. Do not fool yourself. Otherwise you would not even know what kind of equations to write. But let us stop here. We do not converge.

The form of the Lagrangian density comes solely from Poincaré invariance, gauge invariance, and simplicity. All physics is mathematically derived from this, and it makes perfect sense, as the agreement with experiment shows.

On the other hand, any interpretation of the QED Lagrangian density not justified by the results obtained from it is up to the reader; the mathematics and hence the physics does not depend on it in the slightest way. 

Introducing an interaction into the interpretation would presuppose introducing an unphysical notion of something noninteracting to which interactions are added. This something does not exist. Thus the interaction you introduce into the interpretation is erroneous. Do not fool yourself - it is your error, not an error in QED.

Our views do not converge, and never will - as long as you insist on an interpretation of the terms in the QED Lagrangian density in terms of an objectively predetermined interaction with a physical interpretation inspired by classical electrodynamics. The quantum world is more fundamental than the classical world, which is only a macroscopic limit of the former.

@ArnoldNeumaier: You are misunderstanding what VK is asking for. He is not asking for something crazy, he is asking for something annoying, which is just that you use physical renormalization language throughout, the way they did in Dyson's time. He finds this philosophically comfortable, and insists that other views are philosophically wrong. Ok, fine. Just talk in physical renormalization always, meaning fix the mass and charge of the electron at the physical values, and then you can talk without misunderstanding.

First, he defines the "correct" noninteracting theory by the free theory of photons and electrons, the free theory which reproduces the in-states of an S-matrix, or, if you like, the free Hamiltonian whose one-particle eigenstates correspond to n free photons and m free electrons with the proper mass and charge.

Then he is stating that the interaction is "proper" when it doesn't do anything to these one-particle states, namely, that it doesn't change the mass of the electron, or the coefficient for producing a single particle from a field, or the charge of the electron (as measured by the long wavelength field).

Ok. No problem! In a regularized theory, it is really asking for nothing particularly deep: just add the interaction plus the appropriate counterterms built in to cancel the modification of the physical state mass, charge, and wavefunction coefficient.

The ordinary interactions plus the physical-renormalization counterterms constitute a Vladimir Kalitvianski fixed interaction term, and there is nothing wrong or different in this. It is just that he has pigheadedly decided that this is philosophically "the right way", and so refuses to acknowledge that you can do renormalization also using a floating subtraction point, or using nonphysical states as your reference. That this also works is clear, but he doesn't like it, because it doesn't use the physical states as a starting point, so it seems that it is talking about unphysical objects which can't be represented.

All this is completely uninteresting discussion over semantics. The results of doing VK renormalization are exactly the same as doing ordinary renormalization with physical renormalization conditions (i.e. choosing the counterterms to fix the mass of the electron, and Z=1); each order in perturbation theory will end up exactly the same as in ordinary methods. It's simply philosophy, and there is no reason to bicker.

But VK further claims that once you do this, there are no more problems with QED. This is false, as the problems are not philosophical, but real. This is demonstrated in his renormalization scheme (the old-fashioned Dyson scheme) by resumming the leading logs. In other schemes, this is equivalent to RG flows to short distances. If you just translate everything to 1950s language, you will see that there is no real dispute here, despite the incendiary language and wild-sounding claims. It's just demanding that you use old-fashioned terms for everything, because he just hasn't acclimated himself to 1970s language yet.

@ArnoldNeumaier: Regarding the statement "The particle picture only makes sense at scales larger than the Compton wavelength", it is something that one needs to clarify. In perturbation theory, the particle picture makes sense in the Schwinger parametrization to describe the scattering; this is Feynman's main insight. But the nonperturbative definition of a particle formalism seemed opaque for a long time. It is very hard to make sense of the notion of a space-time localized photon outside of perturbation theory.

But it is possible. This is an advantage of stochastic quantization I only appreciated recently. Once you are in stochastic quantization, you have a nonperturbative particle picture which works, but at the cost that there is an unphysical stochastic time, which plays the role of the internal proper time of the particles. Except in stochastic quantization, this time is ticking universally for all particles. This is why stochastic quantization is so nifty, it solves the particle formalism issue once and for all. This is not something people talk about, but it is important to clarify. I asked and answered about this here: http://physicsoverflow.org/23546/is-there-nonperturbative-point-particle-formalism-for-qft

I am happy that at least Ron sees that I do not deny renormalization results. I consider the final results valuable and correct to a great extent. But along with them comes the "bare-particle physics". I consider it misleading and harmful. Getting the calculation results right is one thing, and there you can use any formalism of your choice; building a theory correctly is another, and there I see we do not understand the physics yet. We are obliged to use counter-terms because our initial "interaction" gives some bad corrections (together with good ones). It contains that $V_{special}$ I was talking about above. And renormalization is a way to remove the effects of $V_{special}$ from the calculation results. When we understand the physics correctly, we will not introduce $V_{special}$ in our interaction and we will not have to fight its harmful corrections. That is why I insist that our physical theory is still underdeveloped.

About arbitrarily splitting the total Hamiltonian: it may be arbitrary only when the full problem is physically and mathematically sensible, for example when applying numerical methods gives an exact solution, which certainly exists. For the perturbation theory, however, it is essential to choose the initial approximation as close to the exact solution as possible. Then the perturbative corrections are small and the perturbation series is practical - truncated, it may well satisfy our accuracy requirements.

Now, in QED and other QFTs we do not know whether the problem has physically and mathematically sensible solutions. We are trying this interaction and that interaction and analyzing the results. This situation is quite different from a well-posed problem. In this case we are guided by already good equations giving good results, but which are missing some small physical effects. Our initial approximation is based on them; they cannot be arbitrary when a theory is under development. If we could subtract the counter-terms in the Lagrangian exactly, we would be left with just the small physical interaction missing in the initial approximation. Unfortunately we cannot subtract them exactly, so we do not deal with a physical interaction, but with its ersatz. Promoting our temporary way of calculation to "guiding principles" is too early at this stage. What I see is an inadmissible dogmatisation of renormalization in physics (it is "unavoidable") and of renormalization methods. It does not help develop theory in physical terms, to say the least.

There is one more reason to be careful in choosing an initial approximation properly. I am afraid that we always deal with some sort of quasi-particles, collective excitations, so in one regime it is better to choose these quasi-particles, in another regime some others. It is like the difference between the Born approximation (nearly free particles) and a strongly correlated regime where the nearly free collective excitations are quite different.

(By the way, Ron, I would like to discuss QED "failure" at high $p$ in another thread.)

Yeah, ok, I figured out what you are asking for, it's nothing deep, and it's nothing important. It really is not what bothers people about renormalization. You are just rediscovering extremely old-fashioned perturbation theory for yourself (adding some interesting pet methods to it) and then ignoring the language everyone else uses. There is no value in ignoring the language everyone else uses; it is equivalent up to useless philosophical biases.

Your philosophical biases just make you more comfortable doing renormalization when the interaction is chosen so that the physical states are not altered by the interaction. Big deal. Use physical renormalization conditions. You can figure out the counterterms order by order. It's not different and it's not important. It doesn't change anything about the goodness of the equations (the perturbative series is still no good by itself in defining the theory). The failure of renormalization is a failure of the theory as a whole, not a failure of any order of perturbation theory.

The "failure at high p" is discussed everywhere in the literature, except it's disguised from you by using language you don't like! It's the same exact thing as the running coupling constant. The running coupling is still there in your preferred scheme, except now it's a shorthand for expressing the resummation of an infinite series of leading logs. The renormalization group is simply the act of noticing that instead of having a bunch of log corrections at higher and higher order at fixed high p, you can instead rewrite the perturbation series with the logs recentered at energy scale $\Lambda$ instead of at energy scale $m_e$, so that the logs that appear are $\log(|p|/\Lambda)$ and not $\log(|p|/m_e)$.

How do we do a leading log summation? The logarithm terms have an energy inside, so they need a denominator. The physical renormalization subtracts at the mass of the electron, so as to fix the electron mass. But you can purely mathematically subtract somewhere else. The subtraction point freedom shifts the leading logarithms, recentering the logs so that they are zero at another point, instead of at $m_e$. Mathematically, it is a resumming of the leading logs of the series, or alternatively and equivalently, it is a coupling constant which is a function of the energy scale.

The series with a subtraction point of $\Lambda$ arranges that the leading-order scattering at center-of-mass energy $|p|=\Lambda$ has the smallest possible corrections; the resummation can be absorbed into a change in $\alpha$, writing the exact same series again with a new $\alpha$. That's all that the "running coupling" means, it doesn't mean anything else.
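
The claim that the running coupling is shorthand for a leading-log resummation can be checked in a toy way; a sketch (hypothetical coefficient $b$, with $b = 2/3\pi$ playing the role of the one-loop QED coefficient): the geometric series of leading logs sums exactly to the one-loop running coupling.

```python
import math

def leading_log_partial_sum(alpha, b, L, nterms):
    """Partial sum of the leading-log series: alpha * sum_k (b * alpha * L)^k,
    where L = log(|p| / m) collects the large logarithm."""
    x = b * alpha * L
    return alpha * sum(x**k for k in range(nterms))

def running_coupling(alpha, b, L):
    """Closed-form resummation of the same geometric series:
    the one-loop running coupling alpha / (1 - b * alpha * L)."""
    return alpha / (1.0 - b * alpha * L)

alpha, b, L = 0.0073, 2.0 / (3.0 * math.pi), 10.0
print(leading_log_partial_sum(alpha, b, L, 12), running_coupling(alpha, b, L))
```

The truncated series and the closed form agree to machine precision here, which is the content of "resumming the leading logs = running the coupling" in miniature.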

Just because you haven't gotten used to it, you can't force the rest of the world to use your language. There is no advantage aside from some gains in intuition to use physical renormalization. The problems are the same in any scheme, because all the schemes are perturbatively equivalent; that's the content of the Stueckelberg-Petermann, Gell-Mann-Low, and Callan-Symanzik papers, which you should read before making more asinine philosophical complaints.

@RonMaimon: No, you misunderstood me. I do not distinguish the old and the new renormalization approaches technically. Any of them is good as long as we do not have a better physical formulation. I distinguish them conceptually. The old one clearly shows that we do not know how to formulate a reasonable physical theory. We just do not have the right interaction term. But if we could luckily subtract the counter-terms from $j\cdot A$ and expressed the "right" subtraction result in terms of unknown variables $\mathcal{L}_{int}^{right}$, it would not be the end of the story! If you read my papers carefully, you will see that we need a better initial approximation too. You missed that point completely. I am not for a particular renormalization scheme, I am for a better physical formulation. In simple terms it means that a part of the right interaction $\mathcal{L}_{int}^{right}$ mentioned above should be added to (included in) the initial approximation. A special split of $\mathcal{L}_{int}^{right}$, if you like. It means working with "dressed" particles from the very beginning. This is a reformulation. In a reformulated theory you cannot encounter the IR problem at all. All this was demonstrated on the Hydrogen atom description, on the electronium model, and in my toy model dealing with sticks and ropes. Too bad nobody pays attention to this.

Concerning QED failure, let us start another Chat thread.

We can subtract the counterterms exactly in 3d. While we can't write down the closed form expression for the "correct interaction" (physically renormalized interaction) in renormalized QED, I can tell you what it is:

$j\cdot A - A \bar\psi\psi - B F^2 - C \bar\psi \gamma\cdot{\partial}\psi$

This is your "sensible interaction", with appropriate constants A, B, C determined numerically on a lattice as a function of the lattice size a, or order by order in perturbations as a function of the cutoff energy $\Lambda$, or more or less exactly (asymptotically exactly) in superrenormalizable theories in 3d. The condition that the interaction "does as little as possible to the free theory" is simply the physical renormalization condition, and it fixes A,B,C uniquely at any a or any $\Lambda$.

Your quest to find the correct interaction is just to find A, B, C precisely in the continuum limit, where there is no a or $\Lambda$. This quest is totally quixotic because as far as we can see there is no continuum limit. In those cases where there is a continuum limit, superrenormalizable 3d theories or asymptotically free gauge theories, we can write down good enough formulas for the values of A, B, C at short cutoffs to do the calculations completely, so we know a good enough approximation to the correct perturbation.

But you don't have to be so careful. The problems in renormalization don't change if you take such care to minimize the impact of the perturbation, it doesn't help to do this, except for some infrared questions.

No, what you wrote is not the "correct interaction". You wrote $j\cdot A$ with counter-terms. Everybody, including me, knows it. The coefficients A, B, and C depend on $\Lambda$ whereas $j\cdot A$ does not. You wrote an ersatz, which is useful for practical calculations and is useless for theory development. And you severely underestimate the importance of IR problem resolving by choosing a better initial approximation. That is why your lattice calculations are not decisive. You just work with a bad formulation.

Your idea of QED being inconsistent mathematically demoralizes you and you are not motivated in searching for a better formulation. I do not share your skepticism. We can discuss the QED failures elsewhere.

I am not demoralized! I am very moralized. There is a cutoff-dependent coefficient in front of $j\cdot A$ too, I just forgot to write it down.

There is nothing you can do about this interaction, this is the correct interaction on the lattice, and it has all the properties you demand--- it fixes the physical mass and charge, it does nothing to single particle states (photons/electrons), and there is nothing to change about it, because it is a perfectly reasonable interaction term on the lattice or in a Pauli-Villars regulator, or wherever.

The question of taking a continuum limit, getting rid of the cutoff, is entirely separate from formulating the interaction, because the continuum limit only exists if there is a second order phase transition with appropriate properties on the lattice. This transition is nearly certainly not there for the theory of pure QED, from simulations, thinking, and perturbative calculations within the standard theory.

But there is no demonstration that there is no second order transition in the same theory with monopoles, and there are strong indications from Argyres-Douglas points that it is possible to make a nontrivial continuum interacting theory with both monopoles and charges, but these examples are only for cases where we have some analytic understanding. You shouldn't need any analytic understanding; we can simulate all these things on a computer, and check to see if the appropriate transition is there.

And there are cutoff-dependent contributions "inside" $j\cdot A$, visible only during calculation; that is why you cannot subtract the counter-terms exactly within this ersatz. You just have no idea what you deal with. There is a lot to do with the physical theory formulation. It may look similar, but it is different. The meaning of things is different. If you think there is nothing to do, then this is just because of your lack of experience. You have been chewing the same chewing gum for too long.

You think the computer simulation gives a model-independent result anyway, as if there could be no other models. Yes, the results are model-dependent. Not only that, the results are formulation-dependent within the same model, so your claim, following from simulations in a certain problem setup, is not objective.

No, you don't have "cutoff dependent contributions inside $j\cdot A$". I finally understood your calculations. You just didn't impose gauge invariance, so you felt you had a free choice of $\alpha(k)$, but you don't. Requiring that the thing is gauge invariant fixes the momentum mixing for A and the electron. Without gauge invariance, you don't have consistent coupling to A, and your results will depend on your gauge choice.

Moving the interaction around like you did is ultimately accomplished in path integrals with quadratic term rotations. This is not at all the source of the problems in ultraviolet divergences, but your analysis is interesting anyway, maybe even useful.

Your formulation might be interesting for polarons, surface modes in solids, or any other case where you have a system interacting with oscillators, and the interaction is more or less arbitrary. But you really need to make the derivation much shorter and clearer, and switch to a Lagrangian formulation, where your rewrite is just a much simpler change of variables.

Ron, the potential energy (or Hamiltonian) $-e\mathbf{r}\cdot\mathbf{E}$ is gauge-invariant, isn't it? There is no $A$ here.

The particle momentum is not gauge invariant, because of the transformation of the wavefunction.

$$ \psi(x)\rightarrow e^{ie\phi(x)} \psi(x) $$

$$ A \rightarrow A + \nabla \phi $$

Only the $(p-eA)^2$ interaction is a gauge-invariant modification of the nonrelativistic Hamiltonian. If you introduce interactions using only $E$, you don't get the proper gauge-invariant Hamiltonian. You have disguised this somewhat in your formalism by calling the $E$ the position and the $A$ the momentum, so that now the field "positions" are gauge invariant, but whatever; you still have to check that the Hamiltonian is gauge invariant (it isn't, unless you use the normal interaction; that's the principle that fixes $p-eA$).

The gauge invariance condition is stringent, because it modifies the wavefunction of the electron. You can't couple purely to $E$, because the electron wavefunction is altered under gauge transformations. This is why your formalism is well suited for condensed matter applications, like phonons, where this issue is absent.
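This covariance requirement can be checked directly. Here is a small numeric sketch (my own, not taken from anyone's paper, with made-up functions for $\psi$, $A$, $\phi$) verifying that $(p-eA)\psi$ picks up the same phase as $\psi$ itself under the gauge transformation above, using finite differences:

```python
import numpy as np

# Numeric sanity check (my own sketch): under
#   psi(x) -> exp(i*e*phi(x)) * psi(x),   A(x) -> A(x) + phi'(x),
# the combination (p - e*A)psi, with p = -i d/dx, transforms covariantly:
#   (p - e*A')psi' = exp(i*e*phi) * (p - e*A)psi.
e, h = 0.5, 1e-6                    # toy charge; finite-difference step

def d(f, x):                        # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

psi = lambda x: np.exp(-x**2) * (1 + 0j)   # arbitrary smooth wavefunction
A   = lambda x: np.sin(x)                  # arbitrary vector potential
phi = lambda x: x**2                       # arbitrary gauge function

def cov(f, Af, x):                  # (p - eA) f  with  p = -i d/dx
    return -1j * d(f, x) - e * Af(x) * f(x)

psi_t = lambda x: np.exp(1j * e * phi(x)) * psi(x)   # transformed psi
A_t   = lambda x: A(x) + d(phi, x)                   # transformed A

x0 = 0.7
lhs = cov(psi_t, A_t, x0)
rhs = np.exp(1j * e * phi(x0)) * cov(psi, A, x0)
print(abs(lhs - rhs))               # ~0: the coupling is gauge covariant
```

Coupling through $E$ alone, without the compensating phase on $\psi$, would fail this check; that is the whole content of the $p-eA$ principle.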

With mentioning $\mathbf{r}\cdot\mathbf{E}$, I just wanted to say that there may be different combinations involved in the theory. Some of them are explicitly gauge-invariant; some transform together with other things to produce a gauge-invariant combination.

In fact, gauge invariance means you can transform the wave function without transforming the potential, and the physical results will be the same. Similarly, you can transform the potential and leave the wave function untransformed. The latter is widely used in practice when we start from a particular gauge. What you wrote is the form-invariance transformation of the equation.

The previous comment is incorrect, gauge invariance always means transform the vector-potential and the wave function at the same time. If you do either one separately, it doesn't work. What I wrote is the only form, there is no other form. The confusion you have here is ultimately due to the confusion I explain below.

Regarding your paper, there is a central mistake in "Toward Correct QED". The previous sections are probably 100% correct, as they deal with momentum cross-coupling of mechanical oscillators and in this case, linear momentum coupling, the transformation you introduced is clear and it works. It's a canonical rotation, you are diagonalizing the p terms, and it indeed works to give a much better initial approximation to perturbation theory, everything you say is 100% right.

But you failed to notice that the QED interaction term is not at all of this form. Your interaction term is not properly localized at the location of the electron!

This is something that is very strange when you take an oscillator point of view, or if you took a class in radiation physics, so I will explain (I remember that I got confused about this when learning nonrelativistic QED). What you did is look at the term $(p-eA)$ and decide that it has the form "particle momentum minus field momentum", and therefore that it can be diagonalized in field/particle variables by making joint variables using your diagonalization process. You did not notice that it's $p-eA(X)$: the $X$ inside the parentheses is a quantum position operator. The interaction is not "momentum minus momentum", it's "momentum minus momentum evaluated at the position of the electron". The vector potential is being evaluated at an operator location!

The actual expression for this is very complicated and nonlinear in the position operator $X$. If you expand the field in modes:

$$ A(X) = \sum_{k,\epsilon}  \epsilon\, \alpha_{k,\epsilon}\, e^{ik\cdot X} + \mathrm{h.c.}$$

where $\alpha_{k,\epsilon}$ is the annihilation operator for a photon of momentum $k$ and polarization $\epsilon$ (the hermitian conjugate term carries the creation operator), and the $X$ is inside the exponential. You get the matrix elements for the electron transitions for a photon at $k$ from the operator $e^{ik\cdot X}$.

But you might have noticed that people hardly ever talk about exponentiating X in atomic calculations of photon emission! Why don't they?

What people do in practice is to expand the exponential in powers of $X$, and ignore all but the constant term. This is physically justified when $X$ is localized near a fixed point, say the origin, with small deviations (as in an atom), and the $k$'s all correspond to wavelengths much, much longer than the atomic radius.

Then the $p\cdot A(0)$ term is the leading approximation (replace the exponential with the first Taylor coefficient, 1; i.e., consider the vector potential at the location of the atomic nucleus rather than at the position of the electron). The interaction with a photon is then just $p\cdot \epsilon\, \alpha_{\epsilon,k}$: the atomic system just emits dipole radiation according to the dot product of the momentum and the polarization. This is an approximation which fails already at X-ray wavelengths, which are comparable to the atomic radius, a much smaller cutoff than the Compton wavelength you are considering as your cutoff, let alone the insane cutoffs where relativistic perturbation theory goes bad.

The next approximation beyond the dipole approximation (the dipole approximation for photon emission was introduced by Heisenberg in 1925, by the way, simultaneously with the development of quantum mechanics) just adds the next Taylor coefficient of the expanded exponential, $p\cdot\epsilon\,(ik\cdot X)$, and each further order adds the next Taylor coefficient (there is also the $A^2$ term to expand in exactly the same way, for two-photon emission). The dipole case is for when the atom interacts with the field at the origin only, and this is fine when the photon's phase factor $e^{ik\cdot X}$ is not varying appreciably over the extent of the atom.
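The regime of validity can be illustrated with a toy computation (my own sketch, with made-up numbers): the exact matrix element $\langle 1|e^{ikX}|0\rangle$ between harmonic-oscillator states versus its dipole approximation $ik\langle 1|X|0\rangle$, as the photon wavelength shrinks toward the "atomic" size $a$:

```python
import numpy as np

# Toy illustration (my own, numbers made up): exact matrix element
# <1| e^{ikX} |0> between harmonic-oscillator states, versus the dipole
# approximation ik<1|X|0>.  For k*a << 1 (wavelength much longer than the
# "atomic" size a) they agree; once k*a ~ 1, the dipole truncation fails.
a = 1.0                                        # oscillator length scale
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
integ = lambda f: np.sum(f) * dx               # simple quadrature

psi0 = (np.pi * a**2) ** -0.25 * np.exp(-x**2 / (2 * a**2))
psi1 = np.sqrt(2.0) * (x / a) * psi0           # normalized first excited state

exact  = lambda k: integ(psi1 * np.exp(1j * k * x) * psi0)   # <1|e^{ikx}|0>
dipole = lambda k: 1j * k * integ(psi1 * x * psi0)           # ik <1|x|0>

for k in (0.01, 0.1, 1.0, 3.0):
    rel = abs(exact(k) - dipole(k)) / abs(exact(k))
    print(f"k*a = {k:4.2f}   relative error of dipole approximation: {rel:.4f}")
```

The relative error grows rapidly once $ka$ is of order one, which is exactly the X-ray regime mentioned above.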

The proper expansion of the exponential produces an infinite series of ever more horrendous terms for emission and absorption of photons with various transition elements between atomic/free-electron states. This is the real interaction between nonrelativistic electrons and photons, it's actual QED. This is just not of the form you can diagonalize. The relativistic perturbation theory shows you that even though you can't diagonalize it in field variables, nonetheless you can renormalize it.

Except in the long-wavelength limit. In this case, where the dipole approximation is correct and all other radiative transitions are suppressed, the field oscillators are coupled in a simple momentum-momentum way, and you can do a diagonalization of the sort that you did.

This accidental property, true only at extremely long distances, is why you thought you had fixed QED, and also why you thought you could see how the radiation works--- your transformations are likely valid in regimes where the photon momentum is very, very low. This is also why it might be useful in infrared QED, to deal with infrared divergences. But here, it is possibly (likely) already known, although I don't know the literature well enough to say for sure.

Your extrapolation to relativistic QED is equally incorrect, again making the same mistake, of coupling the photon to the electron with no regard to where the electron is.  The diagonalization you do is correct for coupled oscillators, it is just not true that the nonrelativistic electron is coupled to photons like coupled oscillators.

I will write this in a review, but I don't want to be harsh. You have an idea of diagonalizing systems which is useful in other cases. I should also point out that the analogs of your method, diagonalizing the momentum, is known within field theory under another name, but it only works for special types of interaction: quadratic terms or background fields. The interaction between photon and electron is not of this sort when you treat the photons as oscillators (as opposed to classical background).

Ron, take your time; I do not diagonalize anything. I advance a diagonalized thing as an ansatz in a complicated case. It was just an idea, and my development stopped at this point. I just wanted to show that one can construct something different and more reasonable. I never had time to finish my study.

By the way, reviews are not about this particular article. And if you want to discuss my papers, let us open another thread for it. This one is already too long.

Ok, I don't need to take my time anymore. I didn't understand anything you were doing outside the simple models before, and you wrote a lot about the simple models, so I got stuck there; but I get it completely now. It's an interesting idea, but it's not so new; people often do things like this in Lagrangian language. The proper analog is choosing the quadratic piece in a Lagrangian to make a best fit to the interacting problem. For your case of fixed-electron-number QED, this can help with soft radiation.

The articles you gave for review with their models are (probably, I didn't check every line) totally fine, the models you give are correct, they get an improvement from your procedure, the improvement you see is real, it is from rotating kinetic terms, it happens.

The analog for nonrelativistic (or relativistic) QED doesn't exist because the interaction is not approximated well as any kind of kinetic-term mixing except for extremely long wavelength photon emission, where you can ignore the size of the things that make the photons, in short, infrared physics. If you look at the short wavelength electron-photon interaction, a mixing of modes of your sort cannot and will not ever simplify the zeroth order. I understand everything now.

I should point out that the complete lack of effect is made obvious in physical renormalization, where if you use physical counterterms, the interaction does nothing to the physical mass and charge, there is no change at all to the quadratic part of the "true free Lagrangian". Did you really think that people spent three decades studying something that can be fixed by a mode rotation?

The improvements you do are also understood and incorporated into modern perturbation theory. They are all geometric series resummation. This is something you do all the time in perturbation series work, and it is understood as modifying the quadratic initial approximation. But it never gets rid of the interaction, because the interaction is not quadratic. It also never makes the interactions less divergent, nor can it ever give a better starting approximation to electron structure at short distances, because the quadratic approximation at short distances is already best-possible. Such a rejiggered initial approximation can be used to include extremely soft-radiation compared to any scale in the problem, because there the modes of the EM field are coupled to the electrons in a dipole approximation, i.e. momentum minus momentum, and that's amenable to mode-mode rotation.
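The geometric-series character of such resummations can be made explicit in a zero-dimensional caricature (an illustration of the standard trick, with made-up numbers, not of anyone's specific calculation): inserting a constant self-energy $\sigma$ into a propagator order by order produces the partial sums of a geometric series, and the full sum merely shifts the quadratic (mass) term.

```python
# Zero-dimensional caricature of geometric-series resummation (illustrative
# numbers, not any real theory): inserting a constant "self-energy" sigma
# into a free propagator 1/(p^2 + m^2) term by term gives the partial sums
# of a geometric series; the full sum only shifts the quadratic (mass) term.
p2, m2, sigma = 2.0, 1.0, 0.4
g0 = 1.0 / (p2 + m2)                       # free propagator

# partial sums  g0 + g0*sigma*g0 + g0*(sigma*g0)^2 + ...
orders = [g0 * sum((sigma * g0) ** n for n in range(N + 1)) for N in range(8)]
resummed = 1.0 / (p2 + m2 - sigma)         # geometric sum: shifted mass term

for N, val in enumerate(orders):
    print(f"through order {N}: {val:.6f}   (resummed: {resummed:.6f})")
```

The resummed result is exactly a propagator with a shifted quadratic part, which is why this trick can only ever re-fit the free approximation, never remove a genuinely non-quadratic interaction.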

The interaction is not anywhere approximated better by a different quadratic piece, except to the small extent that the coupling runs, which can be thought of as the correct version of your procedure applied by resumming geometrically the leading logs into corrections of alpha. This changes the quadratic piece slowly to make a best-fit to scattering at energy p.
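The leading-log resummation behind the running coupling is again geometric. Here is a sketch using the standard one-loop QED form with only the electron loop included (the numbers are illustrative, and the scale choice is mine):

```python
import numpy as np

# Sketch of resumming leading logarithms into a running coupling, using the
# standard one-loop QED form with only the electron loop (illustrative).
# The series alpha * sum_n r^n with r = (alpha/(3*pi)) * L is geometric;
# its sum is the one-loop running coupling, Landau-pole denominator and all.
alpha = 1 / 137.036
L = np.log((91.2 / 0.000511) ** 2)         # log(Q^2/m_e^2), here up to ~M_Z
r = alpha / (3 * np.pi) * L

partial = [alpha * sum(r ** n for n in range(N + 1)) for N in range(6)]
resummed = alpha / (1 - r)                 # leading-log resummed coupling

print(f"alpha(m_e) = 1/{1 / alpha:.1f}")
print(f"alpha(M_Z) = 1/{1 / resummed:.1f}  (electron loop only)")
print(f"last partial sum: {partial[-1]:.8f}, resummed: {resummed:.8f}")
```

The partial sums converge quickly here because $r \ll 1$; the point is that the entire effect is a slow drift of the quadratic piece with scale, nothing more.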

You need to understand that your methods are understood, and incorporated into modern perturbation theory, they are not changing the conclusions, and they can't change the conclusions.

+ 0 like - 2 dislike

"Introducing an interaction into the interpretation would presuppose introducing an unphysical notion of something noninteracting to which interactions are added. This something does not exist. Thus the interaction you introduce into the interpretation is erroneous. Do not fool yourself - it is your error, not an error in QED."

(taken from here)

This "something" physically exists - always together with the right interaction. It can be approximately described by an equation for the center of mass or for an average position due to averaging. The latter equation is an equation where you do not see this interaction. Such an equation is physical, but approximate. That is why all equations in external fields work well. As simple as that.

answered Sep 11, 2014 by Vladimir Kalitvianski (102 points) [ revision history ]
edited Sep 11, 2014 by Arnold Neumaier
Most voted comments show all comments

It doesn't exist in QED, and it is only QED that I am discussing.

It might exist in your attempted approximate formulations but these are too far from being realistic to be taken seriously. They cannot compete with QED,  the best physical theory we have.

I have read all your papers. It was enough to lose interest, as you consider ad hoc interactions, prove something about them, and use your results as a justification for denouncing QED, which is (and already was in 1949) far stronger than anything you did.

Your approach is doomed because you are not following up - the slightest advance is already too difficult - and nobody will do it in your place.

At another place you excused yourself by complaining that you are alone whereas the physics community worked on QED for many years. But you are now (though having access to all the insight of the physics community) still far less advanced than the physics community was already in 1949.

In 1949, the progress was achieved independently by three different people (Tomonaga, Feynman, Schwinger), which shows that it could be achieved by single people with a moderate amount of work, if they only asked the right questions and tried to answer them! The fact that three very different independently found approaches were soon found to be equivalent (by Dyson) proved that QED is the correct way to go. This wouldn't have happened if any of the approaches had been a crutch.

You seem to have endless time to defend your criticism and your ideas but no time to advance your ideas into something that would answer even the questions answered in 1949. Because it is a dead end, not because one would need more time - no amount of time would make your method fruitful for QED. It may be fruitful for toy problems, but who cares? 

Rather than guess new equations that lack all of the basic principles proved to be correct through QED, you should - if you want to contribute to a new QED - take the regularized equations of QED, solve these by perturbation around one of your conjectured improved starting points (if you can find one), and show that one can get answers with less work than before. 

It's about time to read carefully what I propose.

The most careful reading doesn't help. In  http://arxiv.org/pdf/0811.4416.pdf you propose an equation (60) as the '' trial relativistic Hamiltonian of the Novel QED'' and praise it with the attributes ''It replaces the wrong “minimal coupling” (self-action) ansatz ...  the problems of IR and UV divergences do not exist in the Novel QED thanks to using the notion of electronium. No bare constants are introduced, no renormalization is necessary, and there is no such a feature as the Landau pole''. This is all true (and indeed you didn't introduce new parameters) but your marvellous new QED doesn't make any correct predictions! At least not any you advertise.

You reformulate a very successful theory (Old Honorable QED) into one (Novel QED) that is completely useless, at least as far as your powers to exploit your marvels are concerned. But despite your failure to reproduce known results, you feel encouraged to suggest

that the other “gauge” field theories should be reformulated in the same way

If your scientific ethos displayed in that paper (throw away old theories just because someone has ideas that he was not able to develop into a calculus where one can make some testable prediction)  were followed, theoretical particle physics would be dead soon.

Perturbation theory is very well developed and poses not a single challenge. If nevertheless the necessary perturbative calculations for establishing predictions from your Novel QED are so horrendous that you cannot perform them in your spare time within a period (2008-2014) far longer than the year (1947-48) the fathers of QED needed for theirs (filling only a few pages in a journal or textbook), your Novel QED is worthless since too difficult to use.

Novelty alone is not a quality sign. You need to convince others that what you do is better in a serious respect - nobody will do it for you; if you don't, your ideas are doomed. 

I was saying that Vladimir denounces QED (e.g., by calling the very coupling wrong that guarantees gauge invariance). The newest version of  his reformulation paper ([v13] from Tue, 20 May 2014) still says exactly the same statements I just quoted from the original 2008 version, including the ''wrong coupling'' of QED. Compared to 2008, there is some additional bla bla in the appendix but not the slightest supporting perturbation calculation. But the abstract promises 

For example,  ... in QED ... it means obtaining the energy corrections (the Lamb shift, the anomalous magnetic moment) quite straightforwardly and without renormalizations.

Empty talk, pure wishful thinking. If it were straightforward, he would long have done it as an exercise.

The 13th revision of the paper then wouldn't contain the classical prelude but would start by presenting the novel Hamiltonian with a 1-page motivation, followed by 24 pages of details of the perturbative calculation (more than enough to give complete detail), and a conclusion that every reader can check by comparing the predictions with the textbooks. 

For comparison, the original 1949 paper (11 pages only) computes the Lamb shift quite straightforwardly and with renormalization, ready to check for anyone with access to a library:

N.M. Kroll and W.E. Lamb, Jr., On the Self-Energy of a Bound Electron, Phys. Rev. 75 (1949), 388

The feedback you get here consistently is that you take your mouth far too full concerning QED and renormalization in QFT, while your ideas might have some validity for simpler problems and in the classical domain (where you actually provide substantial support through your derivations). But in the realm of QED you demonstrated nothing at all but still make big claims.

Except in your personal views that convinced no one, there is nothing at all wrong with modern perturbative QED if presented correctly (e.g., within causal perturbation theory or with a fixed huge but finite cutoff) - it is fully consistent without any infinities, and fully predictive. (The doubts about existence at very large energies or coupling are of a completely different nature and don't affect any of the testable predictions.)

Your grounds for claiming the opposite are based on nothing but wishful thinking, nourished by grand extrapolation from simple toy problems. Toy problems are useful to illustrate more complex existing theories, but they are not a scientifically acceptable way of making claims about nonexisting or not yet existing theories. Extrapolation from ideas successful in simple cases may be a sensible guide for choosing a research topic, but it ruins your reputation if you use it to make claims that everyone except you finds unsupported by the demonstrated evidence. The evidence must be on the real thing, not on the toy example, before you can begin to modestly make claims.

Most recent comments show all comments

@ArnoldNeumaier, what do you mean by denouncing QED, I thought it works rather well when applied inside its domain of validity?

To be fair, people were breaking their heads on the problem of QED from 1930 to 1949, before it was solved, and Feynman and Schwinger both spent a lot longer than a year on it. Stueckelberg proposed the renormalization program in 1941, and Bethe started thinking about it in 1947, so the comparison is not fair. When it was one person working alone (Stueckelberg from 1934 to 1947), nobody got anywhere. So blaming his rate of progress is not particularly apropos.

Grand extrapolation is natural when you think you have a new idea that nobody else knows. In this case, it is best to understand exactly what VK has done. Reading the paper you linked, this time I finally understood exactly what calculations he is doing, and it is really weird and not totally off base for case of particle coupled to continuum oscillators (for example, a polaron, electron coupled to phonons). But his "new interaction" has an arbitrary choice in it $\alpha(k)$, and this choice reflects the scale of the vector potential entering the kinetic term.

There are two possibilities, either his choice is equivalent to adding p-eA, or else his choice is not gauge invariant. I can determine which it is and write a review for this paper.

The intuitions are fine for polarons, but really irrelevant to QED (really, VK's condition that the interaction is not shifting the one-particle properties, is, as far as the ultraviolet renormalization is concerned, nothing more than a physical renormalization condition--- you can implement this perturbatively easily. He isn't implementing this as a physical renormalization condition, he isn't doing ultraviolet renormalization at all. Rather he is figuring out the long-wavelength nonrelativistic radiation production from a slowly shaken nonrelativistic electron, in the case where the field and electron degrees of freedom are mixed properly from the start).

He is using a Hamiltonian formalism (no covariant perturbation theory, so forget about pair-creation or reproducing modern perturbation theory, it would be a nightmare) just to rewrite the nonrelativistic one-quantum-particle system coupled to QED (cut off at the mass of the electron) in different variables, rediagonalizing the kinetic terms. I didn't know you could do this, and perhaps you can't--- the transformation is strange in the infrared modes, and perhaps doesn't reproduce QED coupling. But this transformation moves the coupling to the electromagnetic modes from the momentum of the particle to the position of the particle, and then the produced radiation is from the shaking of the particle from the action of the external potential directly, instead of from an interaction where the particle shakes the radiative field. It is really the exact quantum analog of the rewrite of the radiation reaction as the derivative of the force, and it is perhaps useful for the infrared issues in QED, or for polaron descriptions. It's irrelevant for renormalization (which is still just fine as always).

I don't understand how the heck he has a free choice in his rewrite--- the $\alpha_k$ are (he claims) a free choice, but they aren't. These are the scale of the vector potential, but the vector potential scale in the p term is determined from minimal coupling.

To VK: you need to check gauge invariance.

$$\psi(x) \rightarrow e^{ie\phi(x)} \psi(x)$$

$$A(x) \rightarrow A(x) + \nabla \phi  $$

Needs to leave the equation invariant. This determines your $\alpha(k)$, and probably not at the value you chose for them. I can't vouch for the accuracy of your method at all, I just finally got exactly what you are doing. You wrote it pretty unclearly, using all sorts of annoying conventions.

user contributions licensed under cc by-sa 3.0 with attribution required
