  Physics of asymptotic series and resummation

+ 7 like - 0 dislike
12887 views

Asymptotic divergent series occur in physics all the time, especially when we are doing perturbation theory. I've been reading about such series and their resummation in physics, following questions such as this and this, also notes from Marino (pdf) and an interesting paper (pdf) about using Padé approximants.

Many of the answers to the questions I'm asking are probably contained in the set of notes above or can be found online, but being unfamiliar with the subject I'd still appreciate insights from others. Also, apologies if some of the questions don't make sense.

1) Well, I guess the first question is: how do we know that a series result we get in physics (e.g. from perturbation theory) is an asymptotic series? After all, to be precise, we can only say a series is asymptotic to a function; a series is not asymptotic by itself (that statement is meaningless). Then, if a series is indeed asymptotic, we know that taking a finite-order truncation of the series will give us good approximations to the function in some limit.

The problem is that this means you need to know the function you are calculating to begin with, in order to say that a finite-order truncation is a good approximation to it. But that is just begging the question!

For example, I can get a series answer for the ground-state energy of the simple harmonic oscillator (SHO) with a quartic term, as an expansion in a small parameter (which I can presumably show is divergent, with zero radius of convergence). But how do I know a priori that taking just finitely many terms of the expansion brings me close to the actual answer?

2) Is there a general estimate I can make of how close I am to the answer if I keep only the first N terms of the perturbative expansion?

3) I see a lot of literature on divergent series in physics that simply states: let's try to get a finite number out of this divergent series, whether by Borel summation, Padé approximants, etc. But this is akin to saying 'shut up and just calculate'. My question is: why do these summation methods give sensible physical results at all?

After all, resummation of a divergent series is a mathematical tool, and in general there is no unique way of resumming, so on what grounds is, say, Borel summation more physical than other forms of summation?

As an example, the regularized sum \(\sum_{n = 1}^{\infty} n = -\frac{1}{12}\) is well known and used in physics, but why is that particular form of regularization physical?

I think the bottom line with my questions is this: I fully accept that divergent series occur in physics all the time, and they clearly contain information that we can extract, but I would like to understand better to what degree we can trust those results.

Thanks.

asked Feb 1, 2015 in Theoretical Physics by nervxxx (210 points) [ no revision ]

Let me try to clarify the point that, I think, is confusing you (sorry, I know it's a little repetitive). In the LIMIT of g going to zero, one DOES KNOW the amplitude A that one is trying to compute, because one can COMPUTE it in that LIMIT (not when g is very small but finite): \(\lim _{g\to 0} A=\lim _{g\to 0}\sum_{n=0}^{\infty} (g/2\pi)^n\,(2n-1)!!\)  This is a STRICT EQUALITY in my view. Do you agree so far? Now, since g is small but finite, one wonders about the convergence of the series on the RHS. One sees that it is not absolutely convergent, but one can prove that it is asymptotic to A around g = 0.

The key point is that A is computable (and unique!) when g is infinitesimally close to 0. And this is all one needs to prove that \(\sum_{n=0}^{N} (g/2\pi)^n\,(2n-1)!!\) is asymptotic to A around g=0 for all N.

Edit: Crossed out because the analogy is not good enough. See my answer below for an analogy.

Note that the case of Ei(x) when x goes to infinity is different, because one doesn't know (at least not in the method you showed) an equality between Ei(x) and a series (there is an additional integral), even when x goes to infinity. In this regard, it's more similar to the problem of finding an approximation to Ei(x) when x goes to zero. There one can COMPUTE, as in QFT or QM, an equality between Ei(x) and a series (in the limit), so that one can subsequently study the convergence of this series (it turns out to be absolutely convergent as well as asymptotic, but the point is that the starting point is the same: an equality between the object one wants to approximate (Ei or A) and a series in the relevant limit).

4 Answers

+ 4 like - 0 dislike

But wait! You just pulled the coefficients of the second series out of a hat, whereas the coefficients of the first series were generated through some procedure. And that procedure guarantees... ? [I don't know how to continue]

(Edited after discussion with drake to avoid misunderstandings).

If you speak of finite (fixed) $N$ in the expansion of a given function $A(x)$, then such an $A(x)$ may have many asymptotic series/functions $\tilde{A}(x)$ at a point, say, at $x=0$, because the definition of "asymptoticity", $A(x)-\tilde{A}(x)\to 0, \; x\to 0$, may be satisfied by many functions/series. If the asymptotic function $\tilde{A}(x)$ is a power series too, one can specify to what "degree" it is asymptotic to $A(x)$ at $x\to 0$, i.e., how many correct terms $\tilde{A}(x)$ reproduces. The invented series in your round brackets, $\tilde{\rm{Ei}}(x)\approx e^{-x}\left(x^{-1}-x^{-2}+(2!)^2x^{-3}+...\right)$, is asymptotic to the second order in powers of $x^{-1}$. The other power terms are different from those of $\rm{Ei}$, and the question now is to what extent your approximation may be useful to "represent" $\rm{Ei}$ at finite $x$, given that your $\tilde{\rm{Ei}}$ is a simple analytical formula.

An infinite series obtained from the expansion of $A(x)$ may be unique, but not very practical. Note that even a convergent series like $e^{-x} \approx 1-x+x^2/2-...$ (a polynomial) satisfies the asymptotic condition, but for finite $x$ a truncated series may become rather inexact. So another function, like the Padé approximation $e^{-x} \approx 1/(1+x+x^2/2+...)$, may be closer to $A(x)$ at a finite $x$ (more exact) than the polynomial itself. The approximation above uses not only several correct terms of the function's expansion, but also the extra information that the original function decays monotonically.
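As a quick numerical illustration of this point, here is a minimal Python sketch (the helper names are mine, only the standard math module is used) comparing the truncated Taylor polynomial of $e^{-x}$ with the Padé-type form built from the same terms:

```python
# Compare the truncated Taylor polynomial of exp(-x) with the Pade-type
# approximant 1/(1+x+x^2/2), which uses the same expansion terms but, like
# exp(-x) itself, stays positive and decays for large x.
import math

def taylor(x, n=3):
    """Partial sum of exp(-x): sum_{k<n} (-x)^k / k!"""
    return sum((-x)**k / math.factorial(k) for k in range(n))

def pade_like(x, n=3):
    """1 / (partial sum of exp(x)) -- agrees with exp(-x) to the same order."""
    return 1.0 / sum(x**k / math.factorial(k) for k in range(n))

for x in (0.5, 1.0, 2.0, 5.0):
    print(f"x={x:3}: exp(-x)={math.exp(-x):8.5f}  "
          f"Taylor={taylor(x):9.5f}  Pade-like={pade_like(x):8.5f}")
```

At $x=2$, for instance, the truncated polynomial gives $1-2+2=1$, while $1/(1+2+2)=0.2$ is much closer to $e^{-2}\approx 0.135$.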

If you do not have any information about $A(x)$ and cannot estimate its higher-power terms ($N\gg 1$), then you cannot say a priori whether its truncated series approximates the function well enough at a finite $x$. Convergent and "divergent" infinite series ($N\to\infty$) both satisfy the asymptotic condition anyway, but in practical calculations a truncation and the finiteness of $x$ are also involved. Thus, some additional work may be needed to obtain a satisfactory approximation. Any additional information about $A(x)$ may be useful to construct an analytical approximation to $A(x)$ at finite $x$. Sometimes an analytical approximation may reproduce only a small number of terms of the series for $A(x)$, but it may nevertheless be rather practical.

Let me consider, as an example, the following integral and its expansions:

$I(g)=\int_{-\infty}^{+\infty}e^{-x^2-gx^4}dx, $

$I_{g\to0}\approx \sum_{k=0}^{\infty}\; (-g)^k \frac{\Gamma(2k+1/2)}{k!}\approx 1.772(1-0.75g+3.281g^2-27.07g^3+...),$

$I_{g\to\infty}\approx\frac{1}{2g^{1/4}} \sum_{k=0}^{\infty}\; \left(\frac{-1}{\sqrt{g}} \right)^k\frac{\Gamma(k/2+1/4)}{k!}\approx \frac{1.813}{g^{1/4}}(1-0.338/\sqrt{g}+0.125/g+...).$

An analytical approximation $\tilde{I}(g)=\sqrt{\pi}\left(\frac{1+1.689\sqrt{g}}{1+1.689\sqrt{g}+3g+1.543g^{3/2}}\right)^{1/4}$ uses two correct asymptotic terms at $g\to 0$ and two correct terms at $g\to \infty$. Its precision with respect to $I(g)$ is better than 1.2% (Fig. 1).

[Fig. 1: comparison of $\tilde{I}(g)$ with the exact $I(g)$]
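One can check this kind of accuracy directly with a few lines of code; a minimal sketch (assuming NumPy/SciPy for the quadrature, function names are illustrative):

```python
# Compare the exact integral I(g) with the two-point analytical approximation
# I_tilde(g) quoted above, over a wide range of couplings.
import numpy as np
from scipy.integrate import quad

def I_exact(g):
    val, _ = quad(lambda x: np.exp(-x**2 - g*x**4), -np.inf, np.inf)
    return val

def I_tilde(g):
    s = np.sqrt(g)
    return np.sqrt(np.pi) * ((1 + 1.689*s) /
                             (1 + 1.689*s + 3*g + 1.543*s**3))**0.25

for g in (0.01, 0.1, 1.0, 10.0, 100.0):
    exact, approx = I_exact(g), I_tilde(g)
    print(f"g={g:7}: I={exact:.5f}  I_tilde={approx:.5f}  "
          f"rel. err.={abs(approx - exact)/exact:.3%}")
```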

P.S. It's maybe somewhat off-topic here, but sometimes one can obtain an expansion of the sought function $A(x)$ at another point too, say, at infinity. This is additional information about $A(x)$ which can be used for practical estimation of different approximations to $A(x)$. I wrote a preprint about it in 1992 (in Russian). It is available at http://fishers-in-the-snow.blogspot.fr/2014/01/blog-post_3.html . It contains some polemics with the ПМС (PMS) method, but it is useful nevertheless.

answered Feb 3, 2015 by Vladimir Kalitvianski (102 points) [ revision history ]
edited Feb 6, 2015 by Vladimir Kalitvianski
Most voted comments:

If it is the relative difference that has to go to zero, then you are right. I had in mind the possible presence of terms like $ae^{-1/x}$ that do not show up in a power series. Also, there may be approximating functions that are impossible to represent as a power series and compare term by term. But if we limit ourselves to polynomials, then you are right, of course.

I used a power series as an example, but it doesn't need to be a power series. It's just the definition of asymptotic approximation. In particular, the asymptotic expansion of Ei(x) when x goes to infinity is NOT a power series, as a comment on my answer shows.

Yes, my statement is only valid for finite $N$, not for $N\to\infty$, if we compare two series. But we can compare two functions, $A(x)$ and $\tilde{A}(x)$, without expanding them. Their difference or their relative difference is not always reduced to some $x^{N+1}$ or to $x$ as $x\to 0$, it seems to me. But I may be wrong.

Even though we have an explicit infinite series for $A(x)$, summing it up exactly is not always possible. We may use, for example, Padé approximations of a finite order, which are nonlinear functions of $x$. Among the many Padé approximations of a given order $N$, there are some which are "better" than others at finite $x$, although asymptotically they are equivalent to the same order. I mean, for practical purposes we always deal with finite $N$ and thus have a big variety of approximations differing at finite $x$. This is especially important for studying/estimating the numerical values of $A(x)$ in the "strong coupling" regime, for example.

@VladimirKalitvianski, I know it has been a long time since you posted this answer, but could you expand on how you used the two asymptotic series to create the approximation $\tilde{I}(g)$?

I noticed that $I(g)$ decreases slowly at $g\to\infty$ ($I\propto g^{-1/4}$) and that the expansion parameter is really $\sqrt{g}$ rather than $g$. So I tried to build an approximation for $I^4$ in the form of a rational fraction $\tilde{I^4}\propto\frac{1+a\sqrt{g}}{1+b\sqrt{g}+c(\sqrt{g})^2+d(\sqrt{g})^3}$, and by multiplying the corresponding expansions of $I^4$ by the denominator I obtained algebraic equations for the coefficients $a,b,c,d$. Voilà.

P.S. You may try a similar approach to the function $I^2$, if you like to practice.
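For the curious, here is one way to reconstruct the matching conditions numerically (a sketch of my own, equivalent to multiplying the expansions of $I^4$ by the denominator as described above):

```python
# Ansatz: I(g)^4 ~ pi^2 * (1 + a*s) / (1 + b*s + c*s^2 + d*s^3), with s = sqrt(g).
# Matching two orders of the weak-coupling expansion I^4 = pi^2*(1 - 3g + ...)
# forces the spurious s^1 term to vanish (b = a) and gives c = 3.  Matching the
# two leading strong-coupling terms,
#   I^4 = (Gamma(1/4)^4 / (16 g)) * (1 - 4*Gamma(3/4)/(Gamma(1/4)*sqrt(g)) + ...),
# gives a/d = Gamma(1/4)^4/(16*pi^2) and 1/a - c/d = -4*Gamma(3/4)/Gamma(1/4).
import math

ratio = math.gamma(0.25)**4 / (16 * math.pi**2)    # = a/d
sub   = -4 * math.gamma(0.75) / math.gamma(0.25)   # = 1/a - c/d
c = 3.0
d = (1.0 / ratio - c) / sub        # from a = ratio*d and 1/a - c/d = sub
a = ratio * d
b = a
print(f"a = b = {a:.3f},  c = {c:.0f},  d = {d:.3f}")  # close to the quoted 1.689, 3, 1.543
```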

Most recent comments:
In your 1st paragraph you are referring to an *asymptotic approximation of order N*. I think that the uniqueness the OP was talking about refers to an *asymptotic approximation for all N*.
@drake: I did not mean a finite $N$ only (although in practice we deal with finite $N$, due to the impossibility of calculating all terms). My statement is general, i.e., it is valid for any "asymptotic approximation for all $N$".
+ 4 like - 0 dislike

In general, if one just has a power series and knows nothing about its origin, one cannot say anything about its precise value - it is mathematically ambiguous. Note that any power series $\sum_{n=0}^\infty a_n x^n$ is asymptotic to many different functions; indeed, for any sequence of numbers $b_n\ge 1+n!|a_n|$, the sum

$$f(x):=\sum_{n=0}^\infty a_n x^n (1-\exp(-x^2/b_n))$$

is absolutely convergent for all real $x$ and has the required power series expansion. 

However, in practice, power series don't simply fall from heaven but have a context in which they are interpreted. In many cases the context defines the function whose power series expansion is given. In this case one knows, in an abstract sense, the function it represents, and hence can talk meaningfully about errors and about whether it is an asymptotic series. Moreover, in many cases one can use analytic techniques to derive properties of this function, not only the asymptotic series but also some qualitative information. If this information is good enough, it can be used to reconstruct the function uniquely from the asymptotic series by a process called (re)summation.

A method for resummation consists of an algorithm for associating to certain classes of series in $x$ a related convergent series or sequence that, under certain assumptions about a function $f(x)$ to which the series is asymptotic, either has the limit $f(x)$ or has a limit from which $f(x)$ can be reconstructed by a closed formula (e.g., an integral). Thus the algorithm comes together with a proof that, under precisely stated assumptions, $f(x)$ is uniquely determined by its asymptotic expansion and agrees with the alternative expression. These assumptions are usually based on complex analysis and assert that $f$ is analytic or meromorphic in certain regions.

For example, while one cannot say anything in general about the nonconvergent sum $\sum_{n=1}^{\infty} (-1)^nn$, one can consider the asymptotic series $\sum_{n=1}^{\infty} n x^n$, which reduces for $x=-1$ to the above sum. Assuming that the function it represents is analytic in a domain containing $[-1,0]$, and knowing that the series converges for $|x|<1$ to $f(x)=x/(1-x)^2$, which is an easy exercise (differentiate the geometric series and multiply by $x$), one can deduce $f(-1)=-1/4$. This is the correct value under the stated assumptions. Thus in any context where these assumptions are known to be satisfied, one is allowed to use this method of resummation.
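A minimal numerical sketch of this resummation (plain Python; the cutoff of $10^5$ terms is arbitrary):

```python
# Inside |x| < 1 the series sum_{n>=1} n*x^n converges to x/(1-x)^2;
# letting x -> -1 then assigns the value -1/4 to 1 - 2 + 3 - 4 + ...
def f_series(x, nmax=100000):
    return sum(n * x**n for n in range(1, nmax + 1))

def f_closed(x):
    return x / (1 - x)**2

for x in (-0.9, -0.99, -0.999):
    print(f"x = {x:7}: series = {f_series(x):+.6f}   x/(1-x)^2 = {f_closed(x):+.6f}")
print("value at x = -1:", f_closed(-1.0))   # -0.25
```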

There are a handful of different assumptions under which one can derive such resummation methods. (In addition to what you mentioned there are, e.g., Borel summation and zeta function regularization.) In many cases where several of these methods are applicable, they all lead to the same result. This also makes the methods applicable as heuristics in cases where the assumptions haven't been verified theoretically.

In particular, this is frequently done in physics, where the level of rigor is anyway much lower than in mathematics, and in numerical algorithms, where the results are inaccurate anyway. Of course, being a heuristic means that there is always the possibility that the result is wrong, but this hasn't deterred people from using summation methods with great success in many situations where rigorous results are hard to get.

answered Apr 5, 2015 by Arnold Neumaier (15,787 points) [ revision history ]
edited Apr 6, 2015 by Arnold Neumaier
+ 4 like - 1 dislike

1) Because the asymptotic "convergence" is relative to a POINT. This point is often g = 0 in perturbation theory. And around that point (g = 0) you can bound the difference between the function and the truncated expansion at a given order by the first omitted term.

Asymptotic series (around a point): convergence as that SINGLE point is approached, at fixed order (number of terms).

Convergent series: convergence in the limit of infinitely many terms, at ALL fixed points.

2) A QFT amplitude has the form

 \[A\sim\sum_{n=0}^{\infty} a_n=\sum_{n=0}^{\infty} (g/2\pi)^n\,(2n-1)!!\]

Because the factorial grows very rapidly, the series does not converge (absolutely). However, it does converge asymptotically around g = 0. The condition is:

 \[\lim_{g\to 0} \frac{A-\sum_{n=0}^{N}a_n}{a_N} = 0\]

Because of the limit (which should actually be $g\to 0^+$), we can replace the numerator by its dominant term, so that

\[\lim_{g\to 0} \frac{A-\sum_{n=0}^{N}a_n}{a_N} = \lim_{g\to 0}\frac{a_{N+1}}{a_N}=\lim_{g\to 0}\,(2N+1)\,\frac{g}{2\pi} =0\]

Now, asymptotic convergence around g = 0 implies that for a finite \(g = g_0\) (different from 0), beyond a certain truncation order N the relative error gets worse. This N is given by the condition

\[\left|\frac{A-\sum_{n=0}^{N}a_n}{a_N}\right| \approx \left|\frac{a_{N+1}}{a_N}\right| < 1 ,\]

which translates into \(N \sim \pi/g\). In QED (g = 1/137), this is \(N\sim 430\), whereas the experimental accuracy restricts us to N = 4.

Edit: I misunderstood the question: if you take the first N terms, the estimate of your error is given by the (N+1)-th term, as long as \(N < \pi/g\).

Toy integral analogy:

Consider the integral:

\[I(g)=\int _0^{\infty}\frac{e^{-t}}{1+g\,t}\, dt\]

where g is non-negative. 

For \(g\to 0^+\) we can replace the denominator by its power series expansion and integrate term by term, getting:

\[\lim _{g\to 0^+}\, I(g) = \sum _{n=0}^{\infty}\, (-1)^nn!\, g^n\]

The series does not converge because of the factorial, but due to the previous equality, the series \[\sum _{n=0}^{N}\, (-1)^nn!\, g^n\] is asymptotic to I(g) for \(g\to 0^+\)  for all N.

The reason for the lack of convergence of the series \[\sum _{n=0}^{\infty}\, (-1)^n\, n!\, g^n\] is that the power series expansion of \(\frac{1}{1+g\,t}\) only converges for t less than 1/g (honest question: what is the analogue of this in QFT?). Another way to see this is that for negative g the integrand has a non-integrable singularity at t = -1/g, so that the integral isn't defined. This is essentially Dyson's argument for the non-convergence of the series in QFT (from Wikipedia): if the series were convergent in a disk of nonzero radius centered at g = 0, it would necessarily include negative values of the coupling constant g. But if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. This would "reverse" the electromagnetic interaction so that like charges would attract and unlike charges would repel. This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is 'sick' for any negative value of the coupling constant, the theory doesn't exist and the series cannot converge; it is only an asymptotic series.
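To see the asymptotic (rather than convergent) character concretely, here is a small numerical sketch (assuming SciPy for the quadrature) comparing the truncated series with the integral; the error shrinks up to roughly \(N\sim 1/g\) and then blows up:

```python
# Exact toy integral I(g) = int_0^inf exp(-t)/(1 + g*t) dt versus the truncated
# asymptotic series sum_{n=0}^{N} (-1)^n n! g^n.
import math
from scipy.integrate import quad

def I_exact(g):
    val, _ = quad(lambda t: math.exp(-t) / (1 + g*t), 0, math.inf)
    return val

def partial_sum(g, N):
    return sum((-1)**n * math.factorial(n) * g**n for n in range(N + 1))

g = 0.1                              # optimal truncation expected near N ~ 1/g = 10
exact = I_exact(g)
for N in (2, 5, 10, 15, 20, 25):
    print(f"N = {N:2d}: |S_N - I(g)| = {abs(partial_sum(g, N) - exact):.2e}")
```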

3) I cannot improve this link: https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/

answered Feb 1, 2015 by drake (885 points) [ revision history ]
edited Feb 7, 2015 by drake
Most voted comments:

Thank you for your answer. However, I feel like you did not address my questions exactly. Yes, asymptotic convergence is relative to a point, but that's not the point here (forgive the pun) - after all, we all know that we are looking at \( g = 0\) (I just didn't state it in my question explicitly). But an asymptotic series must also make reference to some function in order for it to be called asymptotic.

This is embodied in the formula you wrote for the definition of an asymptotic series: the series is asymptotic to \(A\) as \(g \to 0 \) if \(\lim_{g \to 0} \frac{A - \sum_{n=0}^{N} a_n }{a_N} = 0\) for all \(N\). But this condition requires knowledge of your unknown function \(A\), so how can you know the series is asymptotic to \(A\) in the first place? In your estimate for the number of terms to optimally keep in point 2), you have assumed that the series is indeed asymptotic to \(A\). I'm asking if that assumption is justified.

Let me give an example: the exponential integral \(\text{Ei}(x) \equiv \int_{x}^{\infty} \frac{e^{-t} }{t} dt \) has an asymptotic expansion as \(x \to \infty\) of the form \(\text{Ei}(x) \sim e^{-x} (\frac{1}{x} - \frac{1}{x^2} + \frac{2!}{x^3} + \cdots)\). One can show the RHS is an asymptotic series to the LHS because one can write \(\text{Ei}(x) - S_N(x)\) (where \(S_N\) is the partial sum of the asymptotic series) as an integral and bound it by \(N! e^{-x}/ x^{N+1}\). Then dividing by \(a_N\) and taking the limit, which is 0, shows it is asymptotic. However, checking the definition relied on the fact that I had an integral form of \(\text{Ei}(x)\) to begin with. Now, the point I'm trying to make here is that in physics problems we don't have prior knowledge of what \(\text{Ei}(x)\) is. Suppose I give you the series \(e^{-x} (\frac{(0!)^2}{x} - \frac{(1!)^2}{x^2} + \frac{(2!)^2}{x^3} + \cdots)\) and ask: is it asymptotic to \(\text{Ei}(x)\) as \(x \to \infty\) (you not knowing the integral form of \(\text{Ei}(x)\))? You wouldn't know!
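(For concreteness, the bound above is easy to check numerically; a small sketch, assuming SciPy, whose exp1 is exactly the integral \(\int_x^\infty e^{-t}/t\, dt\) written here:)

```python
# Check that the remainder of the asymptotic series for Ei(x) (as defined above,
# i.e. SciPy's exp1) after N terms is bounded by N! * exp(-x) / x^(N+1).
import math
from scipy.special import exp1

def S(x, N):
    """First N terms: exp(-x) * sum_{n=0}^{N-1} (-1)^n n! / x^(n+1)."""
    return math.exp(-x) * sum((-1)**n * math.factorial(n) / x**(n + 1) for n in range(N))

x = 10.0
for N in (1, 2, 4, 8):
    remainder = abs(exp1(x) - S(x, N))
    bound = math.factorial(N) * math.exp(-x) / x**(N + 1)
    print(f"N = {N}: |Ei(x) - S_N(x)| = {remainder:.3e}  <=  {bound:.3e}")
```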

Thus it feels like when we do physics and use perturbation theory, we always implicitly assume that the series we get is automatically asymptotic to our a priori unknown function (Green's function, energy, etc.) as \(g \to 0\). I guess in some cases you can compute your desired function non-perturbatively (some other representation of it) and then show that the perturbative series is asymptotic, but if one has the non-perturbative solution at hand already, then there is no need to do perturbation theory in that particular problem anyway.

With regard to 3) and Terry Tao's exposition (which is brilliant mathematically, but I would like some more physical arguments), I am being led to the following picture: regularized summation is physically useful because the regulator can potentially be understood as arising from some yet unknown physical process (damping of higher modes, etc.), and different choices of the regulator are unimportant and will give universal results if the regulators and the series obey nice enough properties. Would that be a reasonable understanding of extracting something physical out of a divergent series?

So what is confusing me is this: I come up with two series. Series 1: \(e^{-x} (\frac{1}{x} - \frac{1}{x^2} + \frac{2!}{x^3} + \cdots)\), and Series 2: \(e^{-x} (\frac{(0!)^2}{x} - \frac{(1!)^2}{x^2} + \frac{(2!)^2}{x^3} + \cdots)\).

I ask the question: is Series 1 asymptotic to \(\text{Ei}(x)\), which is defined to be the exponential integral (I'm going to drop the phrase "in the limit x to infinity" because it always applies)? Is Series 2 asymptotic to \(\text{Ei}(x)\)? Now, assuming \(\text{Ei}(x)\) has an asymptotic expansion, it must be unique (see link). So the only possible outcomes are that 1) S1 is asymptotic but not S2, 2) S2 is asymptotic but not S1, or 3) neither S1 nor S2 is asymptotic.

What I find puzzling about your method is that you say you only need to look at the first omitted term in the formal series in order to answer the question. Well, I know the formal series for both; I take the first omitted term for each series and see that one is bounded by \(N!e^{-x}/x^{N+1}\) while the other is bounded by \((N!)^2e^{-x}/x^{N+1}\). In both cases, for each fixed N, the quantities vanish as x goes to infinity.

You would therefore conclude (if I understand your assertion correctly) that both series are asymptotic to \(\text{Ei}(x)\), a contradiction. 

So, read as it is, I don't understand your method of showing asymptoticity. 

(Your counter-argument would probably begin like: "But wait! You just pulled the coefficients of the second series out of a hat, whereas the coefficients of the first series were generated through some procedure. And that procedure guarantees... ?" [I don't know how to continue])

(Also, sorry if it might seem like I am being finicky; if you think what you are saying is obvious and I am just not getting it, please say so.)

Yeah, that would be my reply. No worries, you are not being finicky.
@drake: The Euler integral must start from zero rather than from $g$. It's a typo, I guess.
@VladimirKalitvianski Thanks! It's not Euler's integral, but there is indeed a typo: the lower limit should be zero. You are 100% right!
Most recent comments:

Your concerns are fair. Indeed, \(Ei(x) - S_N(x)\) can be expressed for all x as an integral by integrating \(Ei(x)\) by parts N times. This integral is bounded from above by \(N!\, e^{-x}/x^{N+1}\) FOR ALL x, as you claim. But for \(x\to \infty\) (NOT for all x), this bound can be found by looking at the first omitted term in \(S_N(x)\). Indeed, \(S_N(x) =\sum_{n=0}^{N-1}\, b_n(x)= e^{-x}\,\sum_{n=0}^{N-1} (-1)^n\, n!/x^{n+1}\), so (the absolute value of) the first omitted term is \(|b_N(x)|\), which is indeed the bound you wrote. This is my understanding, but I might be wrong. Let me know if you don't agree.

Well, I'm not sure what your analysis is showing. What happens next after you bound \(|b_N(x)|\)?

+ 3 like - 0 dislike

1) ...But how do I know a priori that taking just finitely many terms of the expansion brings me close to the actual answer?

2) Is there a general estimate I can make of how close I am to the answer if I keep only the first N terms of the perturbative expansion?

I think drake gave a formal estimate of the accuracy of a given truncated series $S_N$ as $O(g^{N+1})$, or as the next (discarded) term, which is good in practice only for small $g$. For finite $g$, one has to have more numerical information about the expanded function to compare with. Truncating a series and using finite values of $g$ are two extra things that may change how the accuracy is estimated. A third thing is our tolerance $\epsilon$. We are never interested in absolute accuracy of our approximation, so we choose some tolerance within which we may try different approximations.

Sometimes we can "improve the convergence" of a given series and thus construct something more suitable for practical application. Let me give here some thoughts I had about 30 years ago, when I was a young researcher and dealt with series like the one arising from drake's toy integral.

Looking at a series like $\sum (-1)^n n! g^n$ and seeing the fast-growing coefficients, I was at first convinced by the "asymptotic" reasoning that is usually given in textbooks and other sources. But, as I said, truncation, the finiteness of $g$, and the tolerance $\epsilon$ slightly shifted my mind from a "theoretical" (mathematical) understanding to something I would call a constructive approach. I noticed that even a convergent series like $e^{-x}\approx 1-x+x^2/2 -x^3/6+...$, when truncated, grows in magnitude as the highest power of $x$ in the polynomial, so it becomes bad anyway. I concluded that this was because we used powers of $x$, which are small for small values of $x$ and big for $x\gt 1$. I thought that if we used some function $f(x)$ that grows more slowly for $x\gt 1$, the "convergence" of a truncated series would be better. The function $f(x)$ must behave as $x$ for small $x$, but grow more slowly as $x\to\infty$.

To illustrate the usefulness of this approach, I propose to consider the toy integral expansion to the third power of the small parameter and compare the approximation accuracies for three cases. I denote the toy integral as $E(x)$ because I used $I(g)$ in my first answer for another integral. So $E(x)\approx1-x+2x^2-6x^3+...$. Fig. 1 shows how "practical" different polynomial approximations are:

[Fig. 1]

Then I choose, instead of $x$, another function in which to expand $E(x)$, namely $Y(x)=\ln(1+x)$. I re-expand $E(x)$ in powers of $Y(x)$ and obtain another polynomial: $E(x)\approx 1-Y+1.5Y^2-(25/6)Y^3+...$. Note that the series coefficients become smaller, and the numerical value of $Y$ is smaller than $x$, so the next (discarded) term is smaller too. This means a better practical accuracy of the new series. Fig. 2 shows it:

[Fig. 2]

Just for another comparison, I choose one more function, namely $Z(x)=x/(1+x)$. It grows even more slowly than $Y(x)$ for big $x$. Fig. 3 shows the improved accuracy of the series in powers of $Z$:

[Fig. 3]
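The re-expansion itself, and the comparison behind Figs. 1-3, can be reproduced with a short script (a sketch, assuming sympy and SciPy; the exact $E(x)$ here is drake's toy integral):

```python
# Re-expand the truncated series E(x) ~ 1 - x + 2x^2 - 6x^3 in powers of
# Y = ln(1+x) and Z = x/(1+x), then compare all three cubic approximations
# with the exact integral E(x) = int_0^inf exp(-t)/(1 + x*t) dt.
import math
import sympy as sp
from scipy.integrate import quad

x, u = sp.symbols('x u')
E_series = 1 - x + 2*x**2 - 6*x**3

x_of_Y = sp.series(sp.exp(u) - 1, u, 0, 4).removeO()   # x as a series in Y
x_of_Z = sp.series(u/(1 - u), u, 0, 4).removeO()       # x as a series in Z
E_in_Y = sp.series(E_series.subs(x, x_of_Y), u, 0, 4).removeO()
E_in_Z = sp.series(E_series.subs(x, x_of_Z), u, 0, 4).removeO()
print("E in powers of Y:", E_in_Y)   # 1 - Y + (3/2) Y^2 - (25/6) Y^3, as quoted
print("E in powers of Z:", E_in_Z)

def E_exact(xv):
    val, _ = quad(lambda t: math.exp(-t) / (1 + xv*t), 0, math.inf)
    return val

for xv in (0.3, 0.5, 1.0):
    in_x = float(E_series.subs(x, xv))
    in_Y = float(E_in_Y.subs(u, math.log(1 + xv)))
    in_Z = float(E_in_Z.subs(u, xv / (1 + xv)))
    print(f"x = {xv}: exact = {E_exact(xv):.3f}   in x: {in_x:+.3f}   "
          f"in Y: {in_Y:+.3f}   in Z: {in_Z:+.3f}")
```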

Within a given tolerance $\epsilon$, the "convergence radius" may effectively be increased.

Thus, sometimes we can improve the series convergence by re-expanding the sought function in powers of more slowly growing functions. Truncated series at finite $x$ are of different accuracy, although asymptotically ($x\to 0$) all such series are equivalent. Qualitatively this can be understood as being due to the smaller discarded term - it has a smaller coefficient, and the value of $Y$ or $Z$ is smaller too at a given finite $x$. We can go on choosing different $f(x)$ to make the discarded terms even smaller and, therefore, improve the "quality" of our new series. (When $f(x)=1-E(x)$, the new series is just a function linear in $f$, coinciding with $E(x)$.) See some other examples here.

I did not publish these ideas, and I share them with you just to demonstrate that there is still room for creativity even in "well established and well understood" subjects.

answered Feb 8, 2015 by Vladimir Kalitvianski (102 points) [ revision history ]
edited Feb 13, 2015 by Vladimir Kalitvianski

Congratulations, you just rediscovered renormalization! This is precisely about re-expressing a meaningless sum as a series of better-behaved terms.

In your case you have a function $E(x)$ and write it as a function $E(x)=F(Y(x))$, with a nice $Y(x)$ and a convergent expansion for $F$. Renormalization in quantum field theory is conceptually only a small step away from this. You have a function $E(g_0,\Lambda)$ (where $g_0$ is called the bare coupling and $\Lambda$ the cutoff) and write it as a function

$$E(g_0,\Lambda)=F(Y(g_0,\Lambda),\Lambda)=F(g,\Lambda),$$

where $g:=Y(g_0,\Lambda)$ is called the renormalized coupling, and regard everything as a function of the better-behaved $g$ rather than $g_0$; thus one can now forget about $g_0$. The traditional renormalization recipes guarantee that $F(g,\Lambda)$ is well-behaved and has a good limit for $\Lambda\to\infty$, i.e., when the cutoff is removed. Therefore this is taken as the definition of the renormalized theory with physical coupling $g$.

Arnold, I feel that "a small step away" is not that small in my eyes.

As I mentioned in the main text, a series may be divergent not only due to "growing" coefficients multiplying small terms $x^n$, but also in the case of "small" coefficients and big values of $x^n$ when $x\gg 1$. In this case the series cannot be used directly, but needs summing up. In QED this is the case for the infrared corrections. Fortunately, they can be summed up in a finite formula due to the simplicity of the original series.

Concerning your renormalization reasoning, I understand everything in it, but to me there are no bare constants like $g_0$. There is a physical constant $g$ and bad perturbative corrections to it. They are bad, in my opinion, due to bad physics and mathematics introduced by us into good physical equations at a certain stage of theory development. Pretending that the constant is not physical, i.e., that the initial equation is wrong precisely in $g_0$, while the perturbative corrections (or "interactions of wrong particles") are correct and all this describes reality, is not convincing to me, to say the least. In a correctly formulated (even though incomplete) theory there is no sensitivity to $\Lambda$, nor unnecessary corrections to the physical constants. This is how I see this resummation and renormalization business.

''They can be summed up in a finite formula''? Your resummed formula is still an infinite series, but it has better approximation properties - this is precisely the same as what happens when you resum the series arising in quantum field theory.

For the case of QFT, note that unknown short-distance physics means that QED is valid only with some large but finite cutoff $\Lambda$. For each finite value of $\Lambda$, $x:=g_0$ exists and is very large, so naive perturbation theory yields a useless series. Note that every coefficient is finite; just $x$ is very large, as in your toy example above. Upon resummation, one gets a much better convergent power series in the renormalized $g:=Y(x)$, which is well-behaved for large $x$, just as in your toy example above. Hence numerical convergence is obtained with very few terms, both in your toy example and in QED. Moreover, in the renormalized formula, the precise value of $\Lambda$ does not matter at all, as one can take the limit $\Lambda\to\infty$ in the final series without causing numerical problems. Thus there is no sensitivity with respect to $\Lambda$. The physical constant is the renormalized $g$, not $g_0$. But $g_0$ is in no sense wrong, because the two sums are completely equivalent on the formal level (and would be on a nonperturbative level if one had a way to access it). It is just a numerically poor and hence irrelevant way of parameterizing the problem, just like using the cotangent to parameterize a tiny angle.

To improve the analogy, generalize your example a little and consider $E(x,\Lambda):=\sum_n e_n(\Lambda)x^n$, where $e_n(\Lambda)$ is some expression that alternates and grows like $n!$ but has a parameter in it. You can still apply your resummation principle, but for optimal quality you must choose your function $Y$ to depend on $\Lambda$ if you want a resummed series that works for all large $\Lambda$ and not just for one. This means that the renormalized argument $g=Y(x,\Lambda)$ becomes cutoff-dependent.

Discussing "unknown short-distance physics" is off topic here, but it is interesting and I propose to move our comments about renormalization from here to Chat.

I deleted the reference to unknown short-distance physics. Everything else is on topic ("asymptotic series and resummation"), as it is just a variation of the renormalization technique you proposed (without calling it that), and it shows how the technique extends from your toy model to other, more realistic physics.

Still, as far as the renormalization analogy is concerned, I would like to discuss it elsewhere. I have something to say, but it is not about asymptotic series.

Well, then start something new in chat, and refer to here for context.
