PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow.


  Analyticity and Causality in Relativity

+ 5 like - 0 dislike
2762 views

A few weeks ago at a conference a speaker I was listening to made a comment to the effect that a function (let's say scalar) cannot be analytic because otherwise it would violate causality. He didn't make this precise as it was a side comment, and although it intrigued me (I had never heard that said before) I didn't give it much thought until this morning.

Now that I think about it, it actually seems quite obvious. Just look in $1+1$ Minkowski space: suppose $f$ is analytic around some point $(t_0, x_0)$, then I can find some ball about this point such that at another point $(t,x)$ in the ball but outside the light cone of $(t_0,x_0)$ we have that $f(t,x)$ is completely determined by the value of the function and its derivatives at $(t_0,x_0)$. This seems to be against the spirit of causality.
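The reconstruction step in this argument can be checked numerically. Below is a minimal sketch (Python; not from the original discussion) for the entire function $f(t,x) = e^{t+x}$: summing its Taylor series about the origin reproduces its value at a spacelike-separated point, using only the derivatives at the origin.

```python
# Sketch: for f(t, x) = exp(t + x), every value near (t0, x0) is
# recovered from the derivatives at (t0, x0) alone, even at points
# outside the light cone of (t0, x0).
import math

def taylor_value(t0, x0, t, x, order=30):
    """Sum the 2D Taylor series of f(t,x) = exp(t+x) about (t0, x0).

    All partial derivatives of exp(t+x) equal exp(t0+x0) at the base
    point, so the (n, m) Taylor coefficient is exp(t0+x0)/(n! m!).
    """
    base = math.exp(t0 + x0)
    total = 0.0
    for n in range(order):
        for m in range(order):
            total += (base * (t - t0)**n * (x - x0)**m
                      / (math.factorial(n) * math.factorial(m)))
    return total

# (t, x) = (0.1, 0.5) is spacelike-separated from the origin (|x| > |t|, c = 1),
# yet the series reproduces f there from data at (0, 0) only.
exact = math.exp(0.1 + 0.5)
approx = taylor_value(0.0, 0.0, 0.1, 0.5)
print(abs(approx - exact))  # tiny: at the level of machine precision
```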

If the above is correct, does anyone know when this was first discussed? I imagine that it would have been quite a long time ago. It's interesting because until this conference I had never heard it said before. Perhaps it is deemed to be uninteresting/obvious?

This post has been migrated from (A51.SE)
asked Mar 3, 2012 in Theoretical Physics by Kyle (335 points) [ no revision ]
retagged Mar 18, 2014 by dimension10
Are you sure that the speaker talked about the analyticity of the fields themselves? I have only heard of analyticity in the context of correlation functions. There, we have "analytic in the upper half-plane = causal".

I doubt that an answer can be short. From experience discussing this problem (also confirmed by the answers and comments here), I have learned that even stating the problem is not simple. One colleague of mine even had the idea of using it as a PhD thesis topic ...


3 Answers

+ 3 like - 0 dislike

Not really, if you take the point of view that causality is about whether a "signal" can be sent faster than the speed of light.

Recall that whether a function is real analytic depends locally on whether there is a small neighborhood in which the Taylor series of a function converges to the actual function. So one may argue that by postulating a scalar field is real analytic, you are already postulating a fact that cannot be causally derived. The fact that you are using acausal knowledge to derive more acausal knowledge should not be viewed as contradicting causality.

To put it another way: real analyticity is also causally propagated: if you are given the scalar field (linear wave) equation and data that is real analytic in a space-time region, then the most you can say is that the solution is real analytic inside the domain of dependence. Outside the domain of dependence there is no guarantee that the solution will actually be real analytic.

Now, suppose that the universe conspires so that only real analytic functions exist as solutions to the scalar field equations. I claim that you still don't have a violation of causality: the main problem being that there exist no nonzero compactly supported real analytic functions. That precludes you from sending any signals! The domain of dependence theorem will still be true for real-analytic solutions: if two fields agree on a space-time region, they agree on the domain of dependence of the region. The problem is not that signals can be sent faster than the speed of light: the problem is that for real-analytic functions, the domain of dependence theorem is completely trivial: if two real analytic scalar fields agree completely on a non-empty, open, space-time region, the two scalar fields must agree everywhere.
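The claim that no nonzero compactly supported function can be real analytic is illustrated by the classical building block of bump functions. A small sketch (Python, with sympy assumed available): the function $g(x) = e^{-1/x^2}$, extended by $g(0)=0$, has all derivatives equal to zero at $x=0$, so its Taylor series there is identically zero and fails to reproduce $g$ on any neighbourhood.

```python
# Sketch: g(x) = exp(-1/x^2) (with g(0) = 0) is smooth but not
# analytic at x = 0 -- every derivative vanishes there, yet g is
# nonzero arbitrarily close to 0. Gluing such pieces to 0 is how
# compactly supported smooth (non-analytic) bumps are built.
import sympy as sp

x = sp.symbols('x')
g = sp.exp(-1/x**2)

# All derivatives extend continuously to 0 at x = 0.
derivs_at_zero = [sp.limit(sp.diff(g, x, n), x, 0) for n in range(5)]
print(derivs_at_zero)                # all entries are 0
print(g.subs(x, sp.Rational(1, 2)))  # exp(-4): g is not the zero function
```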

answered Mar 6, 2012 by Willie Wong (580 points) [ no revision ]
Yes, I think that this is along the lines of what I have concluded as well. A refined interpretation of what the speaker was saying is that causality is violated if you demand that every solution is always analytic. In other words (as you say in your answer), maybe it just turns out that a solution is analytic; however, if you demand that it remains analytic under some local perturbation (i.e. you change the solution a little at some point), then you will violate causality.

Let me just reiterate: "if you demand that it remains analytic under some local perturbation" is impossible: the only local (in the sense of compactly supported) perturbation that preserves real analyticity is the "0" perturbation, i.e. no perturbation at all.

+ 1 like - 0 dislike

Analytic functions are functions which are locally given by a convergent power series.

Analyticity of a function does not imply that, by knowing the values of all derivatives at one point, one can determine the value of the function at any other point.

In particular, for any values of $y_0$ and $y_1$ one can construct an analytic function $f$ such that $y_0=f(x_0,t_0)$ and $y_1=f(x_1,t_1)$.

Consequently, the argument that analyticity would break causality is not valid.
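A minimal sketch of this point (Python; the helper name is illustrative, not from the original post): a degree-one polynomial in $s = t + x$ is entire, hence real analytic everywhere, and can be made to take arbitrary prescribed values at two given spacetime points.

```python
# Sketch: an analytic function can be made to hit any two prescribed
# values at two spacetime points, so knowing f at one point does not
# pin down f at another.
def analytic_through(p0, y0, p1, y1):
    """Return f(t, x) = a*(t + x) + b (a polynomial, hence analytic)
    with f(p0) = y0 and f(p1) = y1.

    Requires t0 + x0 != t1 + x1, so the two points give distinct
    values of s = t + x.
    """
    (t0, x0), (t1, x1) = p0, p1
    s0, s1 = t0 + x0, t1 + x1
    a = (y1 - y0) / (s1 - s0)
    b = y0 - a * s0
    return lambda t, x: a * (t + x) + b

# Two points at equal time, spacelike-separated, with arbitrary values:
f = analytic_through((0.0, 0.0), 7.0, (0.0, 1.0), -3.0)
print(f(0.0, 0.0), f(0.0, 1.0))  # 7.0 -3.0
```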

answered Mar 4, 2012 by Piotr Migdal (1,260 points) [ no revision ]
I guess it depends on what you mean by analytic. For a real-analytic function, knowing the values of all the derivatives at one point does not imply that you can find the function at a different point. The classical example is $\exp(-\tfrac 1 {x^2})$ around $x=0$. But for a complex analytic function it is true. And it is also true in multiple variables, on domains called convergence polydiscs.

I meant real-analytic. Being a complex analytic function (aka holomorphic function) is a much stronger property.

Yes, they are locally given by a convergent power series, meaning the Taylor series must converge to the function in some open set of non-zero radius. For every point in the set I can find an $\epsilon$ ball around that point such that the value of the function in the ball is given by an infinite polynomial whose coefficients are completely determined by the value of the function and its derivatives at the center of the ball. Now the intersection of the $\epsilon$ ball with the outside of the lightcone of the point will be non-empty. Have I made an error in the argument?

@Sidious Lord: just to clarify, $\exp(-1/x^2)$ is smooth (i.e. infinitely differentiable) at $x=0$ but not analytic at $x=0$. Based on your language, I think this is what you meant.

@Kyle: Yes. I wasn't clear, but what I meant is that there is an ambiguity in extending a real analytic function beyond the disk of convergence. For example, I can add to a real analytic function on an interval $(-a,a)$ a function $f(x)$ such that $f(x)=0$ for $x \in (-a,a)$ while $f(x) = \exp(-\frac 1{(x-a)^2})$ for $x \notin (-a,a)$. The function $f$ is $C^\infty$, and all of its derivatives at zero vanish. Sorry for not being clearer before.

It doesn't matter, because the disk of convergence contains already points which are spacelike separated.

@CristiStoica: How do you define the disk of convergence in Minkowski signature? What metric do you use?

It depends on what topology you choose... I believe that most people use the usual topology on $\mathbb{R}^n$.

@Sidious Lord: The function $e^{-\frac 1 {x^2}}$ is defined in a coordinate system, say $(x,y,z,t)$. When you compute the convergence, you work in that coordinate system, which has the topology of $\mathbb R^4$. One can, if one wants, define the topology by a positive metric, but the topology precedes the metric and is independent of the particular positive metric used to define it. In no case does one use the Lorentz metric to define the topology, because one gets a non-separable topological space (lightlike-separated points would be indistinguishable, and Minkowski spacetime would be a mess).

+ 0 like - 0 dislike

There are some comments I'd like to make about this, and I'll collect them in an answer, although I don't know the required reference.


I remember (it was a long time ago) reading this in a university textbook on Partial Differential Equations by Prof. V. Iftimie, written in Romanian and published probably before 1990. I don't have the book, so I can't check whether any references were cited to support that point.


Someone who met Feynman told me that once Feynman asked a mathematical physicist, in relation to his new book on calculus, what he meant by a function that is differentiable only twice. The author of the book gave Feynman as an example a function defined piecewise from two analytic functions, so that the function is continuous at $0$ but only its first two derivatives exist there. Feynman replied "that's not a function!". My guess is that Feynman considered analytic functions to be more "real", from a physical viewpoint, than the artificial construction given as an example.


"a function (let's say scalar) cannot be analytic because otherwise it would violate causality"

Since we can define analytic functions on any differentiable manifold, nothing can stop us from defining as many analytic functions as we want on Minkowski spacetime. That's why I think that the question is about analytic physical fields.

Physical fields are supposed to be solutions of PDEs. A PDE can have non-analytic solutions (provided that the initial conditions are not given by analytic functions), which are weak or generalized solutions (for example distributions). It would be interesting if there are examples of natural solutions to the PDEs of physics which are so wild that they are not analytic even when restricted to some open set. My guess is that for any such field there is an open set on which its restriction is analytic. And the problem you raised will affect them too, because in any open set one can find two points separated by a spacelike interval.


Even if the physical fields in the universe are analytic, I think that we cannot actually use an analytic function in practice to send messages violating causality, because we cannot control its value and the values of all its partial derivatives at $(t_0,x_0)$ precisely enough to predict what happens at $(t,x)$. (If the function is a quantum field, we cannot know the value and the derivatives even in principle.)

OK, we don't need to control everything in detail. It would be enough to establish the convention that whether or not the function takes values in a given interval encodes a binary digit, so that we could send binary signals. The thing is that we can't even do this, because there are always two analytic functions that respect the constraints we impose to send the message, one of which is positive and the other negative at the destination point.

Whatever a non-analytic function can do, an analytic function can do too, to whatever precision we want. So we can't distinguish them by experiment, even if the experiment involves signals violating relativistic causality.
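That last point can be illustrated numerically. A sketch (Python, with NumPy assumed available): polynomials are analytic everywhere, yet they approximate the non-analytic function $|x|$ uniformly on $[-1,1]$, with the sup error shrinking as the degree grows.

```python
# Sketch: least-squares polynomial fits (in the well-conditioned
# Chebyshev basis) approximate |x| -- which is not analytic at 0 --
# uniformly on [-1, 1], better and better as the degree grows.
import numpy as np
from numpy.polynomial import chebyshev as cheb

xs = np.linspace(-1.0, 1.0, 2001)
target = np.abs(xs)  # continuous, but not analytic at x = 0

errors = {}
for degree in (4, 10, 20):
    coef = cheb.chebfit(xs, target, degree)  # analytic (polynomial) approximant
    errors[degree] = float(np.max(np.abs(cheb.chebval(xs, coef) - target)))
print(errors)  # the sup error decreases with the degree
```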

answered Mar 5, 2012 by Cristi Stoica (275 points) [ no revision ]

user contributions licensed under cc by-sa 3.0 with attribution required
