PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow.

  Reversing gravitational decoherence

+ 7 like - 0 dislike
11659 views

[Update: Thanks, everyone, for the wonderful replies! I learned something extremely interesting and relevant (namely, the basic way decoherence works in QFT), even though it wasn't what I thought I wanted to know when I asked the question. Partly inspired by wolfgang's answer below, I just asked a new question about Gambini et al.'s "Montevideo interpretation," which (if it worked as claimed) would provide a completely different sort of "gravitational decoherence."]

This question is about very speculative technology, but it seems well-defined, and it's hard to imagine that physics.SE folks would have nothing of interest to say about it.

For what follows, I'll assume that whatever the right quantum theory of gravity is, it's perfectly unitary, so that there's no problem at all creating superpositions over different configurations of the gravitational metric. I'll also assume that we live in de Sitter space.

Suppose someone creates a superposition of the form

(1) $\frac{\left|L\right\rangle+\left|R\right\rangle}{\sqrt{2}},$

where |L> represents a large mass on the left side of a box, and |R> represents that same mass on the right side of the box. And suppose this mass is large enough that the |L> and |R> states couple "detectably differently" to the gravitational field (but on the other hand, that all possible sources of decoherence other than gravity have been removed). Then by our assumptions, we ought to get gravity-induced decoherence. That is, the |L> state will get entangled with one "sphere of gravitational influence" spreading outwards from the box at the speed of light, and the |R> state will get entangled with a different such sphere, with the result being that someone who measures only the box will see just the mixed state

(2) $\frac{\left|L\right\rangle\left\langle L\right|+\left|R\right\rangle\left\langle R\right|}{2}.$
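For concreteness, the passage from (1) to (2) can be checked in a toy numpy sketch in which the two outgoing "spheres of gravitational influence" are idealized as perfectly distinguishable (orthogonal) environment states:

```python
import numpy as np

# Box states |L>, |R>; the two "spheres of gravitational influence"
# are idealized as orthogonal environment states |g_L>, |g_R>.
L, R = np.array([1.0, 0.0]), np.array([0.0, 1.0])
gL, gR = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Entangled box+field state: (|L>|g_L> + |R>|g_R>) / sqrt(2)
psi = (np.kron(L, gL) + np.kron(R, gR)) / np.sqrt(2)

# Reduced density matrix of the box: trace out the field.
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)   # indices [i, k, j, l]
rho_box = np.einsum('ikjk->ij', rho)

print(rho_box)   # diag(1/2, 1/2): the mixed state (2)
```

Someone measuring only the box sees diag(1/2, 1/2); the coherence of (1) now lives entirely in the box-field entanglement, which is why reversing the decoherence requires acting on the field as well.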

My question is now the following:

Is there any conceivable technology, consistent with known physics (and with our assumption of a dS space), that could reverse the decoherence and return the mixed state (2) to the pure state (1)? If so, how might it work? For example: if we'd had sufficient foresight, could we have surrounded the solar system with "gravity mirrors," which would reflect the outgoing spheres of gravitational influence back to the box from which they'd originated? Are exotic physical assumptions (like negative-energy matter) needed to make such mirrors work?

The motivation, of course, is that if there's no such technology, then at least in dS space, we'd seem to have a phenomenon that we could justifiably call "true, in-principle irreversible decoherence," without having to postulate any Penrose-like "objective reduction" process, or indeed any new physics whatsoever. (And yes, I'm well aware that the AdS/CFT correspondence strongly suggests that this phenomenon, if it existed, would be specific to dS space and wouldn't work in AdS.)

[Note: I was surprised that I couldn't find anyone asking this before, since whatever the answer, it must have occurred to lots of people! Vaguely-related questions: Is decoherence even possible in anti de Sitter space?, Do black holes play a role in quantum decoherence?]

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user Scott Aaronson
asked Aug 27, 2012 in Theoretical Physics by ScottAaronson (795 points) [ no revision ]
Most voted comments
Sigh. No. This is off-topic here, but it's really a pity that quantum mechanics tends to be taught in a way that obscures this point.

– SE-user Matt Reece
Actually, lurscher, I'm confused, since I found this answer where you yourself explain why measurement need not be nonunitary. I don't think I understand what you're objecting to. physics.stackexchange.com/questions/10068/…

– SE-user Matt Reece
@MattReece, short version: observers see interactions of external systems as always unitary, but see interactions of systems with themselves as non-unitary. Decoherence's only role is to explain the attenuation of interference terms and the convergence of the distribution to a classical one, but the above holds without any consideration of interference terms.

– SE-user lurscher
@ScottAaronson: I feel like your question gestures in several different directions, only some of which are being answered. dS caught my eye because I wondered if you had thermal effects in mind, but it also has the property that you could imagine that the photons (or whatever) you'd need to bring together to re-cohere a state could get separated by the exponential expansion of space, making them causally disconnected, and thus preventing you from getting back the original state. That might be another kind of irreversibility to think about. But none of these issues sounds fundamental to me...

– SE-user Matt Reece
@MattReece: Yes, I felt exactly the same way!! Ron sort of turned this into a discussion about decoherence in QFT, and I was happy to go along because I found that interesting, but what I really wanted to ask about was the broader issue of decoherence that's irreversible in principle, and the possible sources of it in known physics. And yes, it did seem to me like dS could "help" by pushing stuff outside the cosmic horizon. I feel that, before rushing to decide the philosophical question of whether this or that decoherence source is "fundamental," we should first figure out if they exist!

– SE-user Scott Aaronson
Most recent comments
@JimGraber: Then the obvious question becomes, when do the von Neumann projections happen and what causes them? The whole point of this question was to explore how far decoherence can really be used to sidestep the famous difficulties associated with the measurement problem. But for this question, there's really no need to get into the measurement problem itself.

– SE-user Scott Aaronson
@ScottAaronson Understood that this is a side issue here; I have therefore reintroduced it as a separate question.

– SE-user Jim Graber

8 Answers

+ 9 like - 0 dislike

If we do an interference experiment with a (charged) particle coupled to the electromagnetic field or a massive particle coupled to the gravitational field, we can see interference if no information gets stored in the environment about which path the particle followed (or at least, if the states of the environment corresponding to the two paths through the interferometer have a large overlap --- if the overlap is not 1 the visibility of the interference fringes is reduced).

The particle is "dressed" by its electromagnetic or gravitational field, but that is not necessarily enough to leave a permanent record behind. For an electron, if it emits no photon during the experiment, the electromagnetic field stays in the vacuum state, and records no "which-way" information. So two possible paths followed by the electron can interfere.

But if a single photon gets emitted, and the state of the photon allows us to identify the path taken with high success probability, then there is no interference.

What actually happens in an experiment with electrons is kind of interesting. Since photons are massless they are easy to excite if they have long wavelength and hence low energy. Whenever an electron gets accelerated many "soft" (i.e., long wavelength) photons get emitted. But if the acceleration is weak, the photons have such long wavelength that they provide little information concerning which path, and interference is possible.

It is the same with gravitons. Except the probability of emitting a "hard" graviton (with short enough wavelength to distinguish the paths) is far, far smaller than for photons, and therefore gravitational decoherence is extremely weak.

These soft photons (or gravitons) can be well described using classical electromagnetic (or gravitational) theory. This helps one to appreciate how the intuitive picture --- that the motion of the electron through the interferometer should perturb the electric field at long range --- is reconciled with the survival of interference. Yes, it's true that the electric field is affected by the electron's (noninertial) motion, but the very long wavelength radiation detected far away looks essentially the same for either path followed by the electron; by detecting this radiation we can distinguish the paths only with very poor resolution, i.e. hardly at all.
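The wavelength dependence described above can be illustrated with a standard quantum-optics formula (used here as a modeling assumption, not anything specific to this experiment): if each path dresses the field with a coherent state, the fringe visibility is the overlap |⟨α_L|α_R⟩| = exp(−|α_L − α_R|²/2), and for soft radiation the displacement difference shrinks roughly like d/λ:

```python
import numpy as np

# Fringe visibility when each path dresses the field with a coherent
# state: |<a|b>| = exp(-|a - b|^2 / 2) (standard coherent-state overlap).
def visibility(delta_alpha):
    return np.exp(-abs(delta_alpha) ** 2 / 2)

# Toy scaling assumption: the displacement between the two field states
# goes like (path separation d) / (wavelength lambda).
d = 1.0
for lam in (0.5, 5.0, 500.0):
    print(f"lambda = {lam:6.1f}   visibility = {visibility(d / lam):.6f}")
# Long-wavelength ("soft") radiation barely distinguishes the paths,
# so the visibility approaches 1 and interference survives.
```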

In practice, loss of visibility in decoherence experiments usually occurs due to more mundane processes that cause "which-way" information to be recorded (e.g. the electron gets scattered by a stray atom, dust grain, or photon). Decoherence due to entanglement of the particle with its field (i.e. the emission of photons or gravitons that are not very soft) is always present at some level, but typically it is a small effect.

answered Aug 27, 2012 by John Preskill (160 points) [ no revision ]
pretty concise answer, +1

– SE-user lurscher
-1: This is complete nonsense. There is NO DECOHERENCE from static fields. This is an obvious fact.

– SE-user Ron Maimon
I'm sorry I was unclear. I meant to consider an interference experiment in which a particle can travel along either path 1 or path 2 between spacetime points A and B, where neither path is a geodesic. Then the particle emits radiation, but my point is that if the radiation is very soft we won't be able to learn whether the particle followed path 1 or path 2 by measuring the radiation field. Therefore, the two paths can interfere.

– SE-user John Preskill
+1 from me too.

– SE-user Joe Fitzsimons
Thanks so much, John!! In case it's helpful to others, here's my personal doofus model for what you're saying: it would be like you had a qubit in state a|0>+b|1> which then became entangled with some other degree of freedom (in this case, a long-wavelength photon). But because of the photon's limited ability to resolve the 0 and 1 states, you just get "soft" decoherence, as would happen if you mapped the state a|0>+b|1> to a|0>(|0>+eps|1>)+b|1>(|0>+eps|2>) for some small eps>0.

– SE-user Scott Aaronson
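Scott's eps-model in the comment above is easy to check numerically: the environment states |0⟩+eps|1⟩ and |0⟩+eps|2⟩ have overlap 1 before normalization, so the off-diagonal coherence of the qubit is only suppressed by the factor 1/(1+eps²):

```python
import numpy as np

# The "doofus model": a|0> + b|1> entangles with a qutrit environment,
#   |0> -> |0> (|0> + eps|1>),   |1> -> |1> (|0> + eps|2>).
a, b, eps = 1 / np.sqrt(2), 1 / np.sqrt(2), 0.1

e0 = np.array([1.0, eps, 0.0])          # environment state for |0>
e1 = np.array([1.0, 0.0, eps])          # environment state for |1>
psi = a * np.kron([1.0, 0.0], e0) + b * np.kron([0.0, 1.0], e1)
psi /= np.linalg.norm(psi)

rho = np.outer(psi, psi).reshape(2, 3, 2, 3)
rho_q = np.einsum('ikjk->ij', rho)       # trace out the environment

# Coherence survives up to the small suppression factor 1/(1+eps^2):
print(abs(rho_q[0, 1]) / abs(a * b))     # ~0.990 for eps = 0.1
```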
Not only does this clear up my confusion about how decoherence in QFT could work, but in retrospect, I see why this is really how it had to work -- the alternatives would either leak out the "which-path" information immediately or else never leak it out at all (without violating causality). Just one remaining sanity check: I suppose this weak coupling is the reason why Penrose has to postulate strange new physics for his gravitational collapse, rather than just saying that a massive object's which-path information should "leak irreversibly" into the gravitational field?

– SE-user Scott Aaronson
Yes, I think that is correct. Penrose's proposal is very speculative. His estimate of the decoherence rate is far higher than standard gravitational theory would indicate.

– SE-user John Preskill
When one accounts for all of the known physical processes by which particles are "dressed" by zero-mass bosonic vacuum excitations (whether electromagnetic or gravitational), it becomes less evident that high-order quantum decoherence can be confidently reduced to scalably low levels. In particular, collective ("superradiant") dynamical decoherence is ubiquitous; e.g., the teaching of the Casimir effect (Landau and Lifshitz, Theory of Continuous Media) is that no mirror is decoherence-free. Some physical idealizations (e.g., perfect mirrors) are just plain wrong!

– SE-user John Sidles
@John Preskill, with a view toward clarifying the various subtleties and imprecisions associated with notions like "dressed particles" and "mirrors", the literature survey below has been augmented to show how modern ideas from quantum information theory serve to naturalize and universalize the 20th century understanding of global conservation and local transport processes that is due to Dirac, Onsager, Casimir, Callen, Landau, Green, Kubo, etc. Even today, there is plenty that we do not understand about "dressed particles" and "mirrors."

– SE-user John Sidles
@JohnPreskill The equivalence principle should be a strong guide as it relates to gravitational decoherence. If we understand tidal forces correctly, gravity between two interacting objects can be reinterpreted as two non-interacting objects interacting with a third object. Essentially, there is always a frame of reference in which gravity can be neglected. This could be interpreted as meaning there is always some transformation that can, in principle, remove any gravitationally induced decoherence.

– SE-user Hal Swyers
+ 6 like - 0 dislike

I'm probably straying into dangerous territory here, but let me venture an answer. Doing so is probably just asking to be shot down by John Preskill, or some other such expert, but let me stick my neck out.

Despite Ron's comments, gravity and EM are different in this context, in the sense that you can't flip the sign of the gravitational interaction the way you can with EM. On a deeper level, they should behave in a similar way, though: The only way to get decoherence (without assuming additional baggage from some particular interpretation of QM) is to create a non-local state, such that the reduced density matrix for a local observation is mixed. This is essentially at the heart of things like the Unruh effect, where an accelerating observer observes a mixed state.

The difficulty with talking about unitary operations is that this means taking a spacelike slice of the state of the universe, and doing so is going to introduce all sorts of observer effects. In particular, the main problem is going to be horizons, since information will have leaked beyond the event horizon for some observers. So for some observers there will be no unitary which reverses the decoherence, while for others there will be.

This isn't that weird. Even in Minkowski space, when we lose a photon, we can never hope to catch it again (ignoring the slight slowing induced by the Earth's atmosphere, and the even slighter effects in interplanetary and interstellar space). So there is no unitary we can ever perform which could reverse this.

On the other hand, we can make a transformation of frames to that of an observer who perceives the process as unitary, and the same can be the case in more general spacetimes (although I am not convinced this is always true). For example, the decoherence induced in the frame of a continuously accelerating observer disappears if the observer stops accelerating.

answered Aug 27, 2012 by Joe Fitzsimons (3,575 points) [ no revision ]
Most voted comments
@RonMaimon: Are you simply saying that, if you leave the monopole in superposition for a long enough time, it will emit enough long-wavelength particles that eventually the off-diagonal density matrix elements will go to zero? If so, then that's perfectly consistent with the picture that John explained and that (I think) I now understand. No need for the Tourette's-like "I am not wrong" repetition.

– SE-user Scott Aaronson
@ScottAaronson: I am not wrong, you keep saying that I am wrong, and I am not, and everyone else is not exactly wrong, but missing the point. I am not talking about measuring emissions from the monopole, there are no emissions from the static superpositions (imagine a monopole in far-separated double well). When you turn on the SQUID, it is the SQUID that measures the field state, it is the SQUID that entangles with the photon field, and it is the SQUID's emissions that decohere the monopole, not the monopole's emissions. The monopole just entangles with the SQUID through the field.

– SE-user Ron Maimon
@RonMaimon: The reason I keep searching for a better interpretation of what you're saying is that the straightforward reading makes no sense. It can't possibly be the case that the off-diagonal entries of the monopole's reduced density matrix are large, then you turn on a SQUID miles away, then suddenly the off-diagonal entries go to 0 because of the fact of the SQUID having measured (without conditioning on the outcome). Not one of your many comments has yet addressed this simple, irrefutable point about locality. (Incidentally, I don't see daylight between John and Joe's position and mine.)

– SE-user Scott Aaronson
@ScottAaronson: In case of discrepancy I defer to John!

– SE-user Joe Fitzsimons
(To put it as simply as I possibly can: by the time a faraway SQUID gets turned on and makes an informative measurement, some decoherence horse must already be out of the barn. Even in QM, an observation of the horse miles away can't cause it to have left the barn retrocausally.)

– SE-user Scott Aaronson
Most recent comments
@RonMaimon: Now we're just discussing QM, nothing specific about fields, so I feel very confident in saying Joe is correct. Let's go back to my model situation (from a comment on John's post), where a qubit a|0>+b|1> gets mapped to a|0>(|0>+eps|1>)+b|1>(|0>+eps|2>) for some small eps>0. John (via email) and Joe both said this is a fine model for what happens when a long-wavelength photon is emitted, and you haven't disputed that. Now, notice that the reduced density matrix of the first qubit changes, the moment it gets entangled with the second system. No need to "actually measure" anything!

– SE-user Scott Aaronson
@JoeFitzsimons: I know how entanglement works, and that it is the same as measurement. Measurement for me is shorthand for "entanglement with irreversible system". Please stop with the red herrings. I am not wrong about anything, and you are saying irrelevant things to distract from this. Take a magnetic monopole, and put it half on one side of a quantum dot and half on the other. If you bring a real SQUID close, and you measure the field accurately, you will detect which side of the box the monopole is in, and you will decohere the superposition. That doesn't violate anything.

– SE-user Ron Maimon
+ 4 like - 0 dislike

I think you're getting a bit ahead of yourself. This seems to be a variation of the "Schrödinger's lump" thought experiment discussed by Penrose[1] as a motivation for his own theory of gravitational objective collapse. I think he makes an important point which is relevant to your example also: namely, the state that you write down in your Eq. (1) is not well-defined. Before we can ask questions about reversibility and dynamics in such a thought experiment, we need to explain what we mean by "a superposition of space-times".

In particular, a superposition of matter at different positions in quantum mechanics is only understood with reference to some background metric. If each of the terms in your superposition, $|L\rangle$ and $|R\rangle$, themselves correspond to different metrics, then with respect to whose time co-ordinate do they evolve (or remain static, as the case may be)? With respect to what background structure do we compare the two different metrics, each of which corresponds to a different position of the mass? I challenge you to re-write the state of Eq. (1) making the dependence on space-time co-ordinates explicit.

I share your surprise that relatively little attention seems to have been given to such thought experiments. It seems to me that coming up with toy models to give consistent answers to questions such as this is a logical starting point in searching for a deeper theory.

[1]: Gen. Rel. Grav. 28,5, 581-600 (1996)

EDIT: (in light of Scott's comment below)

Okay, let us see how far we can get without worrying about the finer details. We set up a gravitational decoherence experiment a la Preskill, with the decoherence occurring on detection of a "hard" graviton by a detector. Since our unspecified theory of QG is unitary, there ought to be some way in principle for us to reverse the decoherence. A necessary condition is that the system + detector (S+D) must be enclosed within a boundary such that no which-path information can leak outside the boundary. We need to effectively isolate the system and detector from the environment.

While it is possible to shield the S+D from electromagnetic leakage using mirrors, it is not obvious that we can stop the gravitons from leaking out. Trivially, we could do this by taking S+D to include the entire universe, but the lack of any external observer is problematic for the operational meaning of the experiment. Instead, let us simply assume that a gravitational mirror-box can be constructed. Would this solve our problem?

It seems that it would. The combined system S+D would be effectively isolated, hence its evolution would be, by assumption, unitary and thus reversible. In particular, it would return to its initial state after the Poincaré recurrence time, leaving the detector disentangled from the system once more.
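The recurrence argument can be illustrated in a two-qubit toy model (a deliberately minimal stand-in for S+D, with nothing gravitational about it): under a periodic joint unitary, the detector first soaks up the system's coherence and then returns it.

```python
import numpy as np

# System qubit + detector qubit evolving under H = Z (x) X.
# Since H^2 = I, exp(-iHt) = cos(t) I - i sin(t) H, so the joint
# evolution is exactly periodic -- a miniature Poincare recurrence.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.kron(Z, X)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi0 = np.kron(plus, [1.0, 0.0])          # system |+>, detector |0>

def coherence(t):
    psit = np.cos(t) * psi0 - 1j * np.sin(t) * (H @ psi0)
    rho = np.outer(psit, psit.conj()).reshape(2, 2, 2, 2)
    rho_sys = np.einsum('ikjk->ij', rho)  # trace out the detector
    return abs(rho_sys[0, 1])             # off-diagonal coherence

print(coherence(0.0))         # ~0.5 : full coherence
print(coherence(np.pi / 4))   # ~0   : detector has decohered the system
print(coherence(np.pi / 2))   # ~0.5 : coherence restored unitarily
```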

The question, therefore, is whether a "gravitational shield" can be constructed in principle. At a glance, it appears not, since the equations of GR do not permit us to exclude any part of the energy-momentum tensor when using it to determine the (global) metric - at least as far as I know.

Note that this would not be an argument against "truly irreversible" gravitational decoherence, since we have excluded that possibility by the assumption of unitarity.

answered Aug 28, 2012 by Jacques Pienaar (40 points) [ no revision ]
Thanks, Jacques! I agree that I was "getting ahead of myself," in the sense that, in retrospect, there were strictly more basic issues that I was already confused about. I also agree that my state (1) might not be well-defined in QG---yes, I've read Penrose about this, and his remarks were very much part of the motivation for this question! Thus, when I wrote equation (1), I really meant the following: "suppose we performed an experiment involving a beamsplitter, a large mass, etc., that, within the framework of conventional QM, would be expected to lead to the state (1)..."

– SE-user Scott Aaronson
+ 4 like - 0 dislike

Gambini and Pullin have developed what they call the "Montevideo interpretation" of quantum theory in a series of papers; see e.g. arxiv.org/abs/0903.2438. While their papers may not answer the exact question Scott asked, they do address the underlying question of how gravitation affects decoherence (and thus the interpretation of quantum theory).

answered Aug 28, 2012 by wolfgang (60 points) [ no revision ]
Thanks, Wolfgang! Now that I look, I actually saw that paper a while ago, and it might have subconsciously influenced my asking of this very physics.SE question... :-)

– SE-user Scott Aaronson
+ 3 like - 0 dislike

There is no decoherence from the near-field static gravitational field by itself; the static field is just superposed coherently along with the box's mass distribution. The decoherence only comes when some quantum particle interacts with the gravitational field and is deflected by a different amount for the two different fields, so that the different positions of the mass lead to different deflections of the particle. Then the two deflection states are entangled with the two different position states, and you lose coherence between the two.

The same thing happens when you have a particle with an electrostatic field. The near field is superposed along with the particle when you superpose two position states, so you get a superposition of fields with two different centers. This superposition is not decohered, even though the field potentially extends arbitrarily far out. It becomes decohered when you shoot a particle through the electrostatic field which deflects by a different amount depending on which field is which, then the position superposition turns into a deflection superposition, and the deflection reduces the wavefunction.

answered Aug 27, 2012 by Ron Maimon (7,740 points) [ no revision ]
Thanks! But if the gravitational field can't decohere anything "on its own", why do people go on about it containing its own vast number of degrees of freedom in QG, which need to be counted to get anywhere close to saturating the holographic bound?

– SE-user Scott Aaronson
@ScottAaronson: Are you talking about the gravitational field, or the entropy of the cosmological horizon? I don't understand the comment. A static field can't decohere anything, it is just sourced by the superposed thing so it ends up in a superposition. The degrees of freedom of a black hole or cosmological horizon are irrelevant.

– SE-user Ron Maimon
Look, IANAP, but there's something strange about the view that fields are "just" sourced by particles and can never decohere anything on their own. For example, suppose the field disturbance is already out to Alpha Centauri, and only then do I move the objects back to their original state. Without violating causality, how do the objects "know" whether there were any particles in Alpha Centauri whose interaction with the field should have decohered them? Doesn't it at least take time to "uncompute" the field propagation? (And aren't the fields the basic DoFs in QFT anyway?)

– SE-user Scott Aaronson
I freely confess that the alternative, that fields can decohere stuff all by themselves, doesn't make sense either, since it would suggest that interference experiments with (say) electrons ought to be impossible: after two electron states have generated different EM fields, no matter how much time elapses, the "correction" to the field from bringing the electron states back together can never propagate fast enough to get rid of the outermost shell of field disturbance, and thereby reverse the decoherence. Hence my confusion about the entire subject of decoherence and fields.

– SE-user Scott Aaronson
@ScottAaronson: Moving the particle produces gravitons that decohere the particle's position. You don't need to uncompute anything.

– SE-user Ron Maimon
+ 3 like - 0 dislike

Yes, you can get gravity-induced decoherence for a massive body, provided it takes at least two different trajectories and both paths then come back to the same location (otherwise, how can we tell interference has vanished?). But the paths have to differ for at least as long as the decoherence time, which can be very, very long for bodies with low mass. In practice, decoherence from other sources will dominate.

The real problem comes when you have massive matter with many microstates. Gravity can decohere maybe the center-of-mass position and velocity, and maybe some coarse grained energy-momentum distribution, but there are many finer details which aren't decohered by gravity, but are still decohered by other more mundane mechanisms, like collisions with environmental photons and molecules.

answered Aug 28, 2012 by Zeb (30 points) [ no revision ]
+ 3 like - 0 dislike

Here is an extended answer that concludes

Summary   On entropic grounds, gravitational radiative decoherence is irreversible in the same sense as all other forms of radiative decoherence, and in consequence, Nature's quantum state-spaces are effectively low-dimensional and non-flat.


Update B  For further discussion and references, see this answer to the CSTheory.StackExchange question "Physical realization of nonlinear operators for quantum computers."

Update A  This augmented survey/answer provides an entropically naturalized and geometrically universalized survey of the physical ideas that are discussed by Jan Dereziński, Wojciech De Roeck, and Christian Maes in their article Fluctuations of quantum currents and unravelings of master equations (arXiv:cond-mat/0703594v2).  Especially commended is their article's "Section 4: Quantum Trajectories" and the extensive bibliography they provide.

By deliberate intent, this survey/answer relates also to the lively (and ongoing) public debate that is hosted on Gödel's Lost Letter and P=NP, between Aram Harrow and Gil Kalai, regarding the feasibility (or not) of scalable quantum computing.


Naturalized survey of thermodynamics

We begin with a review, encompassing both classical and quantum thermodynamical principles, following the exposition of Zia, Redish, and McKay's highly recommended Making sense of the Legendre transform (AJP, 2009). The fundamental thermodynamical relations are specified as

$$ \Omega(E)=e^{\mathcal{S}(E)}\,, \quad\qquad Z(\beta)=e^{-\mathcal{A}(\beta)}\,,\\[2ex] \frac{\partial\,\mathcal{S}(E)}{\partial\,E} = \beta\,, \quad\qquad \frac{\partial\,\mathcal{A}(\beta)}{\partial\,\beta}= E\,,\\[3ex] \mathcal{S}(E) + \mathcal{A}(\beta) = \beta E\,. $$

In these relations the two conjugate thermodynamic variables

$$ E := \text{total energy}\,, \quad\qquad \beta := \text{inverse temperature}\,, $$

appear as arguments of four fundamental thermodynamic functions

$$ \mathcal{S} := \text{entropy function}\,, \quad\qquad \mathcal{A} := \text{free energy function}\,, \\ {Z} := \text{partition function}\,, \quad\qquad {\Omega} := \text{volume function}\,. $$

Any one of the four thermodynamic potentials $(\mathcal{S},\mathcal{A},Z,\Omega)$ determines the other three via elementary logarithms, exponentials, Laplace Transforms, and Legendre transforms, and moreover, any of the four potentials can be regarded as a function of either of the two conjugate variables. 
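These interrelations can be checked numerically. Below is a minimal sketch (assuming a hypothetical two-level system with energies $0$ and $\epsilon$, in units where $k_B=1$): starting from $Z(\beta)=1+e^{-\beta\epsilon}$, the other three potentials follow by elementary manipulations, and the conjugate relations $\partial\mathcal{S}/\partial E = \beta$ and $\mathcal{S}+\mathcal{A}=\beta E$ can be verified directly.

```python
import math

# Hypothetical two-level system with energies 0 and eps (units k_B = 1):
#   Z(beta) = 1 + exp(-beta*eps),   A(beta) = -ln Z,
#   E = dA/dbeta,                   S = beta*E - A   (so Omega = e^S),
# and dS/dE = beta recovers the conjugate relation.

eps = 1.0

def Z(beta):
    return 1.0 + math.exp(-beta * eps)

def A(beta):                  # free energy function, A = -ln Z
    return -math.log(Z(beta))

def E(beta):                  # mean energy, E = dA/dbeta (closed form)
    return eps / (math.exp(beta * eps) + 1.0)

def S(beta):                  # entropy via the Legendre relation S + A = beta*E
    return beta * E(beta) + math.log(Z(beta))

# Numerical check of dS/dE = beta along the curve parametrized by beta:
b, db = 1.0, 1e-5
slope = (S(b + db) - S(b - db)) / (E(b + db) - E(b - db))
print(slope)   # ≈ 1.0 (= beta)
```

The same finite-difference check works at any $\beta$, since $d\mathcal{S} = \beta\,dE$ holds identically along the curve.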

Aside  The preceding relations assume that only one quantity is globally conserved and locally transported, namely the energy $E$.  When more than one quantity is conserved and transported — charge, mass, chemical species, and magnetic moments are typical examples — then the above relations generalize naturally to a vector space of conserved quantities and a dual vector space of thermodynamically conjugate potentials. None of the following arguments are fundamentally altered by this multivariate thermodynamical extension.

Naturalized survey of Hamiltonian dynamics

To make progress toward computing concrete thermodynamic potential functions, we must specify a Hamiltonian dynamical system.  In the notation of John Lee's Introduction to Smooth Manifolds we specify the Hamiltonian triad $(\mathcal{M},H,\omega)$ in which

$$ \begin{array}{rl} \mathcal{M}\ \ :=&\text{state-space manifold}\,,\\ H\,\colon \mathcal{M}\to\mathbb{R}\ \ :=&\text{Hamiltonian function on $\mathcal{M}$}\,,\\ \omega\,\llcorner\,\colon T\mathcal{M}\to T^*\mathcal{M}\ \ :=& \text{symplectic structure on $\mathcal{M}$}\,. \end{array}\hspace{1em} $$

The dynamical flow generator $X\colon \mathcal{M}\to T\mathcal{M}$ is given by Hamilton's equation

$$\omega\,\llcorner\,X = dH\,.$$
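As a concrete illustration (a sketch assuming the standard harmonic oscillator $H=(q^2+p^2)/2$ with $\omega = dq\wedge dp$, which is not an example in the original answer): Hamilton's equation gives the flow generator $X=(\partial H/\partial p,\,-\partial H/\partial q)=(p,-q)$, and integrating this flow with a symplectic-Euler step keeps $H$ close to its initial value, as a Hamiltonian trajectory should.

```python
# Symplectic-Euler integration of the flow X = (p, -q) generated by
# Hamilton's equation omega(X, .) = dH for H = (q^2 + p^2)/2.
# The energy stays within O(dt) of its initial value for all times.

def flow(q, p, dt, steps):
    for _ in range(steps):
        p -= dt * q          # p_{n+1} = p_n - dt * (dH/dq)
        q += dt * p          # q_{n+1} = q_n + dt * (dH/dp), at the updated p
    return q, p

H = lambda q, p: 0.5 * (q * q + p * p)

q0, p0 = 1.0, 0.0
q1, p1 = flow(q0, p0, dt=1e-3, steps=10_000)   # integrate to t = 10
print(H(q0, p0), H(q1, p1))                    # energies agree to ~O(dt)
```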

From the standard (and geometrically natural) ergodic hypothesis — that thermodynamic ensembles of Hamiltonian trajectories fill state-spaces uniformly, and that time averages of individual trajectories equal ensemble averages at fixed times — we have ${\Omega}$ given naturally as a level set volume

$$ \text{(1a)}\qquad\qquad\quad\quad \Omega(E) = \int_\mathcal{M} \star\,\delta\big(E-H(\mathcal{M})\big)\,, \qquad\qquad\qquad\qquad\qquad $$

where "$\star$" is the Hodge star operator that is associated to the natural volume form $V$ on $\mathcal{M}$ that is given as the maximal exterior power $V=\wedge^{(\text{dim}\,\mathcal{M})/2}(\omega)$.  This expression for $\Omega(E)$ is the geometrically naturalized presentation of Zia, Redish, and McKay's equation (20).

Taking a Laplace transform of (1a) we obtain an equivalent (and classically familiar) expression for the partition function $Z(\beta)$

$$ \text{(1b)}\qquad\qquad\qquad Z(\beta) = \int_\mathcal{M} \star\exp\big({-}\beta\,H(\mathcal{M})\big)\,. \qquad\qquad\qquad\qquad $$

The preceding applies to Hamiltonian systems in general and thus quantum dynamical systems in particular.  Yet in quantum textbooks the volume/partition functions (1ab) do not commonly appear, for two reasons.  The first reason is that John von Neumann derived in 1930 — before the ideas of geometric dynamics were broadly extant — a purely algebraic partition function that, on flat state-spaces, is easier to evaluate than the geometrically natural (1a) or (1b). Von Neumann's partition function is $$ \text{(2)}\qquad Z(\beta) = \text{trace}\,\exp{-\beta\,\mathsf{H_{op}}} \quad\text{where}\quad [\mathsf{H_{op}}]_{\alpha\gamma} = \partial_{\,\bar\psi_\alpha}\partial_{\,\psi_\gamma} H(\mathcal{M})\,. \qquad\qquad $$ Here the $\boldsymbol{\psi}$ are the usual complete set of (complex) orthonormal coordinate functions on the (flat, Kählerian) Hilbert state-space $\mathcal{M}$.  Here $H(\mathcal{M})$ is real and the functional form of $H(\mathcal{M})$ is restricted to be bilinear in $\boldsymbol{\bar\psi},\boldsymbol{\psi}$; therefore the matrix $[\mathsf{H_{op}}]$ is Hermitian and uniform on the state-space manifold $\mathcal{M}$.  We appreciate that $Z(\beta)$ as defined locally in (2) is uniform globally iff $\mathcal{M}$ is geometrically flat; thus von Neumann's partition function does not naturally extend to non-flat complex dynamical manifolds.
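A minimal sketch of (2) (assuming only the standard trace formula, with $\mathsf{H_{op}}$ a small Hermitian matrix): for non-interacting subsystems the eigenvalues of $\mathsf{H_{op}}$ add under Kronecker sums, so $\ln Z$ is additive and the heat capacity per qubit is intensive.

```python
import numpy as np

# Von Neumann's algebraic partition function (2), Z = tr exp(-beta H_op),
# for one qubit and for two non-interacting qubits.  Because
# H_2 = H_1 (x) I + I (x) H_1 has eigenvalues that add, ln Z is additive:
# Z_2(beta) = Z_1(beta)^2, hence extensive thermodynamics.

def Z_trace(H, beta):
    evals = np.linalg.eigvalsh(H)        # H is Hermitian
    return np.sum(np.exp(-beta * evals))

H1 = np.diag([0.0, 1.0])                 # one qubit, energies 0 and 1
I2 = np.eye(2)
H2 = np.kron(H1, I2) + np.kron(I2, H1)   # two non-interacting qubits

beta = 1.0
print(Z_trace(H1, beta) ** 2, Z_trace(H2, beta))   # equal: Z_2 = Z_1^2
```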

We naively expect (or hope) that the geometrically natural thermodynamic volume/partition functions (1ab) are thermodynamically consistent with von Neumann's elegant algebraic partition function (2), yet — surprisingly and dismayingly — they are not. Surprisingly, because it is not immediately evident why the geometric partition function (1b) should differ from von Neumann's partition function (2). Dismayingly, because the volume/partition functions (1ab) pull back naturally to low-dimension non-flat state-spaces that are attractive venues for quantum systems engineering, and yet it is von Neumann's partition function (2) that accords with experiment.

We would like to enjoy the best of both worlds: the geometric naturality of the ergodic expressions (1ab) and the algebraic naturality of von Neumann's entropic expression (2). The objective of restoring and respecting the mutual consistency of (1ab) and (2) leads us to the main point of this answer, which we now present.

The main points:  sustaining thermodynamical consistency

Assertion I  For (linear) quantum dynamics on (flat) Hilbert spaces, the volume function $\Omega(E)$ and partition function $Z(\beta)$ from (1ab) are thermodynamically inconsistent with the partition function $Z(\beta)$ from (2).

Here by "inconsistent" is meant not "subtly inconsistent" but "grossly inconsistent".  As a canonical example, the reader is encouraged to compute the heat capacity of an ensemble of weakly interacting qubits by both methods, and to verify that equations (1ab) predict a heat capacity for an $n$-qubit system that is superlinear in $n$. To say it another way, for strictly unitary dynamics, (1ab) predict heat capacities that are non-extensive.
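The discrepancy is visible already for a single qubit. Here is a minimal numerical sketch (assuming the normalized Haar probability measure on the qubit state-space in place of the unnormalized symplectic volume): for Haar-random $|\psi\rangle \in \mathbb{C}^2$ and $\mathsf{H_{op}} = \mathrm{diag}(0,1)$, the excited-state population $p = |\langle 1|\psi\rangle|^2$ is uniform on $[0,1]$, so the geometric partition function (1b) evaluates to $(1-e^{-\beta})/\beta$, whereas von Neumann's (2) gives $1+e^{-\beta}$.

```python
import numpy as np

# Geometric partition function (1b) vs. von Neumann's trace formula (2)
# for a single qubit with H = diag(0, 1).  For Haar-random |psi> in C^2,
# p = |<1|psi>|^2 is uniform on [0, 1], so
#   Z_geo(beta) = E[ exp(-beta p) ] = (1 - exp(-beta)) / beta,
# while
#   Z_vN(beta)  = tr exp(-beta H)  = 1 + exp(-beta).
# The two disagree, illustrating Assertion I.

rng = np.random.default_rng(0)
beta = 1.0

# Haar-random pure qubit states: normalized complex Gaussian 2-vectors.
psi = rng.normal(size=(200_000, 2)) + 1j * rng.normal(size=(200_000, 2))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)
p = np.abs(psi[:, 1]) ** 2                 # <psi| H |psi> for H = diag(0, 1)

Z_geo_mc = np.mean(np.exp(-beta * p))      # Monte Carlo estimate of (1b)
Z_geo_exact = (1 - np.exp(-beta)) / beta   # closed form, ~0.632
Z_vN = 1 + np.exp(-beta)                   # trace formula (2), ~1.368
print(Z_geo_mc, Z_geo_exact, Z_vN)
```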

So the second — and most important — reason that the volume/partition functions (1ab) are not commonly given in quantum mechanical textbooks is that strictly unitary evolution on strictly flat quantum state-spaces yields non-intensive predictions for thermodynamic quantities that experimentally are intensive.

Fortunately, the remedy is simple, and indeed has long been known: retain the geometric thermodynamic functions (1ab) in their natural form, and instead alter the assumption of unitary evolution, in such a fashion as to naturally restore thermodynamic extensivity.

Assertion II  Lindbladian noise of sufficient magnitude to spatially localize thermodynamic potentials, when unraveled as non-Hamiltonian (stochastic) quantum trajectories, restores the thermodynamical consistency of the volume/partition functions $(\Omega(E),Z(\beta))$ from (1ab) with the partition function $Z(\beta)$ from (2).

Verifying Assertion II is readily (but tediously) accomplished by the Onsager-type methods that are disclosed in two much-cited articles: Hendrik Casimir's On Onsager's Principle of Microscopic Reversibility (RMP 1945) and Herbert Callen's The Application of Onsager's Reciprocal Relations to Thermoelectric, Thermomagnetic, and Galvanomagnetic Effects (PR, 1948).  A readable textbook (among many) that covers this material is Charles Kittel's Elementary Statistical Physics (1958).

To help in translating Onsager theory into the natural language of geometric dynamics, a canonical textbook is John Lee's Introduction to Smooth Manifolds (2002), which provides the mathematical toolset to appreciate the research objectives articulated in (for example) Matthias Blau's on-line lecture notes Symplectic Geometry and Geometric Quantization (1992).

Unsurprisingly, in light of modern findings in quantum information theory, the sole modification that naturality and universality require of Onsager's theory is this: the fluctuations that are the basis of Onsager's relations must be derived naturally from unravelled Lindblad processes, by the natural association of each Lindbladian generator to an observation-and-control process.

We note that it is neither mathematically natural, nor computationally unambiguous, nor physically correct, to compute Onsager fluctuations by non-Lindbladian methods. For example, wrong answers are obtained when we specify Onsager fluctuations as operator expectation fluctuations, because this procedure does not account for the localizing effects of Lindbladian dynamics.

Concretely, the fluctuating quantities that enter in the Onsager formulation are given as the data-streams that are naturally associated to Lindbladian observation processes … observation processes that are properly accounted in the overall system dynamics, in accord with the teaching of quantum information theory. Thereby Onsager's classical thermodynamical theory of global conservation and local transport processes straightforwardly naturalizes and universalizes — via the mathematical tool-set that quantum information theory provides — as a dynamical theory of the observation of natural processes.

Physical summary  Consistency of the geometrically natural thermodynamic functions (1ab) with the algebraically natural thermodynamic function (2) is restored because the non-unitary stochastic flow associated to unraveled Lindbladian noise reduces the effective dimensionality of the quantum state-space manifold, and also convolutes the quantum state-space geometry, in such a fashion as to naturally reconcile geometric descriptions of thermodynamics (1ab) with von Neumann-style algebraic descriptions of thermodynamics (and information theory) on Hilbert state-spaces (2).

Assertion III  Thermodynamic consistency requires, first, that quantum dynamical flows be non-unitary and, second, that the resulting trajectories be restricted to non-flat state-spaces of polynomial dimensionality.

We thus appreciate the broad principle that quantum physics can make sensible predictions regarding physical quantities that are globally conserved and locally transported only by specifying non-unitary dynamical flows on non-flat quantum state-spaces.

Duality of classical physics versus quantum physics  The above teaching regards "classical" and "quantum" as well-posed and mutually consistent limiting cases of a broad class of naturalized and universalized Hamiltonian/Kählerian/Lindbladian dynamical frameworks.  For practical purposes the most interesting dynamical systems are intermediate between fully classical and fully quantum, and the thrust of the preceding analysis is that the thermodynamical properties of these systems are naturally and universally defined, calculable, and observable.

Duality of fundamental physics versus applied physics  The fundamental physics challenge of constructing a thermodynamically and informatically consistent description of non-unitary quantum dynamics on non-flat complex state-spaces — a challenge that is widely appreciated as difficult and perhaps even impossible — is appreciated as dual to the practical engineering challenge of efficiently simulating noisy quantum system dynamics … a challenge that is widely appreciated as feasible.

Remarks upon gravitational decoherence  The above analysis establishes that decoherence associated to gravitational coupling — and more broadly, to the superradiant dynamics associated to every bosonic field of the vacuum — supposing this decoherence to be "irreversible" (in Scott's phrase), would have the following beneficent implications:

  • the naturality and universality of thermodynamics is thereby preserved, and
  • quantum trajectories are effectively restricted to low-dimension non-flat state-spaces, and
  • the efficient numerical simulation of generic quantum systems is thus permitted.

From a fundamental physics point-of-view, the converse hypothesis is attractive:

Kählerian hypothesis  Nature's quantum state-spaces are generically low-dimension and non-flat in consequence of irreversible decoherence mechanisms that are generically associated to bosonic vacuum excitations.

Conclusions

As with the ergodic hypothesis, so with the Kählerian hypothesis, in the sense that regardless of whether the Kählerian hypothesis is fundamentally true or not — and regardless of whether gravitational radiation accounts for it or not — for practical quantum systems engineering purposes experience teaches us that the Kählerian hypothesis is effectively true.

The teaching that the Kählerian hypothesis is effectively true is good news for a broad class of 21st century enterprises that seek to press against quantum limits to speed, sensitivity, power, computational efficiency, and channel capacity … and it is very good news especially for the young mathematicians, scientists, engineers, and entrepreneurs who hope to participate in creating these enterprises.


Acknowledgements  This answer benefited greatly from enjoyable conversations with Rico Picone, Sol Davis, Doug and Chris Mounce, Joe Garbini, Steve Flammia, and especially Aram Harrow; any remaining errors and infelicities are mine alone. The answer is also very largely informed by the ongoing debate of Aram Harrow with Gil Kalai, regarding the feasibility (or not) of scalable quantum computing, that has been hosted on the web page Gödel's Lost Letter and P=NP, regarding which appreciation and thanks are extended.

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user John Sidles
answered Aug 28, 2012 by John Sidles (485 points) [ no revision ]
Sorry John, but I'm stuck on a very basic point. Suppose it were true that one could use thermodynamic arguments to show that radiative decoherence was "irreversible." Then why wouldn't the same arguments work in cases like Zeilinger et al.'s buckyball experiment, where we know that "decoherence" CAN be reversed? You might answer: my arguments only work for systems for which thermodynamics is relevant. But that brings us to the crux: for which systems IS thermodynamics relevant? Thermodynamics is an effective theory, and invoking it here seems to presuppose the answer you want.

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user Scott Aaronson
(To illustrate, suppose that Zeilinger et al succeed in recohering the two paths of a buckyball. Then we can conclude, after the fact, that the experiment did NOT increase the buckyball's entropy, so thermodynamics wasn't the right language to describe what was happening. But this seems to reduce your argument to a tautology: decoherence is irreversible whenever it can't be reversed!)

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user Scott Aaronson
Scott, I think we may even agree, though we prioritize our main points differently. For me, the main point is that any quantum theory that provides thermodynamically consistent descriptions of localized transport of globally conserved quantities must entail non-unitary flow on a non-flat low-dim Kahlerian state-space. Your point too is valid --- even equivalent! -- namely Zeilinger-type buckyball experiments succeed iff transport of the conserved quantity (mass) is not spatially localizable. And this accords with our everyday experience that QM is locally Hilbert, globally not, eh?

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user John Sidles
LOL ... maybe I'd better say too, that I told Aram Harrow yesterday that I'd let these ideas lie fallow for a few days ... on the grounds that some tricky practical considerations regarding the efficient simulation of quantum transport are associated to them! And so, there is a pretty considerable chance that in the next week or so, some of the above points will be reconsidered (by me) and extended or rewritten. Therefore Scott, please consider your question to be answered in the same spirit it was asked. That is why both your question and your comments (above) are greatly appreciated. :)

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user John Sidles
I'm glad you appreciate my comments! But I'm stuck on the fact that your arguments seem to give no concrete guidance about which sorts of systems can be placed in coherent superposition and which ones can't. For example: can a virus be placed in superposition of two position states 1 meter apart? How about a 1kg brick? A human brain? Saying QM is "locally Hilbert, globally not" doesn't help me too much unless you can say where the boundary between "local" and "global" resides, and why no technology will ever be able to cross that boundary!

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user Scott Aaronson
Scott, your comment is like "Your arguments seem to give no concrete guidance about the magnitude of the radius of the universe." To answer concretely, I'd have to specify (for example) the rank-indexed stratification of the universal state-space. In this regard, the lowest rank that I'm comfortable asserting is ... uhhh ... maybe 137? Seriously, experiments to bound the rank and/or curvature of quantum state-spaces surely will prove comparably challenging to general relativity experiments. Well heck, that's GOOD!

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user John Sidles
Still more concretely, all quantum experiments/observations accomplished to date, and all accurate quantum chemistry/physics simulations (known to me) can be accurately unravelled (AFAICT) on state-spaces whose stratification rank is $r \sim \mathcal{O}(nd(2j+1))$, where $n$ is the number of particles, $d=3$ is the spatial dimension, and $j$ is the spin. Needless to say, there are considerable nuances associated to this poly-dimension state-space scaling law ... presumably Nature "knows" the natural basis for minimal-rank/high-fidelity state-space stratifications, but we have to guess at it.

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user John Sidles
+ 1 like - 0 dislike

In order for gravity to decohere a quantum system, that system has to emit at least one graviton. Say the graviton is emitted in a certain direction at a certain time, up to the resolution limits set by the spread of the graviton wavepacket. Now suppose there is another quantum system lying in the same direction, which could also have emitted a graviton in that direction, at a time later by the light-travel time (gravitons travel at the speed of light) from the first system to the second. The point is that detecting a graviton moving in that direction at some time still doesn't enable us to distinguish which of the two quantum systems emitted it. It could have been the first, since matter (i.e., the second system) interacts so weakly with gravitons that it is effectively transparent to them. It could also have been the second. The resolution is poor.

In general, the amount of information carried off by outgoing radiation — which can include gravitons, photons, or more massive matter — only scales as the area of the enclosing boundary, while the number of events inside scales as the volume. This limits the "decoherence resolution" achievable by outgoing signals far away, assuming matter is distributed throughout the interior volume. If there were only one quantum system of size L in the middle, surrounded by vacuum all the way around it, this ambiguity problem wouldn't exist; but our universe isn't like that, at least not in FRW models.

As noted by other posters, in order to demonstrate the suppression of interference, some matter has to take a superposition of at least two different paths, but then merge back to the same location after a time period $T$. Any decohering emitted graviton has to have a frequency of at least $1/T$. This means we can disregard soft gravitons with frequencies much less than $1/T$. All the other answers which mention soft gravitons are missing the point.

Also, as noted by others, decoherence by other sources dominate over gravitational decoherence by far because gravity is the weakest force at distance scales relevant to us.

This post imported from StackExchange Physics at 2014-07-24 15:46 (UCT), posted by SE-user Panica
answered Aug 31, 2012 by Panica (10 points) [ no revision ]

user contributions licensed under cc by-sa 3.0 with attribution required
