Measurement in Classical Mechanics

+ 5 like - 0 dislike
4616 views

Seeing difficulties in finding an equation describing the state vector collapse in QM, I wonder how measurement or observation is described in Classical Mechanics or Classical Electrodynamics. Is there a special equation for that?

asked Jun 17, 2014 in Theoretical Physics by Vladimir Kalitvianski (102 points) [ no revision ]

There is no equation needed to describe a "state vector collapse" in QM as there is no state vector collapse needed to do QM ...

@Dilaton: Then why is S. Weinberg trying to write something? He feels that this part of QM is missing. And I see that this part is missing in CM too.

Concerning this specific point, S. Weinberg is wrong, as are all people who think there is something wrong with, or missing from, QM. As Lumo has often explained on TRF, they are wasting their time and effort, which is an unfortunate trend in certain parts of physics these days ...

@Dilaton: Lumo is a good physicist, but he is nevertheless wrong on many points. We have to ask experimentalists about their way of "observing" things in CM, and we have to understand what is implicitly implemented in our theories.

Yes, there have not been any contradictions observed between QM (as it is!) and the corresponding measurements it predicts, and there will not be. Experimentalists know this; it is rather other people who think QM has to be improved, interpreted, complemented, or whatever... instead of just shutting up and calculating to see that QM perfectly agrees with experiment.

@Dilaton: No experimentalist can say how the measurement device chooses this or that value from all the possibilities.

Measurement is described in a way similar to quantum mechanics only within classical statistical mechanics. If you look at the 19th century literature, you will find analogous (but much simpler to resolve) debates about the meaning of probability and determinism, coupled with our lack of knowledge of the microstate the system happens to be in.

5 Answers

+ 5 like - 0 dislike

Measurement appears as a process in classical statistical mechanics. In this formalism, the basic objects are not phase-space locations, but the probability distribution $\rho$ on phase space. The fundamental evolution equation (the Liouville equation) states that

$\partial_t \rho + H_p \partial_x \rho - H_x \partial_p \rho = 0 $

schematically; here $H_p = \partial H/\partial p$ and $H_x = \partial H/\partial x$, and when there is more than one degree of freedom the second and third terms are summed over the coordinates, but this is obvious--- the equation just says that probability is an abstract fluid, and the fluid of probability on phase space flows along the Hamiltonian trajectories, without any diffusion.

The Liouville theorem tells you the flow is incompressible, so the volume of any region of phase space is preserved by the flow; restated in probability language, the entropy of any probability distribution is constant in time in any Hamiltonian system.
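As a concrete check (not part of the original answer), here is a minimal Python sketch assuming the simplest possible Hamiltonian $H = (p^2+x^2)/2$, whose flow rotates phase space rigidly; the area of a small phase-space patch is unchanged by the evolution:

```python
import numpy as np

# Harmonic oscillator with m = k = 1: H = (p^2 + x^2)/2.
# Its Hamiltonian flow rotates phase space rigidly, so points can be
# evolved in closed form, and Liouville's theorem (incompressible flow)
# can be checked by comparing the area of a small patch before and after.

def evolve(x, p, t):
    """Exact flow of H = (p^2 + x^2)/2 for time t (a rotation)."""
    return x * np.cos(t) + p * np.sin(t), p * np.cos(t) - x * np.sin(t)

def triangle_area(pts):
    """Area of the triangle spanned by three (x, p) points."""
    (x1, p1), (x2, p2), (x3, p3) = pts
    return 0.5 * abs((x2 - x1) * (p3 - p1) - (x3 - x1) * (p2 - p1))

patch = [(1.0, 0.0), (1.1, 0.0), (1.0, 0.1)]   # small patch of phase space
print("area before:", triangle_area(patch))     # 0.005
print("area after: ", triangle_area([evolve(x, p, 2.7) for x, p in patch]))  # 0.005
```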

The probability distribution in classical mechanics is the closest analog to the state vector, and its time evolution equation is analogous to the Schrodinger equation. It is clear that it must be exactly linear, because of the hidden-information property of probability. I will state this as the coin principle.

Coin principle: If you have 50% probability of one probability distribution, and 50% probability of another probability distribution, i.e. if $\rho = \rho_1/2 +\rho_2/2$, then you can imagine flipping a coin at the beginning of time, and choosing $\rho_1$ or $\rho_2$ depending on the outcome, and the resulting distribution, if you don't know the outcome of the coin flip, is the same as evolving $\rho$.

The coin principle explains what linearity means--- it means that you can imagine learning a bit of information at the beginning of time and then time-evolving, or learning the same bit at the end. The process of gaining or losing information is completely independent of the dynamical laws. This is what linearity means: the probability is encoding hidden information, and the principles of probability formalize the idea that probability describes knowledge of hidden information.
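The coin principle is just the linearity of probability evolution, and it can be checked in any stochastic dynamics. A minimal sketch (the 3-state transition matrix is an arbitrary example of ours, not from the answer):

```python
import numpy as np

# Coin principle: mixing two distributions and then evolving gives the
# same result as evolving each one and then mixing (linearity).
T = np.array([[0.9, 0.3, 0.0],
              [0.1, 0.5, 0.2],
              [0.0, 0.2, 0.8]])      # column-stochastic transition matrix

rho1 = np.array([1.0, 0.0, 0.0])     # distribution chosen if the coin lands heads
rho2 = np.array([0.0, 0.0, 1.0])     # distribution chosen if it lands tails

U = np.linalg.matrix_power(T, 10)    # evolve for 10 steps

mix_then_evolve = U @ (0.5 * rho1 + 0.5 * rho2)
evolve_then_mix = 0.5 * (U @ rho1) + 0.5 * (U @ rho2)
print(np.allclose(mix_then_evolve, evolve_then_mix))   # True
```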

There are two entropy non-conserving, unphysical processes which can happen to the abstract probability distribution: coarse graining and measurement. Both of these lie outside the dynamical laws of motion in the naive formulation.

Coarse graining happens when the initial distribution gets spaghettified into densely covering a broader volume with lower density, and at some point, it is just sensible to switch over to describing the system with a higher entropy probability distribution, since the only loss of information is in practically uncomputable fine-details of the positions and momenta. In typical systems described by real numbers, if there is exponential separation of trajectories along some directions (and necessarily exponential contraction along other directions), then the coarse graining is natural when you place some practical cut-off for the precision of your positions and momenta.
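To make the spaghettification concrete (a toy model of ours, not from the answer), one can iterate the baker's map, an area-preserving chaotic map on the unit square standing in for Hamiltonian stretching and folding, and watch the entropy of the coarse-grained distribution grow even though the fine-grained point cloud never loses information:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 0.05, size=(100_000, 2))   # tiny initial phase-space cell

def bakers_map(xy):
    """Area-preserving stretch-and-fold map on the unit square."""
    x, p = xy[:, 0], xy[:, 1]
    return np.column_stack((2.0 * x % 1.0, (p + np.floor(2.0 * x)) / 2.0))

def coarse_entropy(xy, bins=16):
    """Shannon entropy of the distribution binned on a coarse grid."""
    hist, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins,
                                range=[[0, 1], [0, 1]])
    prob = hist.ravel() / hist.sum()
    prob = prob[prob > 0]
    return float(-np.sum(prob * np.log(prob)))

for step in range(8):
    print(step, round(coarse_entropy(pts), 3))    # grows, then saturates near log(256)
    pts = bakers_map(pts)
```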

The coarse-graining process is identified as the source of entropy increase in statistical mechanics. Entropy increase is an experimentally observed effect, but it superficially contradicts the exact conservation of fine-grained entropy under Hamiltonian flow. In the 19th century, this was the source of philosophical and interpretational squabbles about statistical mechanics and the law of entropy increase.

The statistical measurement process is that you, as an observer, learn something about the phase-space location of the system. In this case, you use your knowledge to reduce the probability distribution to a smaller volume, reducing the entropy of the system. This second process, associated with observation, also has no direct analog in the equations of motion; it is instead included in the abstract considerations of the meaning of probability.

If you model the observer as an external system doing computation according to physical law, with certain internal variables which encode the information of the observer, and interacting with the given system in a Hamiltonian way, the overall conservation of entropy means that in order to gain a bit of information about the system, the observer needs to produce log 2 of entropy somewhere else, usually by dumping kT log 2 of heat into a thermal environment at temperature T. The collapse of the observed system is then a two-step process: first you classically entangle (i.e. correlate) the bit-value part of the state with the observer; then the observer stores this bit value, reducing the entropy of the system (by learning about it) and increasing the entropy of the environment commensurately.
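The numbers involved are tiny but nonzero; here is a one-line check of the kT log 2 bookkeeping at room temperature (our arithmetic, using the standard value of Boltzmann's constant):

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # room temperature, K

# Gaining one bit about the system lowers its entropy by k_B*log(2),
# so at least k_B*T*log(2) of heat must be dumped into the environment
# for the total entropy not to decrease.
print(f"{k_B * T * math.log(2):.3e} J per bit")   # ~2.87e-21 J
```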

The ability to reinterpret the "collapse" of the probability distribution of the system as this kind of physical process of correlation is widely agreed to remove the philosophical problems in the interpretation of statistical mechanics, if there ever were such problems. The process of an observer learning about the probability distribution of the world can be modelled in a Bayesian probabilistic fashion, and in this way you get a sensible interpretation of classical statistical mechanics.

If you try to do the same thing with quantum mechanics, you run into the issue that the reduction of information is only philosophically trouble-free when it obeys the coin principle, i.e. when it is a probability. Quantum amplitudes are formally different from probabilities: they can be positive or negative, so they are not naively interpretable as ignorance of hidden variables. Amplitudes don't obey the coin-flip principle--- the mixed state produced by a coin flip is different from a coherent superposition in quantum mechanics. This means that you can't interpret a small quantum system as a probability evolution over the same state space; it is just something different from that.
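A minimal numpy illustration of this difference (the qubit states and Pauli operators are standard; the specific comparison is ours, not from the answer): an equal superposition and a 50/50 classical mixture give identical statistics along the preparation axis, but the superposition carries detectable interference terms:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
plus = (up + down) / np.sqrt(2)                        # coherent superposition

rho_pure = np.outer(plus, plus.conj())                 # |+><+|
rho_mix = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(down, down.conj())

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)   # "which branch" observable
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)    # interference-sensitive observable

for name, rho in [("superposition", rho_pure), ("coin-flip mixture", rho_mix)]:
    print(name, "<Z> =", np.trace(rho @ sigma_z).real,
                " <X> =", np.trace(rho @ sigma_x).real)
# Both give <Z> = 0, but <X> = 1 vs 0: the off-diagonal amplitudes are physical.
```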

But when you look at a large quantum system, like people, the superpositions turn into standard probabilities, meaning that once you make a measurement and learn something, the result does obey the coin-flip principle. The result of measuring the spin of an electron in an equal superposition of spin states is, after the measurement, exactly indistinguishable from flipping a coin and considering an electron in a definite spin state aligned with the measurement axis of the device.

So somehow quantum mechanics turns into classical probability for large systems. This is the measurement problem in this formulation--- how does a quantum formalism include a classical probabilistic one?

There is a minor physics problem here, which is to make sure that the result is consistent, that you won't observe interference effects for macroscopic objects. This is decoherence, and everyone agrees that macroscopic systems don't have measurable coherence effects in any practical way which can be used to refute the statistical interpretation.

A possible resolution is to simply consider that the statistical aspects come from the infinite-system limit, meaning that the reduction is asymptotic. When systems are very large, and the decoherence more and more perfect, you approach classical probability in a limiting way, so you can do probabilistic reductions, understanding that they are exact only in the asymptotic limit of infinite size and infinite time. Because the interference effects are hard to observe even at moderate size, you can apply classical probability in even the tiniest classical realm, and do reductions with squared wavefunction values replaced by probabilities without any philosophical problem.
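A toy model of why the interference becomes unobservable as the system grows (our illustration; the numbers are assumptions): if each of n environment degrees of freedom picks up a slight dependence on the branch, the interference term is multiplied by a product of n per-qubit overlaps and dies exponentially:

```python
import numpy as np

theta = 0.3                         # assumed per-qubit "which-path" kick
overlap = abs(np.cos(theta))        # overlap of the two branch-conditioned
                                    # environment-qubit states (toy value)

for n in [1, 10, 100, 1000]:
    print(f"n = {n:5d}   interference visibility ~ {overlap ** n:.3e}")
# Visibility falls exponentially with the number of environment degrees
# of freedom, which is why macroscopic coherence is never observed.
```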

A slogan for this is to say "our knowledge is defined asymptotically", meaning that the information about our world is inherently an asymptotic object defined in the limit of infinite size quantum systems.

This, in my opinion, is a fully consistent philosophical point of view, and it is either Copenhagen, or Many-Worlds, or Shut Up and Calculate, depending on the philosophical words you associate with it. The only issue for me personally is that it involves an infinite-size limit to get rid of the coherence effects. It is annoying to think that all our information about the world is somehow asymptotic, that it doesn't make sense at any finite size: if we learn some information, like say "the exchange rate for Euros to Dollars is 1:1.1", this reduction of information is not exactly consistent, because it doesn't obey the coin-flip principle exactly, the wavefunction not being interpretable as ignorance of hidden variables. So learning about the exchange rate is not reducing wavefunction information, it is only reducing a classical probability.

To see how unphysical this is: you could (in principle) later have a macroscopic quantum interference effect, where another world with a different exchange rate takes the place of this world, interfering away the current world with an equal negative amplitude contribution. Of course, after this happens, the whole previous world is not there, so we would never know, and it is not clear that it is even meaningful to speak about such enormous interference events. This type of nonsense can't happen in classical probability; there is no interfering away of possibilities with a positive probability.

This is the philosophical puzzle in quantum mechanics--- you are treating a quantity which is not a probability, the wavefunction, as if it were encoding a probability, and the result is only consistent in the strict infinite-system limit. It is disconcerting, and it suggests that there is a real philosophical consistency issue with quantum mechanics.

An alternate resolution is that the reduction does not live at the infinite-system limit, at least not at strictly infinite size. One can insist that the real laws of nature obey the coin-flip principle, so that measurements should be considered as revealing values of hidden variables. If one takes this position, and asserts that the probability description is correct and not just asymptotically emergent, then one is led to consider quantum mechanics as a strange and convoluted way of describing a weird kind of probability distribution when the system is large.

I don't know if this works, but I don't think it's implausible, because you can see many situations where a probabilistic evolution looks approximately reversible, and in this case it is conceivable that the evolution can be a rough poor-man's quantum mechanics. If the number of hidden variables is not obscenely large, such a thing is a real new theory, because it necessarily must fail to reproduce quantum mechanics exactly: it can't reproduce the effectively exponentially large search in quantum factoring. If the number of hidden variables is infinite, it's philosophy again, because Bohm's theory, with its absurdly enormous size, reproduces quantum mechanics exactly, although the method it uses is rather ridiculous on physical grounds.

answered Jun 18, 2014 by Ron Maimon (7,730 points) [ revision history ]
edited Jun 18, 2014 by Ron Maimon

This is an excellent answer. 

+ 3 like - 0 dislike

In fact, the measurement process in CM is implicitly assumed to be continuously present, without any influence on the observed motion.

I would like to explain in detail what I mean by this. Classical mechanics is about the motion of macroscopic bodies. We observe macroscopic bodies with the help of light, i.e. by an exchange of energy in the form of many photons. A macroscopic body has many "internal" degrees of freedom, which are quantized to this or that extent, so what we actually observe are transitions between the internal excited states of the macroscopic body. In fact the body changes all the time, but we sum up the signals from it over short periods and, by this averaging, assign the photon flux to an "independent point", the geometric center. We turn a complicated physical process into a simple mathematical picture of a point moving in space. Experimentally, it is an inclusive picture. The corresponding equations for the average "position" of an unchanging ("elementary", point-like) body become simple and numerically soluble. The very notions of space and time are inclusive too; they are not that fundamental.
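A toy numerical version of this averaging (our illustration, not part of the answer): scattered photon signals from a finite body, summed over short windows, reproduce a clean point trajectory for the "geometric center":

```python
import numpy as np

rng = np.random.default_rng(1)
times = np.linspace(0.0, 1.0, 50)            # observation windows
true_center = 3.0 * times                    # body moving uniformly (assumed)

body_size = 0.5                              # spread of photon emission points
photons_per_window = 1000

measured = np.array([rng.normal(c, body_size, photons_per_window).mean()
                     for c in true_center])

# The scatter of the averaged "position" shrinks like body_size/sqrt(N),
# so the complicated emission process looks like a moving point.
print(np.max(np.abs(measured - true_center)))    # a few times 0.016
```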

The exchanged energy is usually so small with respect to the "kinetic energy" of the body that the observation does not influence the motion. It looks as if the observer might close his eyes and everything would occur in the same way. It is this strong inequality, together with the inclusive character of our data, that makes it possible to attain some "certainty" or "determinism" in CM. We get into the realm of (simple) mathematics (space or space-time geometry) with its nice logic, but we are completely off base about the real physics: observation implies modification of the body; the body is never the same, and it is not what we see.

I hope I am clear here: the "measurement process" is not described by the CM equations, but is encoded in the form of those equations and in the simplified notions of CM. (To be continued.)

answered Apr 14, 2015 by Vladimir Kalitvianski (102 points) [ revision history ]
edited Apr 14, 2015 by Vladimir Kalitvianski

It looks as if the observer might close his eyes and everything would occur in the same way.

The same is the case in quantum mechanics. Nothing depends on the observer. But of course what you read off an instrument depends on how the instrument is coupled to what is measured. Even in classical mechanics, this is treated as a stochastic process if your classical system is chaotic enough. For example, partial polarization is handled this way.

+ 2 like - 0 dislike

Classically, measurable quantities are described by the time evolution of observables. This is why the Heisenberg picture is useful. Exactly the same expectation value is obtained when we leave the state fixed as $|\psi(0)\rangle$ and let the observable evolve in time as $U(t)^\dagger\hat{O}U(t)$ (the Heisenberg picture) as in the Schrodinger picture, where it is the state that evolves in time. The equations of motion obtained in the Heisenberg picture look like their equivalents in Classical Mechanics, so similarities (or differences) between QM and CM are easier to demonstrate. Historically, the Heisenberg picture was developed (by Heisenberg) for his matrix mechanics formulation of quantum theory, developed in parallel to wave-function quantum mechanics.
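A quick numerical check of the equivalence of the two pictures (the two-level Hamiltonian here is a made-up example of ours):

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                       # assumed Hermitian Hamiltonian
O = np.array([[0.0, 1.0],
              [1.0, 0.0]])                        # observable (sigma_x)
psi0 = np.array([1.0, 0.0], dtype=complex)        # initial state
U = expm(-1j * H * 1.3)                           # evolution operator for t = 1.3

schrodinger = (U @ psi0).conj() @ O @ (U @ psi0)         # evolve the state
heisenberg = psi0.conj() @ (U.conj().T @ O @ U) @ psi0   # evolve the observable
print(np.allclose(schrodinger, heisenberg))              # True
```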

answered Jun 18, 2014 by PhotonicBoom (40 points) [ revision history ]
edited Jun 18, 2014 by PhotonicBoom

It is not convincing to me. The operator equations are in fact written for the matrix elements. There is no equation in QM describing the "collapse" of the off-diagonal matrix elements to zero, with the diagonal matrix elements collapsing to a non-zero value for the "observed" outcome and to zero for the rest.

In CM the observables are just implied observables and there is no equation of measurement; it goes without saying, I am afraid. If we do not write an equation of measurement in CM, then why are we trying to write one, so far unsuccessfully, in QM?

We do have equations of measurement in CM: all our equations are evolution equations for measured quantities. In QM, due to the probabilistic nature, it is more useful most of the time to describe the evolution of the state (and then measure the observables).

@PhotonicBoom: Frankly, I did not get it. Is the equation of motion of the Moon an equation of the measurement process?

Yes, you are measuring the evolution of motion (velocity, position, angular momentum, these are all observables) with time.

@PhotonicBoom: Well, if I am measuring the evolution of motion with time, I am an experimentalist. And I want some equations of the measuring process, not the equations of motion themselves. The equations of the measurement process should include the interaction of the Moon with the measuring device. Newton's equations do not describe it; Newton's equations are the same for a yellow Moon and for a blue Moon.

Are you talking about the Lagrangian then?

@PhotonicBoom: No, the Lagrangian (as well as the equations of motion) does not describe the measurement process in CM. In fact, the measurement process in CM is implicitly assumed to be continuously present, without any influence on the observed motion.

+ 2 like - 0 dislike

There is no subtlety, as far as I can see, about measurement in classical mechanics. You measure, for instance, the electric field by using a test charge. In the limit in which the charge goes to zero, you get an undisturbed measurement of the electric and magnetic fields.
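A toy numerical version of this limiting procedure (our model; the back-reaction term is an assumption for illustration): report F/q for ever smaller probe charges and watch the reported value converge to the undisturbed field:

```python
E0 = 2.0                      # undisturbed field, arbitrary units
alpha = 0.7                   # assumed strength of the probe's back-reaction

def measured_field(q):
    force = q * (E0 + alpha * q)   # force on the probe; source slightly disturbed
    return force / q               # what the experimenter reports

for q in [1.0, 0.1, 0.01, 0.001]:
    print(q, measured_field(q))    # approaches E0 = 2.0 as q -> 0
```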

answered Apr 15, 2015 by Prathyush (705 points) [ no revision ]
+ 0 like - 0 dislike

My understanding is that every physical state in classical mechanics is an observable state. Hence, complications involving state collapse do not arise. One can model a measuring device and its interactions with some subsystem, but, if one wants to introduce notions of uncertainty, in the end it comes down to ad hoc assumptions (they need to be based on the observer's experience/intuition) about which states of the measuring device can be distinguished. In most situations, it seems better to simply make those ad hoc assumptions on the subsystem, dispensing with the complication of considering the measuring device itself. Perhaps this parallels the notion of psycho-physical parallelism.

answered Jun 18, 2014 by anonymous [ no revision ]
